In the past few years, data-intensive technologies have come to play a critical role in corporate strategy and decision-making processes, such as customer insight and marketing, recruitment, risk management and predictive analytics. This datafication has several benefits for corporate due diligence, such as improving decision accuracy, enhancing supply chain oversight and management, and enabling the prediction of future impacts and the formulation of appropriate responses.
Data-intensive technologies may also play a role in supporting processes carried out in fulfilment of corporate responsibilities and obligations relating to human rights due diligence. Key among these is human rights impact assessment (HRIA), understood as a process for identifying, comprehending, evaluating, and addressing the adverse effects of business activities on rights-holders (e.g., workers, local communities, consumers).
At the same time, datafication generates substantial challenges in this context that remain largely underexplored. These include, for example, identifying appropriate data for use in assessing the potential adverse effects of business activities on rights-holders (e.g., interference with privacy and/or failure to protect rights-holders from discrimination), selecting methods of data collection and processing, and developing formalised standards for the evaluation of such data. Challenges also arise from the human rights impacts of data-driven technologies themselves: who might be impacted, which rights may be adversely affected, and how can such impacts be identified and assessed?
In grappling with these challenges, consideration must be given to the role, both actual and potential, of law in promoting the socially responsible use of data and data-intensive technologies by corporations. Among other things, it is necessary to examine how human rights law can be used (also in combination with other areas of law, such as data protection law and AI law) to steer corporate behaviour. The EU offers an interesting case study in this regard. In addition to its General Data Protection Regulation, the EU is currently in the process of developing a Corporate Sustainability Due Diligence Directive, which aims to anchor human rights and environmental considerations in companies’ operations and corporate governance, and an AI Act, which aims to strike a balance between empowering corporations to deliver on the potential socio-economic benefits of AI and ensuring the protection of EU citizens’ fundamental rights.
Beyond the context of the EU, human rights law enters the picture in the form of international standards of corporate conduct (e.g., the UN Guiding Principles on Business and Human Rights) and bottom-up initiatives aimed at encouraging their uptake and facilitating their implementation (e.g., the Danish Institute for Human Rights’ HRIA Guidance and Toolbox). Moving forward, it can be argued that a more decisive and concerted legal response would allow corporations to harness the advantages of modern data-driven techniques while providing explicit and well-defined safeguards geared towards compliance with human rights law.
Against this background, this PhD project should explore how law can be used to address the legal, regulatory, and practical challenges that stem from the expanding use of data-intensive technologies in corporate decision-making processes, with an emphasis on processes carried out in fulfilment of corporate responsibilities and obligations relating to human rights due diligence. In developing your research proposal, you may want to consider questions such as:
- Which human rights are likely to be impacted by corporate activities supported by data-intensive technologies? How are human rights impacted (positively and/or negatively) by data-intensive technologies used in corporate decision-making? How can impacts be identified, measured, and assessed? Who are the rights-holders to be protected?
- What kinds of data might be used to assess the relevant impacts as part of corporate due diligence? Who are the likely subjects and holders of this data? What are the likely modes of data collection and processing?
- How are data-intensive technologies used in HRIA? How are data-intensive technologies used in other forms of legally mandated impact assessments beyond HRIA (e.g., environmental impact assessment, data protection impact assessment, equalities impact assessment)? What lessons can be learned from relevant laws and practice? What opportunities and challenges are presented by integrated approaches to impact assessment?
- Does EU law provide an effective framework for ensuring the responsible use of data-intensive technologies in corporate decision-making? What about jurisdictions beyond the EU?
- What is the capacity of the UN Guiding Principles on Business and Human Rights and related tools to support the socially responsible use of data-driven technologies in corporate decision-making? To what extent do, and can, these standards and tools align with other legal frameworks which regulate the use of data-driven technologies?
The successful candidate will work closely with our team to develop their research focus in a way that is tailored to their own skills and interests, but proposals should fit with the broad aims of the project.
This project is hosted within the newly launched Strathclyde Centre for Doctoral Training (CDT) in Human Rights-based Decision Making. The PhD projects affiliated with this CDT should enhance understanding of the complex challenges and opportunities related to human rights-based decision making by a range of actors and institutions across the public, private, and third sectors.
Key words: Law, data, human rights, machine learning, corporate governance, regulation, algorithm, data protection.