Developing a comprehensive approach for algorithmic accountability to protect human rights.
Modern algorithms are replacing human decision-making. They conduct sophisticated predictive analytics and execute complex tasks beyond human capability and speed. They now automate many functions traditionally carried out by humans and have expanded into key areas of decision-making, including sentencing decisions, credit scoring, recruitment, and social security. The use of algorithmic systems to make or support decisions is becoming increasingly central to many areas of public and private life. This can affect all our human rights: civil, cultural, economic, political, and social.
The pace of technological innovation outstrips the formulation, application and enforcement of governance and regulation of algorithms in decision-making. Some commentators have suggested that governance and regulatory mechanisms are anti-innovation, or that it is too late or too difficult to manage this area of technological innovation. This argument exceptionalises new technologies such as algorithmic and artificial intelligence systems, and it is not a valid reason for failing to govern their development and use. Human rights are universally applicable. New technologies can impact human rights like any other sector, and the potential benefits of such systems do not discount the need to ensure that human rights are respected and protected. International human rights law applies to the use of new technologies just as it applies in any other area of life.
The human rights-based approach to algorithmic accountability
States and businesses engaged in any part of the algorithmic life cycle, from the design, development and deployment to the supply of algorithmic systems, should embed a human rights-based approach.
International human rights law provides a means to define and assess harm, and a deeper and fuller means of analysing the overall effect of the use of algorithms. The specific obligations on States and expectations on businesses to prevent violations and protect human rights include prescription of the mechanisms and processes required for implementation. The international human rights law framework can map on to the algorithmic life cycle and offers a holistic approach to accountability.
Existing mechanisms for algorithmic accountability, such as data protection, impact assessments and compliance checks, may have some relevance for protecting human rights and preventing violations. The international human rights law framework complements these mechanisms and contributes to a more comprehensive approach to algorithmic accountability, incorporating robust safeguards and assessing the full scope of impact.
The next steps for our research are to operationalise this framework in practical guidance for States and businesses.