Algorithmic accountability

Developing a comprehensive approach for algorithmic accountability to protect human rights.

Modern algorithms are increasingly replacing human decision-making. Algorithms conduct sophisticated predictive analytics and execute complex tasks at a scale and speed beyond human capability. They are used to automate many functions traditionally carried out by humans and have expanded into key areas of decision-making, including algorithmic assessments in applications such as sentencing decisions, credit scoring, recruitment, and social security. The use of algorithmic systems to make or support decisions is becoming increasingly central to many areas of public and private life. This can affect all of our human rights, from civil and political rights to economic, social and cultural rights.
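To make concrete what an automated decision of this kind can look like, the snippet below is a minimal, purely illustrative sketch of a toy credit-scoring rule. The feature names, weights and approval threshold are invented for the example and do not describe any real system or any system studied by the project.

```python
# Illustrative only: a toy automated credit-scoring decision.
# Feature names, weights and the approval threshold are hypothetical.

FEATURE_WEIGHTS = {
    "income": 0.4,
    "years_employed": 0.3,
    "previous_defaults": -0.5,
}
APPROVAL_THRESHOLD = 0.6


def score_applicant(applicant: dict) -> float:
    """Return a weighted score over the applicant's normalised features."""
    return sum(FEATURE_WEIGHTS[name] * applicant.get(name, 0.0)
               for name in FEATURE_WEIGHTS)


def decide(applicant: dict) -> str:
    """Approve or refuse automatically, with no human in the loop."""
    return "approved" if score_applicant(applicant) >= APPROVAL_THRESHOLD else "refused"


if __name__ == "__main__":
    applicant = {"income": 0.8, "years_employed": 0.5, "previous_defaults": 1.0}
    # The applicant never sees the weights or the threshold that decide the outcome.
    print(decide(applicant))
```

Even in this simplified form, the opacity of the weights and threshold to the person affected is what gives rise to the accountability questions discussed below.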

Algorithmic accountability

The pace of technological innovation outstrips the formulation, application and enforcement of governance and regulation of algorithms in decision-making. Some commentators have suggested that governance and regulatory mechanisms stifle innovation, or that it is too late or too difficult to manage this area of technological innovation. This argument exceptionalises new technologies such as algorithmic and artificial intelligence systems, and it is not a valid reason for failing to govern their development and use. Human rights are universally applicable. New technologies can affect human rights like any other sector, and the potential benefits of such systems do not remove the need to ensure human rights are respected and protected. International human rights law applies to the use of new technologies just as it applies in any other area of life.

The human rights-based approach to algorithmic accountability

States and businesses engaged in any part of the algorithmic life cycle, from the design, development and deployment to the supply of algorithmic systems, should embed a human rights-based approach.

International human rights law provides a means to define and assess harm, and offers a deeper and fuller means of analysing the overall effect of the use of algorithms. The specific obligations on States, and expectations of businesses, to prevent human rights violations and protect human rights include prescription of the mechanisms and processes required for implementation. The international human rights law framework can be mapped onto the algorithmic life cycle and offers a holistic approach to accountability.

Existing mechanisms for algorithmic accountability, such as data protection, impact assessments and compliance checks, may have some relevance for protecting human rights and preventing violations. The international human rights law framework complements these mechanisms and contributes to a more comprehensive approach to algorithmic accountability, incorporating robust safeguards and assessing the full scope of impact.
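As one hedged illustration only, and not a methodology prescribed by the framework, an impact assessment of this kind could be recorded as a simple checklist over stages of the algorithmic life cycle. The stage names and questions below are assumptions made for the sketch.

```python
# Illustrative sketch: recording human rights checks across the algorithmic
# life cycle. Stage names and checklist questions are assumptions, not a
# standard or a prescribed methodology.
from dataclasses import dataclass, field


@dataclass
class LifeCycleStage:
    name: str
    checks: dict[str, bool] = field(default_factory=dict)  # question -> passed?

    def outstanding(self) -> list[str]:
        """Return the checks that have not yet been satisfied."""
        return [question for question, passed in self.checks.items() if not passed]


ASSESSMENT = [
    LifeCycleStage("design", {
        "Affected rights identified?": True,
        "Affected stakeholders consulted?": False,
    }),
    LifeCycleStage("deployment", {
        "Human review of contested decisions?": False,
        "Remedy mechanism available?": False,
    }),
]

for stage in ASSESSMENT:
    print(stage.name, "outstanding:", stage.outstanding())
```

The point of the sketch is simply that safeguards can be made explicit and auditable at each stage, rather than assessed only after harm has occurred.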

Our research

The next steps for our research are to operationalise this framework in practical guidance for States and businesses.

