Locating Human Rights in the Critical Analysis of Algorithmic Decision-Making: A Brief Commentary on the Science and Technology Committee Report on Algorithms in Decision-Making

May 30, 2018 Vivian Ng

The House of Commons Science and Technology Committee recently released the Fourth Report of Session 2017-2019 on ‘Algorithms in decision-making’. The release of the Committee’s findings and recommendations for the government is particularly timely, following recent revelations regarding Cambridge Analytica and Facebook and the increasing recognition that these issues extend far wider. This post unpacks how human rights have featured in the Committee’s analysis, and argues that human rights should underpin and centre both the understanding of how algorithms affect individuals and groups in society and the responses developed to address the attendant risks and challenges.

Scope of the report

The Committee has identified critical challenges that have to be confronted given the increasing use of algorithms in various sectors, and presented a set of thoughtful recommendations important for the government’s agenda. The report includes analysis of how bias and other risks can arise when data-driven algorithms are used to support decision-making, the need for monitoring and oversight, mechanisms for accountability and transparency, and the regulatory landscape and the important role of the new Centre for Data Ethics & Innovation.

The impact of data-driven algorithms

The Committee elaborated on the potential impact that data sharing has on the healthcare, criminal justice and social media sectors, and on its concerns regarding bias, discrimination and other risks that have unacceptable impacts on individuals. The Committee rightly observes that any significant adverse impacts on individuals should be weighed against the potential benefits, and identified various important opportunities and risks. These clearly implicate a range of human rights, and expressly articulating them as such is critical because it gives a fuller understanding of how data-driven algorithms affect individuals and society.

The breadth and depth of the universal international standards in the human rights framework provide a valuable and fundamental baseline for benchmarking the effects of data-driven algorithms. The Committee highlighted the benefits of sharing health and care data for medical diagnosis, monitoring, and patient care. This clearly connects to the right to the enjoyment of the highest attainable standard of physical and mental health. The rigorous content of specific rights in the human rights framework offers a more robust and meaningful evaluation of these effects. The right to health has been established as indispensable for the exercise of other human rights, and interpretation of the right by the United Nations Committee on Economic, Social and Cultural Rights provides standards on how states should fulfil their obligations to ensure its realisation. The essential components of availability, accessibility, acceptability, and quality should guide how data-driven algorithms are used in healthcare, so that steps taken to digitise the NHS and to use data and algorithms comply with these standards. The recommended national framework of conditions to govern how commercial value is extracted from NHS data should draw on the conditions already prescribed in human rights law, which go beyond profit and risk assessments.

The Committee focuses heavily on the issue of bias and discrimination in its report. We welcome the Committee’s recognition, drawn from the written evidence the Human Rights, Big Data and Technology Project contributed to the inquiry, that discrimination can “enter the decision-making process from a variety of paths”, can present itself at any stage of the algorithmic life cycle, and can be amplified by the subsequent deployment of such algorithms. To take the Committee’s analysis further, the full set of human rights allows a fuller appreciation of the range of effects that the use of algorithms in decision-making can produce.

The Committee noted concerns about potential racial bias in the use of facial image recognition for criminal justice, and about the validity of algorithmic data analysis as evidence. Relying on the human rights framework allows an appreciation of how discrimination arising in the criminal justice context affects not only one’s right to equality and non-discrimination, but also implicates the right to liberty and even the right to a fair trial. All human rights are universal, indivisible, interdependent and interrelated. Focusing narrowly on discrimination misses the bigger picture of how individuals are adversely affected by the use of algorithms in decision-making when various rights are engaged simultaneously.

Locating the effects of algorithms in decision-making within the human rights framework is thus critical, as it gives greater depth in measuring the potential benefits and harms, as well as greater breadth in understanding the diverse range of impacts.

An effective response to the impact of algorithms in decision-making

The Committee has asserted that where the use of algorithms affects rights and liberties, explanation and transparency are crucial, and has emphasised the importance of accountability. Yet, throughout its approach to developing responses to the effects of algorithms in decision-making, the Committee relies on principles and codes limited to ethics, and on audits and mechanisms limited to data protection. Human rights do not only offer a framework for understanding the potential impact of algorithms in decision-making; they also contain an established framework of safeguards for preventing violations, monitoring and oversight mechanisms, and remedies for violations. Highlighting that where rights are affected there must be a response is an important first step, but the response can and should be located within the human rights framework itself.

The Committee has stuck closely to its mandate and provided a set of recommendations for the government’s consideration and action regarding the use of algorithms in decision-making. Here, the human rights framework offers a fuller account of the scope of states’ responsibilities. As duty-bearers of human rights, states are not only required to respect human rights; their obligations also include protecting against human rights abuses and fulfilling human rights by taking positive action to facilitate their enjoyment. With that view, the Committee has established an important first layer of actions for the state’s agenda. This should be accompanied by consideration of how the state’s own uses of algorithms in decision-making can be independently reviewed, and of how the state should develop the regulatory framework to govern companies in this context.


Disclaimer: The views expressed herein are those of the author(s) alone.