Accountability and remedies

Ensuring access to justice for individuals and groups

Where individuals and groups have a credible claim that their human rights have been adversely affected by the use of big data and emerging technologies, it is critical that they can bring a complaint against those they believe are responsible. Our research examines the availability, adequacy and effectiveness of existing complaints processes within state institutions, as well as within independent regulatory bodies such as ombuds, information commissioners, and national human rights and equality bodies. This complements our research and policy work on the complaints processes within technology companies and within businesses that use technology as part of their decision-making processes or service provision.

Known and unknown harm

One of the central challenges individuals and groups face in accessing justice is that they may not know their human rights have been put at risk at all, unless the state or business concerned tells them. This could arise, for example, where data is improperly shared or sold to another business. In other circumstances, individuals may not know that technology was used to make or inform a decision about them, and so have no opportunity to challenge that use. For example, a person may have a loan application rejected or may not be called to interview, without knowing whether an algorithm was used to assess their creditworthiness or to shortlist candidates. Because algorithms often draw on discriminatory data and can produce discriminatory outcomes, the person may have grounds for challenging the decision – but only if they know an algorithm was used. To address these problems, our research examines whether states and businesses should be required to be transparent about where they use such technology and to notify people where they identify potential harm.

Our research

Our research focuses on the availability, adequacy and effectiveness of grievance mechanisms in the information and communication technology (ICT) sector, including social media companies and companies that use automation and AI within their products or services. The objective is to understand how these grievance mechanisms are used and to consider what other avenues exist for accessing remedies for the human rights harm caused by companies' use of big data and AI. We aim to better understand the ways in which new technologies are being used as part of internal and external grievance and complaint mechanisms. Once we understand the remedial measures currently available, we can begin to investigate their adequacy and assess whether they are effective given the nature of the harm caused.

Alongside this research, we look at the complaints processes offered by regulatory bodies such as ombuds, national human rights institutions and equality bodies, and information commissioners, and ask whether these processes can be adapted to address the human rights implications of big data and emerging technologies or whether new processes are needed.

Our partners

Queen Mary University of London
University of Cambridge
Eye Witness Media
Universal Rights Group
World Health Organization
Geneva Academy