About us

Housed at Essex University’s Human Rights Centre with partners worldwide, the Human Rights, Big Data and Technology Project considers the challenges and opportunities presented by big data and associated technology from a human rights perspective.

What is the project about?

The digital age has brought about a global shift in how we communicate, interact and organise our world. Everyday life generates colossal quantities of digital data. This data can be scrutinised by complex forms of analysis using algorithms, artificial intelligence and other digital tools, yielding highly personalised insights about ourselves, our habits, our desires and our role in society. Our project explores the challenges and opportunities that big data and artificial intelligence bring to human rights. We are researching whether fundamental human rights concepts and approaches need to be adapted in an era of technological advancement and big data. We plan to develop good-practice guidelines, regulatory responses and solutions for the human rights sector to improve both the enjoyment and the protection of human rights in the digital age. Our key engagements fall mainly, but not exclusively, into three categories: businesses, international human rights and humanitarian institutions, and non-governmental human rights organisations.

The main questions we will answer are:

  • What human rights concerns and opportunities are generated by digital technologies, including those using ‘big data’, artificial intelligence and algorithmic decision-making?
  • Can the international human rights framework and its institutions respond to the challenges and opportunities presented?
  • How do we effectively regulate the collection, storage, use, amalgamation, re-purposing and sharing of data by States and non-State actors?
  • What remedies are needed and how can these be effectively developed and implemented?

How do we plan to do this?

We have an international and multidisciplinary team of professionals specialising in computer science, criminology, economics, law, philosophy, political science and sociology. This allows us to research the main areas that shape the relationship between human rights, big data and artificial intelligence.

We are focusing on four research themes:

Rights implications, regulations and remedies

In this theme we identify and analyse the positive and negative human rights implications of big data, AI and smart technologies. We then look at the existing and developing legal responses to these human rights implications to detect legal and policy gaps in the protection of human rights. From these results, we determine whether and to what extent reform is necessary.

We focus on algorithmic accountability and the risks of discrimination in algorithmic decision-making, looking at transparency, monitoring and accountability at every stage of the development process.

Health and Human Rights

Here, we look first at public-private partnerships in healthcare and at how big data and statistical analysis can be used to measure the progression of health rights and improve their practical delivery. Second, we consider how big data, AI and smart technologies can help enhance the accountability of duty-bearers (those with a responsibility to respect, promote and realise human rights and to abstain from human rights violations) in relation to health rights.

Surveillance and Human Rights

Digital innovation has driven the development of ever more potent surveillance tools. It is now possible to digitally identify faces in a crowd, trace someone’s movements through a city and mine ever-increasing quantities of personal data. Digital tools are also being used in attempts to predict where an offence may occur and who may be responsible. Once understood primarily through the lens of privacy, these new surveillance capabilities now bring a much wider range of rights-based considerations into play.

We analyse law enforcement and national security uses of such technologies in the UK, US, India, Brazil and Germany. In doing so, we not only assess the extent to which existing human rights protections require rethinking but also, through our work on regulation and oversight, advance meaningful ways in which these principles can be put into practice.

Advancing Human Rights and Humanitarian Responses

Humanitarian crises worldwide are at the centre of this theme. We analyse how international organisations use big data to provide both protection and solutions, in the short and long term, for those affected by conflict or displacement.

We investigate techniques for identifying, modelling and using contextual information to detect potential human rights violations that surface on social media. We then look closely at techniques to advance humanitarian responses, using text- and image-based analysis to monitor human rights concerns and to identify and distil relevant information from existing social media content.

What is big data and why is it threatening human rights?

Big data is an evolving term for the vast quantities of data that can be collected, analysed and monetised. It is generated, for instance, through search engines, internet browsing history, social media, voice messages, ECG scans and barcode scans. Supercomputers and algorithms allow us to make sense of these increasingly large pools of information in real time.

Big data threatens human rights in many ways. For example, decisions that were once based on experience and history, such as employment and promotions, are increasingly made through machine analysis of massive amounts of data. This removes the personal element, assessing people on a mass, statistical scale rather than on their individual qualities. Privacy is also at risk: with such vast data sets available, information that people would rather keep private is being shared on an unprecedented scale.

However, big data also provides opportunities to enhance the protection of human rights. Combined with analytics, it can help, for example, to identify otherwise invisible forms of vulnerability and discrimination, and it can support more efficient and effective deployment of resources, contributing in turn to the better realisation of human rights.

What is artificial intelligence?

Artificial intelligence is used to process big data. It is an area of computer science concerned with creating intelligent machines that are programmed to work and act like humans, for example by solving problems and improving themselves. The field is developing every day: we have seen self-driving cars and screenplays written by AI, although both still leave much room for improvement.

There are two broad approaches to AI, and current systems work at the ‘weak’ level. Weak AI behaves like a human within narrow limits: it works from the information it has processed, but it offers no insight into how the mind works and shows no genuine creativity. Strong AI would mean building systems that think and reason in ways that do give us insight into how the mind works, although we are currently far from that level.

Who are the team?

The project is housed at Essex University’s Human Rights Centre, with multidisciplinary partners worldwide specialising in computer science, criminology, economics, law, philosophy, political science and sociology. Lorna McGregor is Director of the Project, and Ahmed Shaheed and Pete Fussey are Co-directors. To read more about our team, click here.

How long will the project run for?

Beginning in October 2015, the project will run until September 2020. It is funded by a £5 million grant from the Economic and Social Research Council and £1 million from the University of Essex.

Our Partners

Queen Mary University of London
University of Cambridge
Eye Witness Media
Universal Rights Group
World Health Organisation
Geneva Academy