Investigating how facial recognition technology is used and its human rights implications
Facial recognition technology is typically deployed to secure rights, in particular by protecting against threats to public order and the rule of law. However, it may also threaten rights such as the rights to privacy, expression, association and assembly.
What is facial recognition technology?
Facial recognition technology is an artificial intelligence tool used to identify individuals within digital images – such as video footage or photos – often in real time. It has been used by police to identify possible suspects or to catch jaywalkers. It is also used to provide travel advice in airports, and to unlock smartphones or computers.
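At its core, identification works by converting a face image into a numerical "embedding" and comparing it against a database of enrolled faces. The following is a minimal, illustrative sketch of that matching step only; the embeddings, names and the 0.6 threshold are made up for demonstration (real systems such as dlib derive 128-dimensional vectors from a neural network).

```python
import math

def euclidean_distance(a, b):
    """Distance between two face embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(probe, gallery, threshold=0.6):
    """Return the name of the closest enrolled face, or None if no
    gallery embedding is within the match threshold."""
    best_name, best_dist = None, float("inf")
    for name, embedding in gallery.items():
        d = euclidean_distance(probe, embedding)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= threshold else None

# Hypothetical enrolled embeddings (real ones have far more dimensions).
gallery = {
    "alice": [0.1, 0.9, 0.3],
    "bob":   [0.8, 0.2, 0.5],
}

print(identify([0.12, 0.88, 0.31], gallery))  # close to "alice"
print(identify([0.5, 0.5, 1.0], gallery))     # no confident match -> None
```

The threshold is the key operational choice: set it loosely and the system produces false matches; set it strictly and genuine matches are missed – a trade-off directly relevant to the discrimination concerns discussed below.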
Its use is becoming increasingly pervasive: just think about how your smartphone identifies your friends in your photo library.
Facial recognition and human rights
There are significant human rights concerns associated with the technology, and this is increasingly being recognized by the public. Indeed, in 2018 Microsoft’s chief legal officer called for regulation in a blog post saying:
“Imagine a government tracking everywhere you walked over the past month without your permission or knowledge. Imagine a database of everyone who attended a political rally that constitutes the very essence of free speech.” – Facial recognition technology: The need for public regulation and corporate responsibility
Because facial recognition technology is so new, and because it can be deployed by so many actors, across so many different contexts, urgent research into the human rights implications is required. A key danger is that facial recognition technology can be used to create a record of our movements, and this record can then be analysed to create a detailed profile of our lives. It could reveal who we meet with, and how often; where we work; where we socialise; whether we are members of any groups or organisations; or whether we go to protests. Such a profile can expose not only sensitive personal information (such as sexual orientation or health status) but also our political opinions. The potential impact on the balance of power between the State and its citizens is significant.
Right to a private life
Article 17 of the International Covenant on Civil and Political Rights codifies the right to privacy. Live facial recognition technology interferes with this right, as it involves the processing of images for the purpose of determining an individual’s identity. The use of live facial recognition technology by public authorities will typically give rise to two separate interferences with the right to privacy: the first relating to the initial processing of the image (i.e. image analysis using facial recognition technology), and the second to any subsequent retention or analysis of the data.
Prohibition of discrimination
The prohibition of discrimination is codified in Article 2 of the International Covenant on Civil and Political Rights, and Article 2 of the International Covenant on Economic, Social and Cultural Rights. These provisions protect individuals and their rights from discrimination on any ground, such as race, colour, sex, language, religion, political or other opinion, national or social origin, property, birth or other status.
A number of concerns exist regarding bias associated with facial recognition technology, whether as a result of difficulties in conducting biometric matching or as a result of inadequate training data. The risk is that this may translate into prohibited forms of discrimination depending on how facial recognition technology is used.
The use of facial recognition technology may also bring into play other rights, such as the rights to freedom of expression, association, and assembly, or the right to liberty and security. Indeed, the potential uses of facial recognition technology are so broad that a number of rights may be affected. This is why research is necessary.
Members of the HRBDT project are currently engaged in extensive research into how facial recognition technology is used, and its human rights implications. For instance, Prof. Pete Fussey and Dr. Daragh Murray recently conducted an independent review of the London Metropolitan Police’s trial of facial recognition technology.
Related:
- HRBDT Public Panel Discussion: “Facial Recognition and Human Rights”, The Royal Society, 10 September 2019
- BBC Radio 4 Interview – Facial Recognition: Dr Daragh Murray interviewed on BBC Radio 4’s PM programme
- HRBDT Researchers Launch New Report on London Metropolitan Police’s Trial of Live Facial Recognition Technology