Investigating how facial recognition technology is used and its human rights implications
Facial recognition technology is typically deployed to secure rights, in particular by protecting against threats to public order and the rule of law. However, it may also threaten rights such as the rights to privacy, expression, association and assembly.
What is facial recognition technology?
Facial recognition technology is an artificial intelligence tool used to identify individuals within digital images or video, often in real time. It has been used by police to identify possible suspects or to catch jaywalkers. It is also used to provide travel advice in airports, and to unlock smartphones or computers.
However, its use is becoming increasingly pervasive. Just think about how your smartphone identifies your friends in your photo library.
Facial recognition and human rights
There are significant human rights concerns associated with the technology, and this is increasingly being recognized by the public. Indeed, in 2018 Microsoft’s chief legal officer called for regulation in a blog post saying:
“Imagine a government tracking everywhere you walked over the past month without your permission or knowledge. Imagine a database of everyone who attended a political rally that constitutes the very essence of free speech.” – Facial recognition technology: The need for public regulation and corporate responsibility
Because facial recognition technology is so new, and because it can be deployed by so many actors across so many different contexts, urgent research into its human rights implications is required. A key danger is that facial recognition technology can be used to create a record of our movements, and this record can then be analysed to build a detailed profile of our lives. Such a profile could reveal who we meet with, and how often; where we work; where we socialise; whether we are members of any groups or organisations; and whether we go to protests. It can expose not only sensitive personal information (such as sexual orientation or health status) but also our political opinions. The potential impact on the balance of power between the State and its citizens is significant.
Right to a private life
Article 17 of the International Covenant on Civil and Political Rights codifies the right to privacy. Live facial recognition technology interferes with this right, as it involves the processing of images for the purpose of determining an individual’s identity. The use of live facial recognition technology by public authorities will typically give rise to two separate interferences with the right to privacy: the first relates to the initial processing of the image (i.e. image analysis using facial recognition technology), and the second to any subsequent retention or analysis of the data.
Prohibition of discrimination
The prohibition of discrimination is codified in Article 2 of the International Covenant on Civil and Political Rights and Article 2 of the International Covenant on Economic, Social and Cultural Rights. It protects individuals and their rights from discrimination on any ground, such as race, colour, sex, language, religion, political or other opinion, national or social origin, property, birth or other status.
A number of concerns exist regarding bias in facial recognition technology, whether arising from difficulties in conducting biometric matching or from inadequate training data. The risk is that this bias may translate into prohibited forms of discrimination, depending on how facial recognition technology is used.
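To make the biometric-matching step concrete, here is a minimal sketch of one-to-many identification: a probe face, already reduced to a fixed-length embedding vector by a separate face-encoding model (not shown), is compared against a gallery of known embeddings, and a match is declared only if the best similarity clears a threshold. The names, vectors and the 0.8 threshold are illustrative assumptions, not taken from any real deployed system.

```python
from math import sqrt
from typing import Dict, List, Optional

def cosine_similarity(a: List[float], b: List[float]) -> float:
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def identify(probe: List[float], gallery: Dict[str, List[float]],
             threshold: float = 0.8) -> Optional[str]:
    """Return the gallery identity most similar to the probe embedding,
    or None if no similarity clears the threshold."""
    best_name, best_score = None, -1.0
    for name, embedding in gallery.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

# Illustrative three-dimensional embeddings (real systems use hundreds
# of dimensions produced by a neural network).
gallery = {"person_a": [1.0, 0.0, 0.0], "person_b": [0.0, 1.0, 0.0]}
print(identify([0.9, 0.1, 0.0], gallery))  # close to person_a -> "person_a"
print(identify([0.0, 0.0, 1.0], gallery))  # similar to nobody -> None
```

The threshold is where the bias concern bites: set too low, the system produces false matches; set too high, it fails to recognise people at all. If the embedding model was trained on unrepresentative data, these error rates can differ across demographic groups even at a fixed threshold.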
The use of facial recognition technology may also bring into play other rights, such as the rights to freedom of expression, association, and assembly, or the right to liberty and security. Indeed, the potential uses of facial recognition technology are so broad that a number of rights may be affected. This is why research is necessary.
Members of the HRBDT project are currently engaged in extensive research into how facial recognition technology is used, and its human rights implications. For instance, Prof. Pete Fussey and Dr. Daragh Murray recently conducted an independent review of the London Metropolitan Police’s trial of facial recognition technology.
Related news
- Essex Expertise Informs Facial Recognition Decision: the expertise and leading-edge research of three Essex academics has informed a landmark judgment on police use of facial recognition.
- When an Algorithm Gets It Wrong: Project Co-Director Prof Pete Fussey was interviewed for the Police and Facial Recognition podcast by MIT Technology Review.
- Right On – The Wednesday Web Chat: the HRBDT project is supporting a new digital initiative that aims to keep the human rights dialogue going.