Please join the Human Rights, Big Data and Technology Project for a panel discussion on
Automated Facial Recognition and Human Rights
10 September 2019
A drinks and canapés reception follows the panel discussion
from 7.45 pm until 8.30 pm
The Royal Society, London
This event is free and open to the public. However, as places are limited, registration is essential.
The growing prominence of facial matching technology has made it increasingly controversial in recent years. While early public uses of this technology date back decades, advances in image analysis and algorithmic processing have generated a step change in the capability of facial recognition. At the same time, uses of this technology have continued to diversify, with common applications including identity verification at borders, scanning video footage of crime scenes, and real-time monitoring of crowds to identify individuals listed on ‘watchlists’.
It is this latter application, ‘live facial recognition’, that has attracted particular public attention. Live facial recognition technology has, for example, been deployed at public events by police in London and South Wales, and in a wide variety of contexts by police in the United States. However, the significant surveillance capacity of live facial recognition technology – demonstrated by China’s use of the technology to monitor and repress the Uighur population – has given rise to a number of concerns, particularly as the human rights impacts of its use are unclear. Among others, the Mayor of London, Sadiq Khan, and the UN Special Rapporteur on the Right to Privacy have voiced concerns, the House of Commons Science and Technology Committee has called for a moratorium, and the civil society groups Liberty and Big Brother Watch have called for an outright ban on its use.
This public event brings together leading experts on the use of facial recognition, including those with direct and in-depth experience of its use in the UK and the US. It will discuss operational aspects of the technology and how this may impact police performance, examine the human rights implications, and consider if and how this technology can be regulated.
Hosted by the ESRC Human Rights, Big Data and Technology project at the Royal Society, London.
Clare Garvie is an attorney and senior associate with the Center on Privacy & Technology at Georgetown Law. She was a co-author and the lead researcher on The Perpetual Line-Up: Unregulated Police Face Recognition in America, a report that examines the widespread use of face recognition systems by state and local police and the privacy, civil rights, and civil liberties consequences of this new technology, as well as two follow-up reports. Her current research focuses on the use of face recognition-derived evidence in criminal cases, and she serves as an informational resource to public defenders, advocates, and journalists. She received her J.D. from Georgetown Law and her B.A. from Barnard College in political science, human rights, and psychology. Previously, she worked in human rights and international criminal law with the International Center for Transitional Justice (ICTJ).
Pete Fussey is a Director of the Centre for Research into Information, Surveillance and Privacy (CRISP), and Research Director for the Human Rights, Big Data and Technology Project. His main research interests are surveillance, digital sociology, human rights, intelligence oversight, and control and the city, and he has published widely across these areas. Professor Fussey has also worked with and advised national and regional governments in the UK and Europe on a number of issues, including the regulation of surveillance, public order policing, and the security and social implications of urban mega-events. Most recently he has conducted research with several national surveillance oversight bodies, including the Investigatory Powers Commissioner’s Office, and he also leads the ‘human rights and technology’ strand of the UK’s national Surveillance Camera Strategy. Professor Fussey is the lead author of the independent report into the London Metropolitan Police trials of facial recognition technology.
Dr Daragh Murray is a Senior Lecturer at the Human Rights Centre & School of Law. His research focuses on issues relating to conflict and counter-terrorism, as regulated by the law of armed conflict and international human rights law. He has a particular interest in the use of technology, especially in law enforcement and intelligence contexts, and in understanding how human rights law can be applied to emerging technologies. Daragh is Deputy Lead of the surveillance work stream within the Human Rights, Big Data & Technology project, Director of the Human Rights Centre Clinic’s Digital Verification Unit, and a member of the Open Source Research for Rights Project.
Reema Patel is Head of Engagement at the Ada Lovelace Institute, an organisation seeking to ensure that data and AI work for people and society. She is an experienced policy professional who has led various citizen engagement and participation initiatives on complex and controversial policy areas in the UK, including the Royal Society of Arts (RSA) Citizens’ Economic Council, which successfully worked with and influenced the Bank of England’s public engagement strategy. Reema has consulted for a variety of international organisations, including the Danish Board of Technology Foundation and Nextdoor.com, a San Francisco-based social media start-up. She is a fellow of the RSA, a founding trustee of a community-run library, and a local councillor.
Follow us on Twitter: @HrbdtNews. This event: #FacialRec19