Identifying and assessing the risks and opportunities for human rights posed by artificial intelligence
The core objective of HRBDT is to identify and assess the risks and opportunities for human rights posed by artificial intelligence and to propose solutions to ensure that new and emerging technologies are designed, developed, deployed and regulated in a way that is enabling of, rather than threatening to, human rights.
What is artificial intelligence?
Artificial Intelligence (AI) refers to computer systems designed to perform tasks that would normally require human intelligence, such as solving problems and learning from experience. The field is developing rapidly, and AI systems are expected eventually to mimic and perform many of the same tasks as a human would.
Artificial intelligence and human rights
The use of these technologies can affect a range of sectors and areas of life, such as education, work, social care, health and law enforcement. AI could offer significant opportunities for the advancement of human rights across many of these areas, for example by facilitating more personalised education and by helping people in later life to live a dignified life at home. But there are also serious concerns to be addressed: AI has the potential to undermine or violate human rights protections.
The use of big data and AI can also threaten the right to equality, the prohibition of discrimination and the right to privacy. These rights can act as gatekeepers for the enjoyment of other fundamental rights and personal and political freedom.
Our research questions
How are human rights affected by the use of big data and AI?
How can the human rights framework contribute to the governance and regulation of AI?
How can individuals and groups access remedies where their rights are affected?
How are we doing this?
- Analysis of the adequacy of international human rights law and its institutions to deal with the challenges posed by big data and AI.
- Analysis of how to incorporate a human rights-based approach into AI governance and regulation within states, within corporations and at the international level.