RightsCon 2018 – Mobilising the Might of Rights: A Human Rights Based Approach to AI

August 3, 2018 | Catherine Kent

Artificial intelligence’s (AI) impact on society demands scrutiny of the ways in which it is designed and deployed. Debates have ensued on how to ensure that AI benefits all in society and contributes to, rather than threatens, human rights. A human rights based approach to AI offers a holistic, universal and enforceable solution. On 17 May 2018, the Human Rights, Big Data and Technology Project hosted a session at RightsCon on the might of rights in the design and deployment of AI.

Speakers included Jean-Yves Art (Senior Director for Strategic Partnerships, Microsoft), Tara Denham (Director of the Democracy Unit, Global Affairs Canada), Ansgar Koene (Senior Research Fellow, University of Nottingham and Working Group Chair of IEEE Standard on Algorithm Bias Considerations), Vidushi Marda (Policy Advisor, Article 19) and Lorna McGregor (Principal Investigator and Co-Director, ESRC Human Rights, Big Data and Technology Project).

Speaker Interventions

Lorna McGregor opened the session with an overview of the opportunities and challenges that big data, smart technology and AI present for human rights, and of the Project’s work in this regard. Lorna noted that momentum has built over the last year to illustrate the potential harms to human rights posed by AI, with big data serving as both the fuel and the product of AI. Frequently cited examples include the use of predictive analytics in risk-assessment models to support decisions on who is granted bail, alongside examples ranging from mortgage applications to care assistants to facial recognition technologies. Equally, the potential opportunities for human rights have been highlighted by the use of big data and AI to make responses to humanitarian crises and disasters more targeted and effective, and to document human rights abuses.

Lorna noted that the existing international human rights law framework should be part of the response to AI, and that it is capable of adapting to the challenges and new environment AI poses. She highlighted the growing number of reports on, and momentum behind, ethical approaches to AI, many of which incorporate foundational human rights principles (such as dignity) as well as specific human rights (such as privacy or non-discrimination). She outlined the aims of the session: to address the relationship of human rights to existing ethical approaches, to discuss the added value of an explicit human rights-based approach, and to explore strategies for ensuring that the human rights framework gains traction in this space.

Ansgar Koene outlined his involvement in the development of IEEE standards on the ethical considerations of autonomous and intelligent systems, which aim to advance technology for humanity. These industry standards will provide clear guidelines for developing algorithmic systems in contexts with growing human rights implications, such as resource allocation and bail determinations. Within the IEEE standard on algorithmic bias (P7003), the Working Group is considering where bias originates and how to reconcile international differences on core values. The standard aims to reflect international consensus, and human rights form a central foundation for it as one of the few clear, internationally agreed legal bases. Ansgar identified discrimination and agency as the key human rights issues in the context of algorithmic systems. On the latter, data collection and amalgamation can have significant impacts on individuals that are not always immediately visible, which can amount to a removal of agency.

Ansgar also drew a distinction between the individual and the group, noting that human rights attach to the individual, yet algorithmic systems, and especially machine learning systems, are built and evaluated against population-level statistics. A system deemed 95% correct against its training data is considered to be running well. Yet in such a case, 5% of individuals would be subject to an incorrect outcome, which requires further reflection given the universality of human rights.
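To make the arithmetic concrete, the sketch below (a hypothetical illustration with invented numbers, not figures from the session) shows how a system reported as 95% accurate still leaves hundreds of individuals with incorrect outcomes, and how those errors need not fall evenly across groups:

```python
# Hypothetical sketch: aggregate accuracy can mask individual-level harm.
# A "95% correct" system applied to 10,000 people still produces 500
# incorrect outcomes, and those errors may cluster in one group.
population = 10_000
accuracy = 0.95

wrong = round(population * (1 - accuracy))
print(f"Overall accuracy: {accuracy:.0%}")
print(f"Individuals receiving an incorrect outcome: {wrong}")

# If errors are not uniformly distributed, a minority group can bear a
# disproportionate share of them (illustrative split, not real data).
group_sizes = {"majority": 9_000, "minority": 1_000}
group_errors = {"majority": 200, "minority": 300}  # sums to 500
for group, size in group_sizes.items():
    rate = group_errors[group] / size
    print(f"{group}: error rate {rate:.1%}")
```

Population-level accuracy, in other words, says nothing about who bears the errors, which is precisely the gap that a universal, individual-level rights framework is meant to address.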

Vidushi Marda set out the benefits of a human rights-based approach to AI, highlighting three main points. First, human rights provide a shared lexicon. Concepts such as discrimination, fairness and accountability can be subjective and lack a shared understanding, resulting in multiple definitions and approaches. The human rights framework provides a common understanding of rights and a shared language on the basis of which people can have discussions. Second, the human rights framework is grounded in international law, which provides safeguards and an enforcement mechanism; a common frustration with ethical approaches is that they lack such a mechanism. Third, the human rights framework is the most global set of principles that society currently has, enabling a more inclusive dialogue that does not speak only to certain parts of the world.

Finally, Vidushi noted that human rights are useful as guiding principles, as in the IEEE’s Ethically Aligned Design, because they provide a minimum standard for considering the deployment of AI. Within the technical community, fairness, accountability and transparency are growing fields of research, and many of the issues in this space rest on human rights principles. The normative content of human rights can give AI developers concrete guidance in thinking through the content of fairness, accountability and transparency, each of which is otherwise open to numerous interpretations.

Tara Denham outlined the approach of the Democracy Unit at Global Affairs Canada, which is considering the intersection of foreign policy, technology and human rights. Tara noted that human rights are a beneficial lens to apply to the conversation on AI, especially regarding international negotiations, because the international human rights framework and its definitions already exist. By contrast, ethics are difficult to articulate at a global level and to use as a starting point for negotiations due to cultural differences between countries.

Tara raised the concern held by some that AI requires a high level of regulation, highlighting the need to consider existing mechanisms that can be leveraged before turning to new regulation. Relatedly, she raised the fear held by some of negatively impacting innovation, stressing the crucial role of societal trust in AI and its link to innovation. Trust is required, in particular, in how technologies affect people and in how governments apply AI to decision-making on resource allocation. She asked how the trust relationship can be strengthened in ways that benefit innovation. Finally, Tara questioned how pre-existing societal biases become embedded in the data and technology being developed, how to positively influence this, and how to better support the cross-pollination of information across different communities, from policymakers to developers.

Jean-Yves Art pointed out that while AI is delivering significant benefits to society and individuals across sectors from healthcare and education to finance, agriculture and environmental protection, it also raises challenges from both a technical and a human rights perspective, and these challenges need to be addressed. In response, Microsoft is doing two things. First, it recently commissioned a human rights impact assessment of AI, which identified risks and their causes and offered recommendations on how to address them. Second, to implement those recommendations, Microsoft established an AI and Ethics in Engineering and Research (Aether) Committee. The Committee comprises senior leaders from across Microsoft, including engineering, business, legal and human rights, and works on both policy and product development: it issues internal policy guidance on integrating human rights considerations into AI development, and it reviews products to ensure they comply with ethical principles and human rights provisions.

Jean-Yves noted that the key concepts concerning AI and human rights, such as equality, non-discrimination, inclusivity, security and privacy, are relatively clear. The harder question is how to implement them within product development. Taking the risk of discrimination as an example, he flagged the need to scrutinise training data to identify possible sources of discrimination, to ensure a diverse team of engineers carries out that scrutiny, and to involve subject-matter experts on AI applications who can identify the factors that should be taken into account during development.
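As a rough illustration of what “scrutinising the training data” can involve in practice, the sketch below checks whether positive labels are distributed very unevenly across demographic groups. The records, group names and disparity threshold are all hypothetical assumptions for illustration, not a description of Microsoft’s actual process:

```python
# Hypothetical sketch of one basic training-data check: comparing
# outcome rates across demographic groups to flag possible sources of
# discrimination before a model is trained. All values are invented.
from collections import defaultdict

training_records = [
    {"group": "A", "label": 1}, {"group": "A", "label": 1},
    {"group": "A", "label": 0}, {"group": "A", "label": 1},
    {"group": "B", "label": 0}, {"group": "B", "label": 0},
    {"group": "B", "label": 1}, {"group": "B", "label": 0},
]

totals, positives = defaultdict(int), defaultdict(int)
for record in training_records:
    totals[record["group"]] += 1
    positives[record["group"]] += record["label"]

rates = {g: positives[g] / totals[g] for g in totals}
print("Positive-label rate by group:", rates)

# A large gap in base rates is a prompt for human review: it may reflect
# historical bias in the data rather than a legitimate difference.
DISPARITY_THRESHOLD = 0.2  # illustrative cut-off, not a standard value
gap = max(rates.values()) - min(rates.values())
if gap > DISPARITY_THRESHOLD:
    print(f"Flag for review: label-rate gap of {gap:.0%} across groups")
```

A check like this does not prove discrimination; it simply surfaces disparities for the kind of human review by diverse teams and subject-matter experts that Jean-Yves described.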

Open Floor

Following speakers’ interventions, the discussion opened into a roundtable format, inviting comments and questions from participants. Key issues discussed included:

  • Geographical Provenance of Training Data

The issue of under-representative training data was raised, noting in particular the lack of data from the global South. In response, Vidushi Marda observed that the problem is not only under-representative data; in some cases the data does not exist at all. She cautioned against treating increased surveillance as the solution to missing data, and asked how to move from biased datasets to datasets that are fair and just. Ansgar Koene stressed that this is not solely a question of data: the context within which AI systems are developed and deployed also matters, and if a system’s use extends beyond that context, developers need to question whether it remains valid.

  • Development of Strategic Government Policy

A question on developing a strategic government policy on AI was raised. Tara Denham noted that this is a fast-moving area, with numerous governments undertaking similar initiatives. She outlined several of the key questions in developing policy in this area, including how to embed AI policy in a multilateral system and whether forums in addition to the United Nations should be considered. She also discussed the process of policy-making ‘out loud’, an approach that Global Affairs Canada has adopted in this area.

  • Human Rights and Innovation

The perceived tension between adopting a human rights-based approach to AI and innovation was raised, insofar as respecting and protecting human rights can be seen by some to hinder product development and distribution. One participant asked whether we can reframe the conversation away from ‘don’t hinder innovation’ to ‘don’t hinder human rights’. A question was also raised about what constraints can be placed on businesses to prioritise societal concerns over profits.

Ansgar Koene suggested that legislation provides a positive challenge to structure innovation around, and that regulation speaks more to guiding innovation than halting it. Jean-Yves Art flagged the need to work through product development step by step, seeking guidance, testing the oversight systems in place, obtaining feedback and then, when appropriate, applying hard rules. He suggested that companies would welcome guidance on how human rights apply in the digital age, noting the benefits of multi-stakeholderism in this regard.

Tara Denham suggested building a narrative framed around trust that appeals to a range of actors. This narrative could be based on internationally agreed frameworks to ensure that technology development includes oversight of human rights protection. Tara stressed the fast pace of response required and questioned the degree of error that society is comfortable with, highlighting policymakers’ unwillingness to make policy that has a negative impact.

Vidushi Marda urged participants to consider how human rights are viewed by governments and constitutions in different parts of the world when asking this question, noting that some governments lean more towards showing economic prowess than upholding human rights. She stressed that different responses are needed where countries prioritise the economic development opportunities of AI over human rights considerations.

Closing Remarks

Jean-Yves Art suggested that including human rights training in the education curriculum of software engineers would help anticipate and address human rights risks. Tara Denham outlined the layers of AI’s societal effects, noting that data collection raises privacy issues, and asked what a neutral body capable of making decisions in this space would look like. Ansgar Koene noted that the more a system is automated, the more ‘value’ is assigned only to what is measured; organisational structures and policies therefore need to be clear, and frontline workers must be able to take the initiative when they have access to information beyond what the AI system measures. Vidushi Marda acknowledged that the human rights framework is not perfect, but stressed that it is universal and can inform how society approaches AI conversations. She asked how to bring fairness, accountability and transparency within the human rights discourse and how to merge language that does not always align.


Disclaimer: The views expressed herein are the author(s) alone.