How Does Content Moderation Affect Human Rights? Commentary on the Case of Infowars

August 17, 2018 Vivian Ng

On 6 August 2018, Apple, Facebook and Spotify removed content from Alex Jones’s Infowars pages and accounts on their platforms, which were seen to be spreading conspiracy theories and hate speech. YouTube also terminated Alex Jones’s channel. These actions followed the takedown of four of Infowars’ YouTube videos the previous month. While Twitter did not immediately take any action, it suspended Alex Jones’s account for seven days on 15 August, citing violations of its rules on abusive behaviour and incitement to violence. These companies removed content or terminated accounts on the grounds that they violated their terms of service.

Much of the reporting has been critical of companies that are perceived not to have done enough, or acted quickly enough, to remove content from Alex Jones and Infowars. The attention seems to centre on whether such platforms have acted appropriately and adequately to combat misinformation and disinformation spread by entities like Infowars. For example, while Twitter has since taken action against Alex Jones’s account, it had been criticised for not also suspending the separate Infowars Twitter account. Google and Apple have likewise been criticised for not removing the Infowars app from their app stores. These issues are important, but commentary has been lacking on the broader significance of the actions platforms are taking regarding the content and accounts of Infowars and Alex Jones. More fundamentally, what role do companies have in content moderation, and how should that role be carried out? This post looks at the role that social media platforms play in the realisation of the right to freedom of expression in particular, and considers if and how content moderation by the private companies that own and control such platforms can comply with human rights standards and norms.

The Connection between Social Media Platforms and Human Rights

Social media platforms have become the space in which individuals and groups commonly access information, interact, and organise. In effect, these private actors are increasingly the arbiters of speech, regulating what is acceptable content in the ‘modern public square’. This directly implicates the right to freedom of opinion and expression, amongst other rights such as the right to freedom of association. Codified in the International Covenant on Civil and Political Rights, the right to freedom of opinion and expression includes the right to seek, receive and impart information and ideas of all kinds. This is critical for the development of the individual as well as of a democratic society. The right to freedom of expression may be restricted, but any restrictions are subject to narrow limitations: they must be provided by law and be necessary for respect of the rights or reputations of others, or for the protection of national security, public order, or public health or morals.

International human rights law traditionally governed State conduct, but human rights standards and norms have been recognised to also apply to business enterprises through various routes. The United Nations Guiding Principles on Business and Human Rights, or Ruggie Principles, provide guidance on the obligations of States and businesses with regard to business and human rights. The Ruggie Principles are composed of three pillars: (1) the State duty to protect human rights from third-party harm, (2) the corporate responsibility to respect human rights, and (3) access to effective remedy when human rights abuses have occurred.

According to the Ruggie Principles, businesses should avoid causing or contributing to adverse human rights impacts (principle 13), conduct due diligence (principles 17-19), engage in prevention and mitigation strategies (principle 23) and conduct ongoing reviews of their human rights policies and practices (principles 20-21). The corporate responsibility to respect human rights is primarily a negative obligation requiring businesses to refrain from infringing on human rights through their activities.

Content Moderation by Private Platforms: Standards and Compliance with Human Rights

The increasing dependency on these platforms for access to information and public discourse has resulted in increasing pressure on the companies that own and control them to do more. This is against the backdrop of greater attention and efforts against hate speech and incitement to violence online. Should companies moderate content, and how should they be doing so? These considerations are relevant not only to content that constitutes misinformation, disinformation, hate speech, or incitement to violence, for example, but to all kinds of content that any user engages with on any platform.

Social media platforms such as Facebook, YouTube, and Twitter have each developed their own terms of service, which are their particular interpretations of what constitutes unacceptable content on their platforms. The owners and controllers of such platforms argue that enforcement of their content moderation policies is necessary to keep online communities a safe space for their users. A key issue is that such content moderation is based on varying standards that are not necessarily compliant with the existing human rights framework. Implementation of such internal guidelines for the moderation of content, for example in the removal of content, can thus lead to censorship and arbitrary interference with the right to freedom of expression.

David Kaye, the UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, has emphasised in his most recent report the increasing role that businesses are playing in policing the online space. He has also cautioned that company policies on hate, harassment and abuse, similar to counter-terrorism legislation, are often vague and result in inconsistent policy enforcement.

The human rights framework should be the authoritative standard for companies’ policies and practices regarding content moderation, ensuring that they respect the right to freedom of expression amongst other rights. Furthermore, a radically different approach to transparency at all stages of such companies’ operations is needed, together with independent oversight, to ensure that content moderation decisions are guided by the conditions of legality, necessity, proportionality and legitimacy. According to David Kaye, companies must in this way also open themselves up to public accountability. This does not displace the State’s responsibility to protect against human rights abuse by third parties, including private companies that own or control platforms and moderate content. Regulation must focus on ensuring greater transparency, effective oversight, and remediation.

It is easy to jump on the bandwagon calling for stricter regulation of content propagated by entities such as Infowars. It is important, however, to recognise that advocating for private companies to remove content and suspend or terminate the accounts of certain individuals and entities is not an isolated, one-off issue limited to controversial views. The content moderation policies and practices of technology giants on their platforms affect the exercise of the right to freedom of expression and opinion online, and human rights should be the bedrock of such governance.


Disclaimer: The views expressed herein are those of the author(s) alone.