The Bury St Edmunds Amnesty International Group and West Suffolk College co-organised a conference on Cybersecurity and Human Rights on 11 January 2018. The programme brought together an interesting combination of perspectives and presentations, and I was pleased to represent the Human Rights, Big Data and Technology Project at this event.
In chronological order: Sophie Ellis from the Cambridge University Institute of Criminology analysed the behaviour of online offenders from a psychological perspective; I addressed the opportunities and challenges of the digital age from a human rights perspective; Sherif Elsayed-Ali from Amnesty International shared a case study on a campaign of cyberattacks against Amnesty International to illustrate the threats facing human rights defenders and tactics for countering them; and Robin Herne from the University of Suffolk discussed the ethical issues of giving previously unspeakable (or more controversial) topics a greater voice. A series of workshops then addressed the practical dimensions of these issues. This blog post summarises my contributions, key lessons from the workshops, and some reflections.
From a brief show of hands at the start of my presentation, it was clear that despite the age diversity in the group, most participants:
- Use and own mobile and smartphones, and computers, laptops or tablets
- Use the internet on a frequent – daily – basis
- Use at least one – mostly more – of the following: Facebook, Twitter, Snapchat, Instagram, YouTube, and WhatsApp
That was not surprising since most of us consume quite a lot of technology in our daily lives. The conveniences of modern technology are undeniable.
Challenges in the digital age: a human rights perspective
Technology is useful for some seemingly trivial everyday activities. These trivial activities, however, can reveal a great deal: who we talk to, when we talk to them, how frequently we speak, and what we speak about; everything we have ever searched for on the internet and the links we click on; what we purchase and how much we spend; who our friends are on social media, who we engage in activities and check in at locations with, and who we post photos of; how we type; and the status updates we post – even the ones we type but never post. If all of this information were aggregated and connected, it could create very comprehensive individual profiles.
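To make the point concrete, here is a toy sketch of how aggregation works. All of the data and field names below are invented for illustration; a real profiler would operate at vastly greater scale, but the principle – joining separate, individually harmless streams into one revealing record – is the same.

```python
# Toy illustration (invented data): each stream alone seems trivial,
# but combined they yield a surprisingly detailed individual profile.
from collections import Counter

# Hypothetical fragments an aggregator might hold about one person.
messages = [("alice", "2018-01-09 23:40"), ("alice", "2018-01-10 00:05"),
            ("bob", "2018-01-10 12:30")]          # who, and when
searches = ["pharmacy open late", "train times to London",
            "pharmacy open late"]                  # what we look for
purchases = [("vitamins", 12.99), ("train ticket", 34.50)]  # what we buy
checkins = ["Bury St Edmunds", "Cambridge"]        # where we go

def build_profile(messages, searches, purchases, checkins):
    """Join the separate data streams into a single profile."""
    return {
        # Most-contacted person, from message metadata alone.
        "frequent_contact": Counter(n for n, _ in messages).most_common(1),
        # Behavioural inference: is this person active late at night?
        "late_night_activity": any(ts.split()[1] >= "23:00"
                                   for _, ts in messages),
        # Repeated searches can hint at ongoing concerns (e.g. health).
        "repeated_searches": [q for q, c in Counter(searches).items() if c > 1],
        "total_spend": round(sum(p for _, p in purchases), 2),
        "places_visited": checkins,
    }

profile = build_profile(messages, searches, purchases, checkins)
print(profile)
```

Note that the profile contains inferences (late-night activity, repeated concerns) that the person never explicitly shared anywhere – which is precisely the concern raised in the workshop discussion below.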
Such information can then be used to make decisions about us and for us. Financial lenders look at social media behaviour to determine creditworthiness. Health data – including sleeping habits, heart rates, and activity levels – are tracked and monitored not only by doctors bound by doctor–patient confidentiality, but also by wearable fitness trackers that store the data in the cloud. Studies are being carried out to analyse individuals’ mental health from their web browsing habits. Law enforcement have used Facebook and Twitter to track protestors. Shopping habits are tracked over time and used to predict future purchases, to the extent of one retailer predicting a customer’s pregnancy in order to deliver coupons for baby-related products. Entire neighbourhoods are disadvantaged on the basis of information about where their residents live – excluded from quality broadband internet access, charged higher car insurance premiums, systematically discriminated against by predictive policing models that perpetuate historical bias in policing patterns, and denied fair consideration for employment opportunities for being too far from city centres. As such, certain uses of technology can affect some groups more than others. Most recently, revelations about the harvesting of social media profiles for psychographic profiling to drive political campaigning and influence elections point to the impact that such data can have on individuals and societies.
Technology is a big part of the world we live in and a fixed feature of the present landscape. It is difficult to imagine a world without modern technology. Technology can be beneficial not only in our daily lives, but can also contribute positively to the realisation of our human rights – accelerating the delivery of healthcare, assisting humanitarian responses, providing eyewitness media. It is equally difficult to imagine a world where pervasive and invasive technology is allowed to negate our human rights. The challenge is how to reconcile the two. No single approach is adequate; it requires consideration of a wide variety of issues at various levels.
How to be smart in the digital age
I suggested in the workshop on ‘How to be smart in the digital age’ that on an individual level, we need to start by thinking critically about our relationship with technology. We use technology in a variety of ways. What information do we share? How comfortable are we with the way we share our information? When do we consent to sharing our information and how meaningful is that consent? What can we do? Asking the right questions comes before getting answers.
These questions raised concerns from workshop participants about third parties gaining access to data, problems with the ‘nothing to hide, nothing to fear’ perspective, and questions of how one can trust the entities we choose to share our information with. In particular, some participants highlighted that one might not even know about, much less choose, the sharing of information about oneself – alluding to the problems with informed consent. One can be affected not only by the factual information one shares but also by the inferences derived from it. There was significant concern about how little control we have over what is known about us, which raises the question of what we can do as individuals. It is difficult to dispute that one should be proactive about exercising good digital security, but fundamentally, the burden cannot and should not rest on the individual. Regulators need to respond to these evolving developments in technology, to ensure that effective mechanisms are in place and implemented so that individuals’ human rights are protected. These connect directly to the obligations and responsibilities of states and businesses for the protection of human rights.
Since cybersecurity involves the protection of data, programs, networks, and systems from attacks, Sherif’s workshop on ‘How the internet works and protecting your data’ connected directly to these questions as well. The practical steps suggested and discussed in both our workshops have been compiled in the infographic below.
Technology is a constant feature of our lives, even if it varies in form and frequency. The common issues relating to technology, cybersecurity, and human rights create conversations that bridge generation gaps and other differences. These issues are not limited to specialist knowledge or a specialist audience. Anyone can question the problems with cybersecurity, appreciate the nuances and complexities of these issues, and challenge problematic assumptions, practices, and policies. In adapting to the new challenges – and the new iterations of existing problems – brought about by rapid innovation, the guarantees of the human rights framework remain relevant and important.
Disclaimer: The views expressed herein are those of the author(s) alone.