Law enforcement

Analysing the implications of police data and technology for human rights

Predictive analytics and other data-driven policing innovations offer immense opportunities for violence prevention, crime reduction and efficiency gains. But basic human rights are rarely at the forefront of software and technology design.

Big data and law enforcement

Big data is transforming law enforcement. Predictive technologies aim to forecast crimes before they are committed, data on the location of crime shapes where police patrol, and intelligence systems can highlight offenders responsible for a disproportionately large number of offences.
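To make the mapping idea concrete, here is a minimal, hypothetical sketch of hotspot analysis: historical incident coordinates are bucketed into grid cells, and the cells are ranked by incident count to suggest patrol priorities. The data, cell size and approach are all illustrative assumptions; deployed systems use far more sophisticated models, such as kernel density estimation or self-exciting point processes.

```python
from collections import Counter

# Hypothetical historical incidents as (latitude, longitude) pairs.
incidents = [
    (51.5074, -0.1278), (51.5080, -0.1281), (51.5079, -0.1275),
    (51.5155, -0.0922), (51.5150, -0.0919),
    (51.5033, -0.1196),
]

CELL_SIZE = 0.005  # grid resolution in degrees (roughly 500 m)

def cell_of(lat, lon):
    """Map a coordinate onto a discrete grid cell."""
    return (int(lat / CELL_SIZE), int(lon / CELL_SIZE))

# Count incidents per cell and rank the 'hottest' cells.
counts = Counter(cell_of(lat, lon) for lat, lon in incidents)
for cell, n in counts.most_common(3):
    print(f"cell {cell}: {n} recorded incidents")
```

Note the feedback loop this creates: patrols go where past records are dense, and more patrols generate more records, a dynamic that matters for the bias concerns discussed below.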

In terms of violence prevention, the benefits can be immense. For example, Colombia has seen an 82% decrease in murder rates since police started using big data: mapping crimes, compiling more reliable homicide figures and gathering information on precisely how murders were committed. Knowledge at this scale allows for arguably well-informed predictions of future criminal acts, supporting the idea that many crimes can be stopped before they are carried out.

However, communities often have little or no idea of why and how digital and technological products are used to police and protect them. This raises concerns about the democratic oversight and authorisation of these applications.

The threat of human bias and discrimination

Factors used in predictive policing may include prior convictions, income level, employment status, family background, neighbourhood, education level, and the behaviour of family and friends. Because several of these elements, particularly neighbourhood and family background, correlate with race, the data can be racially encoded. This leaves it vulnerable to reproducing racial bias or discrimination, depending on how and by whom it is processed.
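To illustrate the proxy problem concretely, the toy sketch below (entirely hypothetical weights and factors, not any real system) scores 'risk' from the kinds of variables listed above. Race is never an input, yet because neighbourhood and family records correlate with race, two people with identical personal histories can still receive very different scores.

```python
# Toy, hypothetical risk score built from the factors listed above.
# Race is never an input -- but 'neighbourhood_crime_rate' and
# 'family_convictions' reflect who has historically been policed,
# so the score can encode race by proxy.
WEIGHTS = {
    "prior_convictions": 2.0,
    "unemployed": 1.0,
    "neighbourhood_crime_rate": 1.5,
    "family_convictions": 1.0,
}

def risk_score(person):
    """Weighted sum of 'risk factors' -- illustrative only."""
    return sum(WEIGHTS[k] * float(person.get(k, 0)) for k in WEIGHTS)

# Two people with identical personal records but different neighbourhoods:
a = {"prior_convictions": 0, "unemployed": 0,
     "neighbourhood_crime_rate": 0.9, "family_convictions": 2}
b = {"prior_convictions": 0, "unemployed": 0,
     "neighbourhood_crime_rate": 0.1, "family_convictions": 0}
print(risk_score(a), risk_score(b))  # 3.35 vs 0.15
```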

The threat to privacy

Often communities undergoing big data ‘tests’ are unaware that they’re being surveilled. In one of the biggest, in New Orleans, the police department used sophisticated data-mining tools from Silicon Valley firm Palantir to uncover ties to other gang members, outline extensive criminal histories and even predict future gang membership. The city council was not told, and no consent was given, as the programme was masked as a ‘philanthropic venture’. This violated the privacy of all the citizens whose data was processed. In addition, the factors mentioned previously also contribute to potential violations of privacy. For example, an innocent citizen may be called in for questioning because their cousin is a known drug dealer – information the police would have gained from big data – even though they have no criminal record or other reason to be a suspect.
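The ‘cousin’ scenario falls straight out of how link analysis works. As a hedged sketch of the general technique (not Palantir’s actual software, whose workings are not public), the snippet below runs a breadth-first search over a hypothetical association graph and flags everyone within two hops of a known suspect, which is exactly how a relative with no record ends up on a list.

```python
from collections import deque

# Hypothetical association graph: family ties, co-arrests, shared addresses.
graph = {
    "known_dealer": ["cousin", "associate"],
    "cousin": ["known_dealer", "friend_of_cousin"],
    "associate": ["known_dealer"],
    "friend_of_cousin": ["cousin"],
}

def flag_within(graph, start, max_hops):
    """Return everyone reachable from `start` in at most `max_hops` edges."""
    flagged, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, hops = frontier.popleft()
        if hops == max_hops:
            continue
        for neighbour in graph.get(node, []):
            if neighbour not in flagged:
                flagged.add(neighbour)
                frontier.append((neighbour, hops + 1))
    return flagged

# The cousin, and even the cousin's friend, is flagged despite having
# no criminal record of their own.
print(flag_within(graph, "known_dealer", max_hops=2))
```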

Judges using big data for sentencing decisions

The Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) tool is a form of artificial intelligence that purports to predict a defendant’s risk of committing another crime. COMPAS works through a proprietary algorithm, the same kind of closely guarded algorithm search engines use to rank the most relevant results from their index for a query. The system came to prominence in the US in 2013, when a man, Eric Loomis, was found driving a car that had been used in a shooting. COMPAS identified him as at high risk of re-offending, and the judge sentenced him to six years. Loomis appealed the ruling on the grounds that the judge violated due process by considering the outcome of an algorithm whose inner workings were secret and could not be examined. The appeal went up to the Wisconsin Supreme Court, which ruled against Loomis, concluding that the sentence would have been the same without consulting COMPAS. However, the court urged caution and scepticism in the algorithm’s use.
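COMPAS’s algorithm is a trade secret and cannot be reproduced here, but tools of this class are generally described as statistical models over questionnaire answers. The hypothetical sketch below, with made-up weights and inputs, illustrates only the general form such a model may take, and why a defendant who sees just the final number cannot examine what drove it.

```python
import math

# Made-up weights: the real inputs and weights of tools like COMPAS are
# proprietary, which is precisely what the Loomis appeal objected to.
WEIGHTS = {
    "age_at_first_arrest": -0.03,
    "prior_arrests": 0.25,
    "current_charge_violent": 0.8,
}
BIAS = -1.0

def recidivism_risk(answers):
    """Logistic model mapping questionnaire answers to a 0-1 'risk'."""
    z = BIAS + sum(WEIGHTS[k] * answers[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

score = recidivism_risk(
    {"age_at_first_arrest": 19, "prior_arrests": 4, "current_charge_violent": 1}
)
print(f"predicted risk: {score:.2f}")  # ~0.56 under these invented weights
```

The defendant, the defence lawyer and often the judge see only the final score; the weights stay hidden, which is the core of the due-process objection.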

The best intelligence is human

It’s widely accepted that intelligence work is the most effective form of counter-terrorism, and that the best intelligence comes from community engagement, not coercion.

For example, the arrest in 2008 of Andrew Ibrahim for intent to commit terrorism followed tip-offs from Bristol’s Muslim community. Detective work also plays the key role in identifying terrorists after attacks: despite the repeatedly broadcast surveillance-camera footage of the 7/7 bombers at Luton station, it was forensic examination of the bodies and intelligence from the missing persons helpline that identified them.

What public evidence there is on anti-terrorist investigations demonstrates the overwhelming importance of community tip-offs and informants. One of the most robust studies, an analysis of 225 individuals recruited or inspired by al-Qaeda, concluded that information from these sources initiated 76% of anti-terrorist investigations, and that “the contribution of NSA’s bulk surveillance programmes to these cases was minimal”: on the most generous interpretation of the results, bulk surveillance played an identifiable role in just 1.8% of cases, roughly four of the 225. The vital importance of traditional investigative and intelligence methods is hard to deny.

Accountability and appropriate remedies

Andrew Guthrie Ferguson, the author of The Rise of Big Data Policing: Surveillance, Race, and the Future of Law Enforcement, believes that the way to maximise the benefits of the data revolution and minimise its costs is through a constant process of public accountability. He also cautions: “Big data technologies will improve the risk-identification capacities of police but will not offer clarity about appropriate remedies.” Clarity about both risk identification and potential human rights abuses is needed so that appropriate remedies can be found: remedies that reform the software so it can be used safely without endangering anyone’s human rights, that hold those who design and deploy it accountable, and that require due diligence at every stage of the algorithm-creation process.

Our research

Through qualitative empirical research with police forces in the UK and USA, collaborative workshops, discussions and public engagement, we analyse the actual implications of police data and technology for human rights, and push for more human rights-compliant policing.

Latest Posts

  • 17 May 2018 – Police are using big data to profile young people, putting them at risk of discrimination
    Originally published in The Conversation on 16 May 2018. Amnesty International has raised a series of human rights issues in…

  • 1 December 2017 – HRBDT at Policy Roundtable on Application of Responsible AI: Opportunities for Policing
    On 17 November 2017, Pete Fussey participated in a Reform policy roundtable discussion in London on ‘Application of Responsible AI:…

  • 1 November 2017 – Artificial Intelligence, Big Data and the Rule of Law
    On 9 October 2017, Lorna McGregor participated in a debate on ‘Artificial Intelligence, Big Data and the Rule of Law’,…

  • 8 August 2017 – Vicarious Trauma of the Private Counter-Terror Workforce: Extending the Duty of Care
    Communities used to gather on street corners, sidewalks, parks and public squares. Today, social media platforms are increasingly the forum…

  • 27 July 2017 – Amnesty International’s Tanya O’Carroll on privacy & the ‘nothing to hide, nothing to fear’ argument
    I recently interviewed Tanya O’Carroll, a Technology and Human Rights advisor at Amnesty International, to discuss government surveillance and its…

  • 18 April 2017 – The Police’s Data Visibility – Part 2: The Limitations of Data Visibility for Predicting Police Misconduct
    Editor’s note: This is the second of a two-part blog post examining the potential impact of data visibility on law enforcement. In…
