Facial Recognition Technology: A tool for 21st century policing


Suzanne Gallagher, Associate, BCL Solicitors LLP, explains the current legal position on the balance between human rights, privacy and the use of biometric technology by the police and other groups

Facial Recognition Technology (‘FRT’) is increasingly being adopted by law enforcement, equipping police officers with digital tools that were previously only imagined in science fiction.

To many people, police use of modern technology to tackle serious crime, such as terrorism or child abduction, is a ‘no-brainer’. To others, the adoption of these technologies without a robust regulatory framework has thrown up serious concerns over privacy, fairness and human rights.

What is algorithmic policing?

Algorithmic policing technology falls into two broad categories: predictive technology and surveillance technology. FRT falls into the latter category, automating the collection and analysis of data. A related example is Emotion Recognition Technology (‘ERT’), a form of digital phrenology which analyses facial expressions to try to decode an individual’s mood and – apparently – their intentions. ERT is thought to be used to detect the state of mind of members of the Uyghur population in the Xinjiang Province of China, and Lincolnshire Police are reported to have recently received UK Government funding to trial the controversial technology in their area.
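At its core, FRT reduces a face captured on camera to a numerical ‘template’ (or embedding) and compares it against the templates of people on a watchlist, raising an alert when the two are sufficiently similar. The sketch below is purely illustrative – the names, embedding values and threshold are hypothetical and do not describe any particular vendor’s or force’s system – but it shows the matching step on which live FRT deployments rest.

```python
# Illustrative sketch only: how a live FRT system might compare a detected face
# against a police "watchlist". All names, values and the threshold are hypothetical.
import math

def cosine_similarity(a, b):
    """Similarity between two face embeddings (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical watchlist: person -> embedding produced by a face-recognition model.
watchlist = {
    "person_A": [0.12, 0.80, 0.55],
    "person_B": [0.90, 0.10, 0.40],
}

ALERT_THRESHOLD = 0.95  # Hypothetical; real systems tune this to trade off false alerts.

def check_face(detected_embedding):
    """Return any watchlist entries whose similarity exceeds the alert threshold."""
    return [
        (name, cosine_similarity(detected_embedding, ref))
        for name, ref in watchlist.items()
        if cosine_similarity(detected_embedding, ref) >= ALERT_THRESHOLD
    ]

# A face captured from CCTV, already converted to an embedding (hypothetical values).
print(check_face([0.11, 0.82, 0.54]))  # Flags person_A as a likely match.
```

Much of what matters legally happens around that threshold: set it lower and more genuine matches are caught but more innocent passers-by are wrongly flagged; set it higher and the reverse is true.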

From identifying missing individuals to solving ‘cold cases’ by scanning CCTV footage, law enforcement is reaping the benefits of FRT. However, its rapid development and deployment is also causing unease. US studies have shown that some FRT algorithms can be up to 34% less accurate in recognising non-Caucasians than Caucasians.(1) When relations between the police and some ethnic groups are fragile, the risk of misidentification and miscarriages of justice, and the consequential erosion of confidence in law enforcement, is acute.
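The practical consequence of differential accuracy is easiest to see with a back-of-the-envelope calculation. The figures below are invented for illustration and are not drawn from the study cited at note (1); they simply show how a higher false-match rate for one group translates directly into more innocent members of that group being wrongly flagged.

```python
# Hypothetical arithmetic only: how differential false-match rates translate into
# differential misidentification risk. The rates and crowd size are invented for
# illustration, assuming equal numbers from each group pass the camera.
faces_scanned_per_group = 50_000      # faces passing a camera during one deployment
false_match_rate_group_1 = 0.001      # 0.1% false-match rate for one demographic group
false_match_rate_group_2 = 0.003      # three times higher for another group

for label, rate in [("group 1", false_match_rate_group_1),
                    ("group 2", false_match_rate_group_2)]:
    expected_false_alerts = faces_scanned_per_group * rate
    print(f"{label}: ~{expected_false_alerts:.0f} innocent people wrongly flagged")
```

Even a fraction of a percentage point of difference, multiplied across tens of thousands of faces scanned in a single deployment, produces a markedly unequal burden of misidentification.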

As algorithmic policing has become more prevalent, legislators have scrambled to keep up, adopting different approaches to its regulation. At one extreme is China, where FRT and ERT have been swiftly taken up and are now so pervasive that the country arguably borders on a surveillance state, with minor offenders identified and punished and whole communities tracked and incarcerated. At the other end of the spectrum, San Francisco and other US cities have banned law enforcement from using FRT entirely.

In the UK, a ground-breaking case was brought by Liberty against South Wales Police for its deployment of automated facial recognition.(2) The case gave the Court of Appeal an opportunity to examine the sufficiency of the applicable legal framework (the Data Protection Act 2018, the Surveillance Camera Code of Practice and local police policies) to justify interference with the right to private and family life, the proportionality of that interference, and its compatibility with equalities legislation. Effectively laying down ground rules for future FRT use (and potentially for wider algorithmic policing methodology), the Court held that too much discretion should not be left to individual police officers as to who is placed on ‘watchlists’ and where cameras are located.

The European Commission has recently unveiled its much-anticipated proposal for an Artificial Intelligence Act (‘AI Act’), which, when eventually passed, will be directly applicable in all EU Member States and will exert significant extra-territorial pull (including in the UK if the country is not to jeopardise its dearly sought-after EU data adequacy agreement). Under the AI Act, ‘real time’ remote biometric identification of individuals in publicly accessible spaces for law enforcement purposes will be deemed so intrusive that it will be banned except where its use is strictly necessary for a number of specific purposes, such as targeted searches for missing children. In these situations, the use of the technology must be necessary and proportionate. It must also be expressly and specifically pre-authorised by a judicial or independent administrative authority, except in cases of extreme urgency where retrospective authorisation must be sought.

Away from legislatures and courts, the seemingly straightforward demarcation between public and private use of algorithmically-generated data is less clear. For example, where it is strictly necessary to achieve an important public interest, the police may provide private operators of FRT with a ‘watchlist’ of persons of interest to law enforcement; equally, when private operators detect a person of interest using such technology, they may invite police intervention to apprehend those identified. As quasi-public spaces proliferate, a number of private owners are installing algorithmically-driven biometric systems for efficiency or public safety purposes, providing the police with more opportunities to co-operate with private operators of the technology in their fight against crime. In such collaborations, though, lies a potential gap in the applicable legal framework. While private operators of such systems in the UK must comply with their obligations under the UK GDPR and applicable provisions of the Data Protection Act 2018, they are not bound by the strict rules described by the Court of Appeal in the South Wales case for police use of such technology.

In Europe, too, private operators must comply with the GDPR and any applicable national laws, but the forthcoming AI Act will permit them to use ‘high-risk’ artificial intelligence, leaving open the possibility that law enforcement could tap into privately-held algorithmic data in future public-private collaborations.

Conclusion

Privacy campaign groups see FRT and similar technology as a dangerous incursion into our privacy. Public opinion seems more relaxed, particularly when it is weighed against the risk of terrorism or child abduction.

Algorithms – like humans – are flawed and mistakes will inevitably happen; it is incumbent on those developing and using the technology to take every precaution to reduce as far as possible the risk that such mistakes could lead to injustice. Rushing headlong into an age of high-tech policing without the protection of a widely accepted legal and regulatory framework could jeopardise confidence in law enforcement and the wider criminal justice system.

 

(1) https://sitn.hms.harvard.edu/flash/2020/racial-discrimination-in-face-recognition-technology/

(2) R (Bridges) v Chief Constable of South Wales Police [2020] EWCA Civ 1058
