Facial Recognition Technologies in the UK: one step closer to predictive policing

Update: Nov 22

‘Predictive Policing’ might sound like movie fantasy at first sight; however, it is one of the latest trends within the context of the so-called ‘Surveillance Society’.


If you believe facial recognition technologies are merely used for commercial applications, such as unlocking your electronic device or clearing automated passport checks when travelling abroad, you are probably underestimating the issue.


What is the state of the art in the United Kingdom?


First of all, let’s define the technologies involved.


Facial Recognition Technology (FRT): the process by which an individual can be identified or recognised from a digital facial image. Cameras capture these images and FRT software produces a biometric template; the individual is typically aware of the process taking place (e.g. passport control).


Live Facial Recognition (LFR) or Live Automated Facial Recognition (LAFR): this is different, and is deployed in a similar way to traditional CCTV. It is directed towards everyone in a particular area rather than at specific individuals. Data is collected in real time and potentially on a mass scale, usually with no awareness, choice, or control for the individual (e.g. public surveillance for security purposes).
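To make the distinction more concrete, here is a minimal, purely illustrative sketch of how an LFR system might screen every face captured by a camera against a police watchlist. Everything here is a hypothetical stand-in: the embed() function, the 128-dimensional template, and the 0.6 similarity threshold do not reflect any real deployment, which would rely on proprietary vendor models and non-public operational thresholds.

```python
# Illustrative sketch only: an LFR pipeline screening every captured face
# against a watchlist. embed(), the template size, and the threshold are
# hypothetical placeholders, not any vendor's actual implementation.
import numpy as np

def embed(face_image: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a face-embedding model: maps a face crop
    to a fixed-length biometric template (here, a 128-dim unit vector)."""
    rng = np.random.default_rng(abs(hash(face_image.tobytes())) % (2**32))
    v = rng.normal(size=128)
    return v / np.linalg.norm(v)

def match_against_watchlist(face_image, watchlist, threshold=0.6):
    """Return (name, score) for the watchlist entry most similar to the
    live face, if its cosine similarity exceeds the threshold; otherwise
    (None, threshold). Threshold choice drives the false-positive rate."""
    template = embed(face_image)
    best_name, best_score = None, threshold
    for name, stored_template in watchlist.items():
        score = float(template @ stored_template)  # cosine similarity of unit vectors
        if score > best_score:
            best_name, best_score = name, score
    return best_name, best_score

# Every passer-by in the camera's view is screened, not just suspects.
frame_faces = [np.zeros((112, 112, 3), dtype=np.uint8)]  # stand-in face crops
watchlist = {"person_A": embed(np.ones((112, 112, 3), dtype=np.uint8))}
for face in frame_faces:
    print(match_against_watchlist(face, watchlist))
```

Note how the similarity threshold is where misidentifications of the kind described in the case studies below originate: set it too permissively, and innocent passers-by are flagged as matches against the watchlist.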


The cultural and political implications of these technologies are beyond the scope of this article; nevertheless, I would like to raise awareness of the fundamental rights and civil liberties currently threatened by data-driven law enforcement practices.


Comparing different jurisdictions on law enforcement and security policies is far from easy, as cultural, historical, and social factors are unique variables; I will therefore provide a snapshot of the current state of affairs in the United Kingdom. Specifically, the abuse of live facial recognition for public security and surveillance purposes raises serious concerns about the protection of fundamental rights, while progressively moving law enforcement towards a ‘Predictive Policing’ model.


Case study 1: A 14-year-old Black schoolchild misidentified by live facial recognition


In May 2019, a 14-year-old Black schoolchild in his school uniform was walking down the road in London to attend his classes. He was wrongly identified by a facial recognition system installed for public safety and immediately surrounded by four undercover police officers. He was questioned, held by the arms, asked for his phone, and even fingerprinted. After just ten minutes, the police had to release him because he had been mismatched with another person. The child was visibly shocked by the experience; nevertheless, the Metropolitan Police, despite admitting that FRT and LAFR raise significant issues of gender and racial bias, have continued to use them in daily operations.


Case study 2: Innocent people with mental health problems


This case also takes place in London, on Remembrance Sunday in November 2017. The Metropolitan Police used a live facial recognition project to match dataset records of ‘fixated individuals’ - in other words, people who frequently contact public figures and are highly likely to suffer from mental illness - despite their not being suspected or wanted for any criminal activity. As a result, these individuals were ejected from the ceremony without reasonable suspicion of any unlawful behaviour, based merely on data-driven discrimination.


Full insights into both case studies are available in Big Brother Watch’s report.


What is the current legal framework offered by the English Legal System?


If you try to find a clear legal basis that could justify these practices under the rule of law, you will be quite disappointed.


This is no doubt a complex scenario. A series of issues arises from the abuse of facial recognition technologies, such as the mass screening of individuals in public spaces, violations of the right to privacy, and, most definitely, discriminatory bias between different groups of people.


Now, the first question from any lawyer would be: what legislation is currently available to justify such measures? The answer is clear: there is no straightforward legal basis for the police’s use of live facial recognition surveillance in the UK.


Note one minor but fundamental detail: a surveillance camera – as defined in the Protection of Freedoms Act 2012 – is one thing; Live Automated Facial Recognition (LAFR), to which there is no reference whatsoever, is another. Nor is such technology mentioned in the Data Protection Act 2018, which, despite its extensive application, does not provide a basis in law for its use.


Let’s move step by step, running through some recent key developments.


Following a written question from Layla Moran MP to the Home Office about legislation supporting the use of facial recognition and biometric tracking, the Minister for Policing, Nick Hurd MP, responded in September 2017, and I quote: “There is no legislation regulating the use of CCTV cameras with facial recognition”. Subsequently, the police claimed that its use was warranted as an application of the Protection of Freedoms Act 2012 and the Data Protection Act 2018; however, the