Police use of live facial recognition technology found unlawful
Country: United Kingdom
Forum: Court of Appeal
This case concerned whether the use of live automated facial recognition technology by a police force in the United Kingdom was lawful.
The police force deployed surveillance cameras that would capture digital images of members of the public. These digital images were then processed and compared with digital images of individuals on a “watchlist” that had been compiled by the police force.
The case was taken by a civil liberties campaigner, Edward Bridges, with the support of the well-known civil liberties organisation Liberty.
Edward Bridges argued that the use of live automated facial recognition technology was not compatible with the right to respect for private life under the European Convention on Human Rights and violated data protection and equality legislation.
The High Court rejected Edward Bridges’ arguments.
The High Court decision was then appealed to the Court of Appeal, which found that the High Court had been incorrect in some of its findings. The Court of Appeal concluded that the legal framework for the police use of automated facial recognition technology was insufficient, because it did not set out guidance on how to determine who would be included in watchlists and where the technology would be deployed. This left too much discretion to police officers. The Court of Appeal also found that the police had failed to comply with procedural requirements under data protection law and equality legislation.
Between May 2017 and April 2019, the South Wales Police in the United Kingdom deployed automated facial recognition technology on around 50 occasions under a trial programme. The technology was deployed at large public events.
Edward Bridges, a civil liberties campaigner, believed his face had been scanned by this automated facial recognition technology on two occasions. The first was during a deployment on a shopping street in Cardiff city centre, and the second was during a protest outside the Defence, Procurement, Research, Technology and Exportability Exhibition at the Motorpoint Arena, Cardiff.
On both occasions, he had been unaware that automated facial recognition technology was being used until he was in close proximity to police vans equipped with the technology. It was not possible for the police to verify whether Edward Bridges’ face had, in fact, been scanned using the technology.
How automated facial recognition technology works
This technology, in simple terms, is a way of assessing whether two facial images depict the same person. It achieves this through the following process:
Watchlist: automated facial recognition technology needs a database of existing facial image data against which to compare new facial images and the measurements of facial features contained in them. The watchlist used by the police in this case was such a database, and the facial images in this database were processed so that the facial features of those in the images were extracted and expressed as numerical values.
Taking a Facial Image: surveillance cameras capture digital pictures/footage containing facial images in real time.
Detecting a Face: software is then used to detect and isolate individual human faces.
Comparing Faces: software then compares the extracted measurements of facial features with those contained in the watchlist.
Matching: when the facial features of two images are compared, the software generates a “similarity score.” The higher this score, the greater the likelihood that there is a positive match between two faces. A threshold value is applied to determine when the software will flag that a “match” has occurred. This threshold is generally suggested by the manufacturer and depends on the intended use of the software.
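The comparison and matching steps described above can be sketched in code. This is a minimal illustration only: the feature vectors, the cosine-similarity measure, and the threshold value are all assumptions made for the example, since the internals of the software actually used were not disclosed.

```python
import math

# Hypothetical threshold. In practice this value is suggested by the
# manufacturer and depends on the intended use of the software.
MATCH_THRESHOLD = 0.6

def similarity_score(features_a, features_b):
    """Cosine similarity between two facial-feature vectors.

    Higher scores indicate a greater likelihood that the two faces match.
    """
    dot = sum(a * b for a, b in zip(features_a, features_b))
    norm_a = math.sqrt(sum(a * a for a in features_a))
    norm_b = math.sqrt(sum(b * b for b in features_b))
    return dot / (norm_a * norm_b)

def find_matches(candidate, watchlist):
    """Flag every watchlist entry whose score meets the threshold."""
    return [
        (name, score)
        for name, features in watchlist.items()
        if (score := similarity_score(candidate, features)) >= MATCH_THRESHOLD
    ]

# Example: one captured face compared against a two-entry watchlist.
# The names and numeric values are invented for illustration.
watchlist = {
    "person_a": [0.9, 0.1, 0.3],
    "person_b": [0.1, 0.8, 0.5],
}
candidate = [0.88, 0.12, 0.31]
matches = find_matches(candidate, watchlist)
```

Here the candidate vector is close to "person_a", so only that entry is flagged; "person_b" scores well below the threshold and is ignored.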
Police use of automated facial recognition technology
The South Wales Police deployed this technology by mounting CCTV cameras on police vehicles, poles or posts, so as to capture images of the face of anyone passing within range of the cameras.
Watchlists were created from images held on a database maintained by the police force as part of their general activities, and they would be curated specifically for each deployment of automated facial recognition technology.
If, during the deployment of the technology, a possible match had been identified by the software, the two images would be reviewed by the “system operator” (who was a police officer). This person was to establish whether they believed that a match had been correctly made. If, after reviewing the images, the police officer did not believe a correct match had been made, no further action would be taken. If, however, they believed there was a correct match, other officers stationed nearby would be notified and they would intervene (e.g. by asking to speak to the person concerned).
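The human-in-the-loop review described above can be expressed as a short sketch. The function and value names are hypothetical; the point is that software output alone never triggered an intervention without a police officer's confirmation.

```python
def handle_possible_match(captured_image, watchlist_image, operator_confirms):
    """Review step for a software-flagged match.

    operator_confirms is a callable standing in for the judgement of the
    "system operator" (a police officer), who reviews both images before
    any action is taken.
    """
    if operator_confirms(captured_image, watchlist_image):
        # Officers stationed nearby are notified and may intervene,
        # e.g. by asking to speak to the person concerned.
        return "notify_nearby_officers"
    # If the officer does not believe a correct match was made,
    # no further action is taken.
    return "no_further_action"
```

For example, an operator rejecting the match results in no further action, however high the software's similarity score was.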
There was no official limit to the number of persons whose faces could be scanned by the technology during any given deployment. Instead, the police sought to process the data of as many individuals as possible. The overwhelming majority of these individuals were not suspected of any wrongdoing and were not otherwise of interest to the police.
If no match had been made in relation to an individual whose face had been scanned, their image and facial biometrics were immediately and automatically deleted. That data was not accessible to any police officers. If a match had been made, the biometric template of that individual’s face would be immediately and automatically deleted, but their facial image would be retained for a maximum of 24 hours. A report of the match, which would include personal information on the individual, would be retained for 31 days.
The CCTV feed from a deployment would be retained for 31 days after the deployment before it would be automatically deleted. The watchlist used during a deployment would be held for a maximum of 24 hours after the deployment.
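The retention rules described in the two paragraphs above can be summarised as a simple schedule. The category names are invented labels for the data types described in the text; a zero period stands for immediate and automatic deletion.

```python
from datetime import datetime, timedelta

# Retention periods as described in the deployment policy.
# timedelta(0) means the data is deleted immediately and automatically.
RETENTION = {
    "unmatched_image": timedelta(0),        # no match: image deleted at once
    "unmatched_biometrics": timedelta(0),   # no match: biometrics deleted at once
    "matched_biometrics": timedelta(0),     # match: biometric template deleted at once
    "matched_image": timedelta(hours=24),   # match: facial image kept up to 24 hours
    "match_report": timedelta(days=31),     # report with personal information: 31 days
    "cctv_feed": timedelta(days=31),        # full CCTV feed of the deployment: 31 days
    "watchlist": timedelta(hours=24),       # deployment watchlist: up to 24 hours
}

def must_delete(category, created_at, now):
    """True once the retention window for this data category has passed."""
    return now >= created_at + RETENTION[category]
```

So, for instance, a matched individual's facial image created at the start of a deployment must be deleted 24 hours later, while the report of the match may persist for 31 days.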
When the police used the technology at an event, they would take steps to inform members of the public about its use in the area. They would do this through social media posts, large signs in the vicinity of the cameras, and leaflets that were handed out to members of the public at events. While deployment was never covert, it was reasonable to presume that large numbers of people whose faces had been scanned by the technology would have been unaware that it had taken place.
Edward Bridges took a case challenging the use of automated facial recognition technology against him on two occasions, as well as the ongoing use of the technology in his area, on the basis that it violated the rights to respect for private life, freedom of expression, and freedom of assembly. He also maintained that it breached data protection and equality legislation. Although it was not possible to verify whether Edward's face had been scanned by the technology, all the parties were willing to assume that it had.
The case was first heard by the High Court, which rejected Edward’s claims. Although it found his right to privacy had been engaged by the use of the technology, it concluded that such an interference was justified under human rights law. It reasoned that the use of the technology was “in accordance with the law,” and was a proportionate interference with his right to privacy.
It also found that, although deployment of the technology involved sensitive processing of biometric personal data, there had not been a breach of data protection law, and that the police had adequately assessed the possible discriminatory impact of the technology.
The case was then appealed to the Court of Appeal. The Court of Appeal was tasked with examining whether the High Court had made any errors in its findings.
Article 8 of the European Convention on Human Rights
1. Everyone has the right to respect for his private and family life, his home and his correspondence.
2. There shall be no interference by a public authority with the exercise of this right except such as is in accordance with the law and is necessary in a democratic society in the interests of national security, public safety or the economic well-being of the country, for the prevention of disorder or crime, for the protection of health or morals, or for the protection of the rights and freedoms of others.
Section 64 of the Data Protection Act 2018 - Data Protection Impact Assessment
1. Where a type of processing is likely to result in a high risk to the rights and freedoms of individuals, the controller must, prior to the processing, carry out a data protection impact assessment.
3. A data protection impact assessment must include the following —
(b) an assessment of the risks to the rights and freedoms of data subjects;
(c) the measures envisaged to address those risks;
Section 149 of the Equality Act 2010 - Public Sector Equality Duty
1. A public authority must, in the exercise of its functions, have due regard to the need to—
(a) eliminate discrimination, harassment, victimisation and any other conduct that is prohibited by or under this Act;
(b) advance equality of opportunity between persons who share a relevant protected characteristic and persons who do not share it;
(c) foster good relations between persons who share a relevant protected characteristic and persons who do not share it.
There was no dispute on the finding by the lower court that the right to privacy had been engaged in this case. It was evident that the use of automated facial recognition technology entailed the processing of biometric data, enabling the unique identification of individuals with some accuracy.
Instead, there was a question as to whether the lower court was correct in finding that the interference with the right to respect for private life was justified under human rights law. This involved an assessment of whether the interference was “in accordance with the law,” in pursuit of a legitimate aim, and was proportionate.
Sufficient legal framework
The Court first considered whether the lower court was correct in finding that the interference with the right to privacy occasioned by the use of automated facial recognition technology was “in accordance with the law.” To be “in accordance with the law,” the interference must not just have some basis in domestic law. The legal basis itself must be accessible and afford adequate legal protection against it being used arbitrarily.
The Court agreed with many of the findings of the lower court on the legal framework, which consisted of the Data Protection Act, the Surveillance Camera Code of Practice and the South Wales Police’s local policies. However, it found this legal framework insufficient to constitute “law” under Article 8(2) of the European Convention on Human Rights due to two deficiencies.
In reaching this conclusion, the Court was willing to take a “relativist” approach to the question of whether a legal framework was sufficient under Article 8(2) of the European Convention on Human Rights: the more intrusive the measure, the more precise and specific the law had to be to justify it. In determining the level of intrusiveness, the Court noted that the deployment involved the use of a novel technology that captured the personal data of members of the public on a large scale (where the majority of individuals were of no interest to the police) and concerned the automated processing of biometric data.
The two deficiencies that the Court identified in the legislative framework were the lack of criteria or guidance on (i) who can be placed on a watchlist and (ii) in what locations the technology could be deployed. This left too much discretion to individual police officers when deploying the technology.
The Court also stated that it hoped that the automatic deletion of the data of individuals where there was no match would be set out in the policy documents as a requirement, rather than simply a feature of the system.
The Court then looked at whether the lower court was correct in finding that the interference with the right to privacy occasioned by the use of the facial recognition technology was proportionate.
In finding that a fair balance had been struck between the interference with Edward Bridges’ right to privacy and the interests of the community, the lower court took into account that the technology:
was deployed in an open and transparent way with public engagement;
was used for a limited time and covered a limited footprint;
was deployed for the specific and limited purpose of seeking to identify particular individuals who may have been in the area and whose presence was of justifiable interest to the police; and
was not used to wrongly arrest anyone and was not used in a way that disproportionately interfered with anyone’s right to respect for private life.
The lower court concluded that the interference with Edward’s right to respect for private life was limited to “the near instantaneous algorithmic processing and discarding of” his biometric data.
It was disputed on appeal whether the lower court erred in only taking into account the impact of the deployment of the technology on Edward’s privacy, rather than also taking into account the impact on all other members of the public who would have been at the locations where the technology had been used.
The Court of Appeal dismissed this argument. In doing so, it opined that the impact on the right to respect for private life of each of the other members of the public who were in an analogous situation to Edward Bridges was as negligible as the interference with his right. The Court reasoned that “[a]n impact that has very little weight cannot become weightier simply because other people were also affected.”
Data Protection Impact Assessment
The Court then went on to consider whether the lower court was correct to find that the South Wales Police’s Data Protection Impact Assessment was compliant with the requirements of data protection law.
The Court noted that the Data Protection Impact Assessment was flawed, as it proceeded on the basis that the right to respect for private life was not engaged or infringed by the use of the facial recognition technology. Taking into account the deficiencies in the legislative framework identified by the Court of Appeal, the Data Protection Impact Assessment had failed to assess the risks arising from these deficiencies or to set out the measures envisaged to address them.
Public Sector Equality Duty
Finally, the Court considered whether the lower court was correct in finding that the South Wales Police’s Equality Impact Assessment complied with the Public Sector Equality Duty under the Equality Act 2010.
The Court was referred to existing evidence that facial recognition software can be biased and create a greater risk of false identification in the case of people from black, Asian and minority ethnic backgrounds, as well as in the case of women. However, it was not clear whether the technology used by South Wales Police had this effect.
The Court noted that the Public Sector Equality Duty was a duty of process rather than outcome. Good processes are more likely to lead to better-informed, and therefore better, outcomes. Furthermore, whatever the outcome, good processes help to make public authorities accountable to the public. The Court observed that the Public Sector Equality Duty helped “to reassure members of the public, whatever their race or sex, that their interests have been properly taken into account before policies are formulated or brought into effect.”
It went on to state that the duty required that public authorities take reasonable steps to make enquiries about what may not yet be known to them about the potential impact of a proposed policy on people with relevant characteristics, such as race or sex.
The South Wales Police tried to rely on the fact that there was human review in the process of using facial recognition technology, to help ensure false positives did not lead to police intervention. However, the Court found this to be irrelevant to the question of whether the Public Sector Equality Duty had been met.
The Court acknowledged that there was no evidence before it that the technology used by South Wales Police had any bias on racial or gender grounds. Nonetheless, the Court also found that the South Wales Police had never sought to satisfy themselves, either directly or through independent verification, that the software in this case did not have such bias.
For example, checking the technology for racial or gender bias would require knowing the racial or gender profiles of the total number of people whose faces were captured by it, yet that data was almost immediately deleted. The Court also noted expert testimony highlighting the difficulty South Wales Police would have in confirming whether the technology was biased when they did not have statistics on the database used to train the system.
The Court noted that the company that produced the technology would not divulge details of how the technology worked for commercial confidentiality reasons. Although this may have been understandable, the Court concluded that it did not enable a public authority to discharge its Public Sector Equality Duty.
The Court hoped that, as automated facial recognition technology is a novel and controversial technology, “all police forces that intend to use it in the future would wish to satisfy themselves that everything reasonable which could be done had been done in order to make sure that the software used does not have a racial or gender bias.”
The Court of Appeal declared that the use of automated facial recognition technology was not in accordance with the right to respect for private life under the European Convention on Human Rights, and that the South Wales Police failed to comply with their legal obligations in relation to the Data Protection Impact Assessment and the Public Sector Equality Duty. The South Wales Police have since stated they would not seek to appeal this decision.