Future framework for facial recognition software in London
City Hall’s independent panel that advises on the ethics of policing has set out new guidelines on how facial recognition technology should be used by the Metropolitan Police in London.
The Met Police has carried out 10 trials using facial recognition technology across London as part of efforts to incorporate the latest technologies into day-to-day policing. Facial recognition software is designed to check people passing a camera in a public place against images on police databases.
The independent Ethics Panel has published a comprehensive final report which recommends that live facial recognition software should only be deployed by police if five conditions can be met:

- the overall benefits to public safety must be great enough to outweigh any potential public distrust in the technology;
- it can be evidenced that using the technology will not generate gender or racial bias in policing operations;
- each deployment must be assessed and authorised to ensure that it is both necessary and proportionate for a specific policing purpose;
- operators are trained to understand the risks associated with use of the software and understand they are accountable; and
- both the Met and the Mayor’s Office for Policing and Crime develop strict guidelines to ensure that deployments balance the benefits of this technology with the potential intrusion on the public.
The report was informed by research into Londoners’ views on the police’s use of live facial recognition technology. More than 57 per cent of respondents felt police use of facial recognition software was acceptable, and this figure rose to around 83 per cent when they were asked whether they supported using the technology to search for serious offenders. Although half of the respondents thought the use of this software would make them feel safer, more than a third raised concerns about the impact on their privacy.
Suzanne Shale, who chairs the London Policing Ethics Panel, said: “Our report takes a comprehensive look at the potential risks associated with the Met’s use of live facial recognition technology. Given how much of an impact digital technology can have on the public’s trust in the police, ensuring that the use of this software does not compromise this relationship is absolutely vital.
“To reduce the risks associated with using facial recognition software, our report suggests five steps that should be taken to make sure the relationship between the police and the public is not compromised. We will be keeping a close eye on how the use of this technology progresses to ensure it remains the subject of ethical scrutiny.”