Feature

Surveillance

Facial Recognition Technology in the fight against terror

By Tony Porter, former Surveillance Camera Commissioner for England and Wales and current Chief Privacy Officer at Corsight AI

In his most recent review of London's preparedness for a major terrorist attack, published in March 2022, security expert Lord Toby Harris deemed London "significantly better prepared for a terrorist attack" than it was in 2016. Despite the encouraging progress, Lord Harris notes that over the last decade the threat of terrorism has shifted from organised groups of attackers to lone extremists acting independently. Notably, police believe that the perpetrator of the 2017 Manchester bombing acted alone at the time of the attack. Identifying a single suspect in a crowd is like finding a needle in a haystack, and it is a task to which the human eye remains ill-suited. Technologies like Facial Recognition and Visual Search have long been at the forefront of the counterterrorism conversation, with the Met beginning operational use of Live Facial Recognition in 2020.

Beyond comparing subjects with existing criminal databases, advancements in AI allow surveillance systems to run visual searches that monitor patterns of irregular behaviour: for example, someone leaving a bag unattended for an extended period, or returning to a site repeatedly to take photographs. These alerts can then form the basis for action, such as notifying officers to conduct a stop and search. Used in this way, facial recognition can play a key role in improving the efficiency of law enforcement in identifying and pre-empting potential terror attacks.
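
To make this concrete, the minimal Python sketch below shows the kind of rule that could sit on top of an object tracker's output and flag a bag left unattended beyond a time threshold. The event format, field names and two-minute threshold are illustrative assumptions, not any vendor's actual interface.

```python
from dataclasses import dataclass

# Hypothetical detection events, as might be produced upstream by an object tracker.
@dataclass
class Detection:
    track_id: int      # persistent identifier assigned by the tracker
    label: str         # object class, e.g. "bag" or "person"
    timestamp: float   # seconds since the start of the video stream
    attended: bool     # whether a person is within a set distance of the object

UNATTENDED_SECONDS = 120  # illustrative threshold: two minutes

def find_unattended_bags(events: list[Detection]) -> set[int]:
    """Return track IDs of bags left unattended for longer than the threshold."""
    first_unattended: dict[int, float] = {}
    flagged: set[int] = set()
    for event in sorted(events, key=lambda e: e.timestamp):
        if event.label != "bag":
            continue
        if event.attended:
            first_unattended.pop(event.track_id, None)  # reset: someone is nearby
            continue
        start = first_unattended.setdefault(event.track_id, event.timestamp)
        if event.timestamp - start >= UNATTENDED_SECONDS:
            flagged.add(event.track_id)  # candidate for a human operator to review
    return flagged

# Example: bag 7 is attended only until t=10s, then sits unattended past the threshold.
events = [Detection(7, "bag", float(t), attended=(t < 10)) for t in range(0, 200, 5)]
print(find_unattended_bags(events))  # {7}
```

Note that the output here is only a flag for review; what happens next is a decision for a trained operator, not the software.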

Outdated concerns
Unsurprisingly, the deployment of Live Facial Recognition (LFR) by law enforcement has stoked furious opposition from campaign groups over concerns around the technology's accuracy, privacy and bias. In September last year, fourteen organisations including Big Brother Watch, Liberty and Black Lives Matter UK wrote an open letter to Metropolitan Police Commissioner Sir Mark Rowley requesting an end to the police force's use of facial recognition technologies. The letter characterises the technology as "privacy-eroding, inaccurate and wasteful."

Concerns around inaccuracy, bias across gender and ethnic groups, and the violation of privacy rights have historically taken centre stage in the public discourse around Facial Recognition Technology (FRT). Yet these outdated narratives disregard FRT's advancements in accuracy and bias elimination. They also fail to recognise the important checks and balances built into the way the technology is deployed.

Most mature facial recognition solutions on the market prioritise privacy and ethics, providing recommendations to governments on further legislation that can help ensure FRT's proportionate and responsible use. Furthermore, nobody has ever been arrested or charged based simply on the decision of a machine. There is always a 'human in the loop' to evaluate any potential matches flagged by the software, and this individual is accountable for the decision on what happens next, just as has always been the case for any police officer investigating any type of crime.

Individual views will differ on the exact circumstances in which FRT should be used. However, we should keep in mind that there is widespread public support for its use in counterterrorism operations, where it can save numerous innocent lives.

Facial recognition is highly accurate
In its most recent independent analysis of leading AI facial recognition software, the National Institute of Standards and Technology (NIST) observed unprecedented performance, with a variance of just 0.0001 between race and gender groups. To put this number into perspective, the acceptable false capture rate for Automatic Number Plate Recognition by UK law enforcement runs at ±3% under nationally accepted standards: a tolerance some three hundred times larger than the demographic variance NIST measured.

Ultimately, FRT is a tool at our disposal to help us filter through vast amounts of information, but it should not be the sole deciding factor when identifying suspects. Nearly all sensitive technologies apply a dual verification process; by designing FRT protocols around the 'human in the loop' described above, an operator can exercise judgement when reviewing FRT matches, with human and machine together compensating for each other's biases. Vendors and distributors of FRT must work closely with counterterror agencies to ensure that operatives are adequately trained to spot instances of bias, while taking the appropriate measures to safeguard privacy rights.
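
As a simple illustration of what such a protocol can look like in software, the sketch below routes a candidate match through an operator, whose judgement is the deciding step. The match structure, threshold and function names are assumptions made for illustration, not any vendor's actual API.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical output of a face-matching engine: a candidate and its score.
@dataclass
class Match:
    candidate_id: str   # identifier on the watch list
    similarity: float   # 0.0 to 1.0, as reported by the matcher

REVIEW_THRESHOLD = 0.90  # illustrative: weaker matches are never shown to anyone

def triage(match: Match, operator_confirms: Callable[[Match], bool]) -> str:
    """The software only ever flags; a trained operator makes the decision."""
    if match.similarity < REVIEW_THRESHOLD:
        return "discarded"          # below threshold, dropped automatically
    if operator_confirms(match):    # human judgement is the deciding step
        return "escalated"          # e.g. officers are notified
    return "dismissed by operator"  # the human overrules the machine

# Example: the operator reviews a strong match and rejects it.
print(triage(Match("subject-42", 0.95), operator_confirms=lambda m: False))
```

The design point is that even the system's highest-confidence output is only a flag; escalation requires an affirmative human decision.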

Reframing the debate
The public is increasingly aware of unethical and disreputable nation states seeking to dominate the AI market; we need look no further than what is happening with CCTV cameras from China to understand the importance of trust and ethics in surveillance. However, ascribing the toxic human qualities of racism and intrusion to FRT unfairly stigmatises the technology behind it. The fact remains that the speed and accuracy of FRT have come a long way since its inception, and these advancements show no sign of slowing down.

Following the horrific events of the 2017 Manchester bombing, a new piece of legislation known as the Protect Duty, which imposes an obligation on the public and private sectors to assess and mitigate the risk of terror attacks, is soon to come into force. Judicious application of video surveillance technology can help fulfil this duty by acting as a filter for law enforcement, drawing attention to patterns of suspicious behaviour that warrant investigation. Given the potential of FRT to prevent avoidable loss of life, it seems irresponsible to dismiss such innovations on the basis of concerns that are addressable through legal safeguards and processes.

Safeguarding privacy
It is essential that privacy remains a top priority when developing and using FRT. Yet, as with any other policing measure, legislative safeguards around legality, necessity and proportionality can be implemented to guarantee citizens' rights and wellbeing. In the 2020 case R (Bridges) v Chief Constable of South Wales Police, the Court of Appeal recognised a sufficient legal framework within the UK legal system to enable the use of LFR. The court also stipulated requirements for the lawful use of LFR by police: adherence to the Public Sector Equality Duty, continuous oversight of the force's Data Protection Impact Assessment (DPIA), and careful management of the watch list and the positioning of cameras. Such rulings are essential in establishing the precedent and rules around the use of FRT.

The FRT sector welcomes such international standards and government oversight. With NIST and international standards organisations currently working to harmonise their approaches to Trustworthy AI, the onus now falls upon lawmakers and regulators to establish certainty: to determine what FRT should enable in society and who should be able to use it, and to construct the rules (laws) that enable and constrain such use and hold it to account. Only then can FRT surveillance fulfil its true potential, not just in safeguarding our physical wellbeing, but also our rights and peace of mind.
