Facial Recognition: Facing up to terrorism

The EU Commission intends to define rules for the public use of artificial intelligence (AI) by setting precise boundaries for systems which may present risks for the rights to data protection and privacy.

AI systems intended for remote biometric identification of persons in public places will be considered high risk and will be subject to a third-party conformity assessment, including documentation and human oversight requirements by design. However, there will be ‘serious’ exceptions to these restrictions, such as terrorism investigations, searches for missing children and public safety emergencies, where officers need to take urgent action.

However, there are still misconceptions about the validity of facial recognition —both the technology itself and its deployment— which may prompt some police authorities and organisations to limit or withdraw the use of Facial Recognition Technology (FRT) altogether. This article explores the use of FRT to combat terrorism, and how organisations operating in the supply chain ought to develop and utilise FRT to enhance public confidence and align with the EU’s new proposals.

The need for FRT to combat terrorism
In a hostile world, terrorism risks are increasing. These risks pose a significant threat not only to national security, but to political and social stability and economic development. Facial recognition solutions can play a key role in improving the efficiency with which police forces, intelligence agencies and organisations respond to and prevent major attacks, in a way that minimises intrusiveness for citizens.

In general, FRT is a biometric surveillance aid which uses a camera to capture an image of an individual’s face, mainly in densely populated places such as streets, shopping centres and football arenas. It then produces a similarity score by comparing the captured facial image with images held in a criminal database. If a match is made, an alarm prompts the security operator to make a visual comparison. The operator can then verify the match and radio police officers to conduct a stop, if one is needed. It is important to note that the technology does not establish individual ‘identity’ – that is the job of humans.
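The matching step described above can be sketched in a few lines. This is a minimal illustration, not a real FRT pipeline: it assumes faces have already been converted into numeric feature vectors, and the watchlist identifiers, vectors and alert threshold are all hypothetical values chosen for the example.

```python
import math

ALERT_THRESHOLD = 0.75  # illustrative value; real systems tune this per deployment


def cosine_similarity(a, b):
    """Similarity score between two face feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


def score_against_watchlist(probe, watchlist):
    """Return the best-matching watchlist entry and its similarity score."""
    best_id, best_score = None, -1.0
    for person_id, reference in watchlist.items():
        score = cosine_similarity(probe, reference)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id, best_score


# Hypothetical watchlist of pre-computed face vectors and one captured probe image.
watchlist = {"suspect-001": [0.9, 0.1, 0.4], "suspect-002": [0.2, 0.8, 0.5]}
probe = [0.88, 0.12, 0.41]

who, score = score_against_watchlist(probe, watchlist)
if score >= ALERT_THRESHOLD:
    # In a deployed system this would raise an alarm for the operator,
    # who performs the visual comparison before any action is taken.
    print(f"Alert operator: candidate {who}, similarity {score:.2f}")
```

Note that the system only emits a score and a candidate; as the article stresses, confirming identity remains a human decision.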

Moreover, many terrorists are not known to any database and can move around populated spaces largely unnoticed. With the advancement of AI, these surveillance systems can now monitor patterns of irregular behaviour, such as someone leaving a bag unattended for a long period of time or returning to a site regularly to take photographs. This information can then be used as the basis on which to perform actions, e.g. to notify officers to conduct a stop and search or to record the footage. Security officers need access to this type of intelligence to secure the perimeter of their facilities and, ultimately, save lives.
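The unattended-bag rule mentioned above can be illustrated as a simple timer over tracked-object sightings. This is a hedged sketch only: the sighting format, the object identifiers and the five-minute threshold are all assumptions for the example, not a description of any vendor's actual system.

```python
UNATTENDED_LIMIT = 300  # seconds; hypothetical policy threshold (5 minutes)


def find_alerts(sightings):
    """Flag objects seen unattended continuously for longer than the limit.

    Each sighting is a tuple: (object_id, timestamp_seconds, is_attended).
    """
    first_unattended = {}  # object_id -> timestamp it was first seen unattended
    alerts = set()
    for obj_id, ts, attended in sightings:
        if attended:
            first_unattended.pop(obj_id, None)  # owner present: reset the clock
            continue
        start = first_unattended.setdefault(obj_id, ts)
        if ts - start >= UNATTENDED_LIMIT:
            alerts.add(obj_id)
    return alerts


# Hypothetical feed: bag-7 sits unattended past the limit; case-2's owner returns.
sightings = [
    ("bag-7", 0, False), ("bag-7", 150, False), ("bag-7", 320, False),
    ("case-2", 0, False), ("case-2", 100, True),
]
print(find_alerts(sightings))  # only bag-7 exceeds the limit
```

In practice such an alert would be routed to a security officer for assessment, in the same way a face-match alarm is.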

Privacy remains a top priority
It is essential, in this particular case, that privacy remains a top priority when utilising FRT. Privacy advocates and AI sceptics suggest FRT can be hijacked for nefarious purposes, including unlawful surveillance. But what these sceptics may not be aware of are the measures that can be put in place to ensure the privacy of individuals captured by Automatic Facial Recognition (AFR) cameras is protected. These can include anonymising passers-by in the camera’s field of view without obscuring their movements, or encrypting original footage so that only authorised users can access sensitive data.
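The bystander-anonymisation idea can be sketched as a redaction pass over a video frame. This is an illustrative toy, assuming a frame is a 2D grid of pixel values and detected faces arrive as (row, col, height, width) boxes; real systems would blur or pixelate regions rather than blanking them.

```python
def redact_bystanders(frame, face_boxes, matched_boxes):
    """Blank every detected face region except those that matched a watchlist."""
    redacted = [row[:] for row in frame]  # copy, so the original footage stays intact
    for box in face_boxes:
        if box in matched_boxes:
            continue  # matched faces stay visible for operator review
        r, c, h, w = box
        for i in range(r, r + h):
            for j in range(c, c + w):
                redacted[i][j] = 0  # placeholder for blur/pixelation
    return redacted


# Toy 4x6 frame with two detected faces; only the second matched a watchlist.
frame = [[1] * 6 for _ in range(4)]
faces = [(0, 0, 2, 2), (2, 3, 2, 2)]
clean = redact_bystanders(frame, faces, matched_boxes=[(2, 3, 2, 2)])
```

Keeping the original frame untouched mirrors the second measure the article mentions: unredacted footage can be held encrypted, accessible only to authorised users, while the redacted copy is what operators routinely see.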

Moreover, it is the responsibility of FRT developers to implement internal policies clearly stipulating that they will not partner with end users who fail to recognise the importance of privacy and security in the implementation and operation of their systems. It is equally important that these customers are properly prepared, trained and competent to use FRT lawfully.

It must also be said that, where FRT is used under a Directed Surveillance Authorisation, the UK has one of the strongest and most globally respected regulatory regimes, overseen by the Investigatory Powers Commissioner. This provides oversight, restriction of collateral intrusion and greater accountability for surveillance operations of this nature.

Accuracy is improving
In addition, while face masks have helped reduce the spread of Covid-19, they have also become a significant security challenge. Security officials have raised concerns that facial recognition cameras will not be able to identify terrorists, because they can blend into crowds and hide their faces with a mask. Indeed, as recently as July 2020, NIST flagged that even the best of the facial recognition algorithms studied failed to correctly identify a mask-wearing individual as often as 50 per cent of the time.

However, these systems no longer need controlled environmental conditions to perform at their best: extreme angles, occlusion and contrasting lighting can now be compensated for. It must be pointed out that, as with most technology, FRT is advancing at a rapid rate. Indeed, as of January 2021, facial recognition algorithms can correctly identify individuals up to 96 per cent of the time, regardless of whether they are wearing protective face coverings.

Moreover, however flawless a technology is when designed and produced, it can of course be abused when operated by an oppressive end user. Where inadequately regulated, even in a democracy, such dysfunction is a short ride away from dystopia. Developers must therefore work closely with end users, such as police departments, to understand the user requirement and the legitimacy of the endeavour. They must work collaboratively where necessary to enable and support client compliance with statutory obligations, and to build appropriate safeguards where vulnerabilities may arise.

Indeed, anyone who develops machines that have an impact upon society carries a responsibility to ensure they are used only as a force for good, to the benefit of the societies and communities that such technology may help to shape.

Closing thought
The entire supply chain must welcome the recent declaration made by the European Union to establish a pan-European Data Governance Act, and put further measures in place to ensure FRT is deployed and utilised in an ethical way.

It is our hope that this legislation will provide much needed statutory leadership in the establishment of clear rules and guidance by which the use of technologies such as FRT can be more confidently designed, produced and operated in a manner which maintains trust in safer societies. This will allow overstretched and under-resourced law enforcement agencies to fight terrorism and other serious crime without one hand tied behind their backs.

Written by Tony Porter, former Surveillance Camera Commissioner and Chief Privacy Officer at Corsight AI – a leading facial recognition solutions provider with unparalleled speed, accuracy and privacy protection.

Tony works across Corsight’s senior team to assist in further developing FRT to achieve best practice, legal compliance and to be best in class.
