GCHQ looks at how AI can protect the UK from threats
GCHQ has published a new paper explaining how its analysts could use AI responsibly to protect the UK from threats – from state-backed disinformation campaigns to cyber attacks.
The paper, Ethics of AI: Pioneering a New National Security, explains why the technology – enabling problem-solving at scale and speed – will be at the heart of the organisation’s mission to keep the country safe in an increasingly complex world.
Published as the government prepares to publish its Integrated Review into security, defence, development and foreign policy, the paper also details how GCHQ will ensure it uses AI fairly and transparently, applying existing tests of necessity and proportionality. This ranges from establishing an AI ethical code of practice to recruiting more diverse talent to help develop and govern its use of AI, protecting privacy and striving for systematic fairness.
As an example of how it plans to use the technology, GCHQ examines how the National Cyber Security Centre could use AI to analyse activity at scale and identify malicious software, protecting the UK from cyber attacks.
Jeremy Fleming, GCHQ director, said: “AI, like so many technologies, offers great promise for society, prosperity and security. Its impact on GCHQ is equally profound. AI is already invaluable in many of our missions as we protect the country, its people and way of life. It allows our brilliant analysts to manage vast volumes of complex data and improves decision-making in the face of increasingly complex threats – from protecting children to improving cyber security.
“While this unprecedented technological evolution comes with great opportunity, it also poses significant ethical challenges for all of society, including GCHQ. Today we are setting out our plan and commitment to the ethical use of AI in our mission. I hope it will inspire further thinking at home and abroad about how we can ensure fairness, transparency and accountability to underpin the use of AI.”