Harnessing AI for the good of cyber security in the UK
Cyber attacks have become a daily occurrence, almost expected in today's risk-aware society - but what of the future? Tim Williams believes that we will be engaged in cyber warfare for the rest of our lives, and that the eventual outcomes will come down to AI
Who will be able to act the fastest? Who will form the most productive behind-the-scenes, cross-border strategic alliances to shape the forward thinking needed to counter future cyber attacks? And who, by maximising the application of artificial intelligence (AI), can counter cyber terrorism by building the best mousetrap?
Living with AI and facing facts
How do we define terrorism? It has to be considered in the context of people's lives: one man's freedom fighter is another man's terrorist, and cyber terrorism is just one facet of attacks on people or facilities that may be perceived as aggression, and potentially as terrorism.
As technology becomes ever more efficient and refined, AI will help to identify millions of attempts to attack our networks - but what of the negative effects? AI has been in use to some degree for decades, starting with the missile and anti-missile programmes in the 1950s and '60s. In the 1970s, the Yom Kippur war between Israel and its Arab neighbours eventually saw Israel direct groups of some of the earliest drones towards enemy firepower, drawing out their locations so that Israeli bombers could subsequently take out the opposition.
This trend will continue ad infinitum, with attackers targeting specific military installations, law enforcement or intelligence communities. AI could conceivably - and over time will - be able to identify a specific person, in a specific crowd, through facial recognition; that is the challenge we have to contend with. We run the very real risk of creating a dangerous obsession with the physical aspects of cyber security, but the rapid evolution of AI means we need to be prepared to operate in a whole new dimension, because ironically our AI vulnerabilities are also going to be addressed through artificial intelligence.
AI is already helping us to identify millions of attempts to attack networks and organisations, and it is becoming more efficient by the minute. In fact, it is a commonly held view that AI will take over many jobs currently undertaken by human workers, such as security monitoring and other routine roles, although in turn there will be a growing requirement for data experts and scientists to control and direct AI applications and functions.
Robots can mimic human behaviour, but AI is currently incapable of replicating many human traits, including conceptualisation, complex strategic planning and work that requires precise hand-eye coordination. Crucially, AI cannot interact with humans in the way that other humans do: with empathy, genuine connection and compassion. But do we have a handle on AI's limitations - and its undoubted potential?
It’s all about the maths
AI is based on data - ultimately, maths. We need to become far better at putting risk analysts and security experts together with the data scientists who shape AI's development. Coders need help to understand how new technology can be maximised for positive use and, in contrast, exploited for negative purposes. The danger is always that devices we deploy for the benefit of security could also be used against us, expanding the universe of risk.
Autonomous vehicles create a whole new level of concern. There is certainly the potential to replace human terrorists and suicide bombers with remote-controlled technology, presenting a whole new threat level for world leaders.
An Australian mining group has recently unveiled what has been described as the world's largest robot: a fully-automated rail network whose trains run entirely free from human intervention, each making a round trip of around 800km in around 40 hours, including loading and delivering cargo. Balance against this the fact that in 2018 a runaway train in the same country travelled over 90km without its driver before being forcibly derailed - an accident that cost the train's owner many millions of pounds, though thankfully no lives. That was an accident; imagine how devastating a calculated cyber attack could be if any kind of autonomous vehicle were hacked.