Memes and Daesh propaganda: AI’s role in countering Daesh’s keyboard jihadists
Last year, a week before the UK entered lockdown, an online newsletter published a one-page infographic on COVID-19, warning readers to limit travel and take precautions, such as washing their hands, to prevent the spread of the virus.
The messaging itself wasn’t peculiar, although its origin perhaps was; the source was al-Naba, a weekly propaganda newsletter produced by Daesh.
Daesh’s messaging on coronavirus changed as the scale of the global pandemic broadened. Initially, Daesh described the virus as an act of God to punish China for its treatment of the Uighur Muslims in the country’s Xinjiang province. As it spread to Europe, COVID-19 was characterised as divine retribution for the foreign policy of what the group terms the ‘crusader nations’.
As the pandemic progressed, however, the production of propaganda relating to the virus shifted from officially produced and centrally scripted messaging, like that within al-Naba, to content created by the group’s supporters. This propaganda has tended to have a localised appeal and more frequently suggests specific targets for lone actor terrorism during the crisis across Europe, Southeast Asia and North America.
The changing role of Daesh supporters
Propaganda remains a strategic imperative for Daesh. In 2020, the US State Department offered a $3m reward for information about Mohammed Khadir Musa Ramadan, the group’s propaganda chief. In its statement announcing the bounty on Ramadan, the State Department also released a telling detail about his role in the organisation, which it said included: ‘the management of content from ISIS’s dispersed global network of supporters’.
It’s a sign of the changing times: five years ago, the group would never have considered putting its propaganda in the hands of individual supporters.
But Daesh’s approach to communications has always been innovative and responsive to the demands of its audience. It was, after all, among the first terrorist groups to pioneer the effective use of social media. As online communications have shifted to a constant stream of disposable, bite-size content, Daesh has evolved its strategy accordingly.
Whatever the cause, it’s clear that more informal, ‘crowd-sourced’ social media propaganda plays an increasingly pivotal role in Daesh’s strategy. The question is, how do we fight such a dispersed wave of content?
How to disrupt supporter propaganda
Supporter propaganda is, by design, ephemeral. Daesh’s centrally produced blockbuster video releases and glossy magazines are designed to be reshared and to stand the test of time; supporter propaganda, by contrast, is easier to produce and is intended to be tactical and disposable. This makes it much harder to remove with traditional low-tech approaches to online terrorist content, such as hashing or URL sharing.
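To see why hash-based removal struggles with disposable content, consider a minimal sketch of the approach (illustrative only; the file bytes and blocklist here are hypothetical, not a real platform’s implementation). A platform stores digests of previously identified propaganda and flags exact matches, so any freshly produced or slightly altered file evades the check:

```python
import hashlib

# Hypothetical blocklist: digests of previously identified propaganda files.
known_bad_hashes = {
    hashlib.sha256(b"example propaganda file bytes").hexdigest(),
}

def is_known_content(file_bytes: bytes) -> bool:
    """Flag a file only if its digest exactly matches a known item."""
    return hashlib.sha256(file_bytes).hexdigest() in known_bad_hashes

# An exact reshare of a previously identified file is caught...
print(is_known_content(b"example propaganda file bytes"))   # True
# ...but changing even a single byte defeats the match entirely, which is
# why newly created, disposable supporter media slips past such filters.
print(is_known_content(b"example propaganda file bytes."))  # False
```

Perceptual hashes tolerate small edits better than cryptographic ones, but the underlying limitation is the same: the content must already be known before it can be matched.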
But that doesn’t mean disruption of this content is impossible.
Just as we’ve previously developed AI that detects signals within officially produced propaganda, we’ve found that Faculty’s technology can also spot consistent signals within supporter media and use these to build detection models.
Faculty has developed AI software that can identify the supporter-created propaganda videos, imagery, audio and text that are a key part of terrorist groups’ efforts to radicalise, recruit and inspire acts of terrorism in the UK and abroad. Trained on thousands of pieces of terrorist content, Faculty’s propaganda detection technology correctly identifies and analyses 94% of propaganda with a false positive rate of one in 20,000.
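The two headline figures combine via Bayes’ rule to determine how reliable an individual flag is in practice. The sketch below uses the 94% detection rate and one-in-20,000 false positive rate from the text; the prevalence figure is an assumption for illustration only, since how rare propaganda is in the scanned stream is not stated:

```python
# Figures from the text:
tpr = 0.94               # true positive rate: share of propaganda correctly flagged
fpr = 1 / 20_000         # false positive rate: share of benign items wrongly flagged

# Assumed for illustration: one item in 10,000 in the stream is propaganda.
prevalence = 1 / 10_000

# Bayes' rule: of all flagged items, what fraction are truly propaganda?
precision = (tpr * prevalence) / (tpr * prevalence + fpr * (1 - prevalence))
print(f"{precision:.0%}")  # roughly 65% of flags are true positives at this prevalence
```

The point of the arithmetic is that when the target content is rare, even a very low false positive rate dominates the picture, which is why the one-in-20,000 figure matters as much as the 94% detection rate.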
Our award-winning technology is now operationally deployed by governments and law enforcement teams around the world and we’ve expanded our focus to develop AI products in a broader range of national security and online harms focus areas. Our technology suite now spans a wide range of use cases:
- Leading deepfake and manipulated-imagery classification systems that have outperformed academic benchmarks
- New and innovative approaches to signal detection within CSEA content
- Ultra-secure, privacy-protecting data access and model training environments
Staffed by a team of counter-terrorism specialists and AI experts, Faculty’s team is built to help agencies change the face of national security with machine learning. We help agencies tackle harmful and illegal content and behaviours at greater scale, with greater accuracy, and over the long term.
If you’d like to find out more about the work described here, visit our website.
Want to discuss your own challenges in the sector? We’ll be exhibiting at CTX from 14–16 September. Email us today to set up a meeting.