Feature

Cyber Terrorism

Terrorists and Web 2.0

Joe Whittaker and Amy Louise Watkin, from Swansea University, discuss online radicalisation and the regulation of online propaganda across a range of social media platforms

Since its inception, the Internet has played a varied yet prominent role in the trajectories of those becoming terrorists. It is important not to overplay this relationship: research has found that although most cases of terrorism in the West have a digital footprint, offline factors usually play an equal or greater role [1]. However, much of this research focuses on the ‘Web 1.0’ period, before social media, personalisation algorithms, and smart devices began to dominate our lives.

It is no coincidence that the ‘Web 2.0’ era saw the most successful and widespread terrorist recruitment drive in history: that of the so-called Islamic State (IS). Practitioners and policymakers alike were caught unawares by a group they did not fully understand, which made highly effective use of technologies designed for fast and inexpensive communication across the globe. In the early and mid-2010s, many terrorist groups ruled the roost on social media platforms. For example, research in 2012 found that Al-Qaeda had shifted its propaganda from closed online fora to YouTube, where its supporters were forming highly connected social networks [2]. Towards the end of 2014, it was estimated that there were between 46,000 and 70,000 active IS-supporting accounts on Twitter [3]. Research observing foreign fighter activity found that those who had travelled to join IS and Jabhat al-Nusra in Syria and Iraq were able to communicate effectively back to home audiences via Facebook and Twitter, often with the help of social media ‘disseminators’ [4].

The unregulated nature of these platforms made a considerable difference to the trajectories of terrorists. A group of individuals within IS territory dubbed the ‘Virtual Entrepreneurs’ helped facilitate a number of terror attacks on US soil and beyond, as well as travel planning for those heading to the caliphate [5]. In most of these cases, would-be terrorists initially came across Virtual Entrepreneurs via open social media platforms, often Twitter. Take, for example, Keonna Thomas, who sought to leave her two young children to join IS in April 2015. For around two years beforehand, she had openly proselytised on Twitter, attracting the attention of Virtual Entrepreneurs Mohamed Abdullahi Hassan and Shawn Parson (the latter of whom she married over Skype), as well as Trevor Forrest, also known as Abdullah Faisal. Of course, the facilitation of foreign fighters predates the Internet, but unregulated social media can act as an amplifying beacon for those who wish to announce their presence within a terrorist community.

Given the number of those travelling to Iraq and Syria, as well as the host of terrorist attacks around the world, a response was to be expected, although it came with a series of technological problems to solve. In early 2016, giant tech firms such as Facebook, Twitter and YouTube began to respond to the increased pressure to regulate through blog posts addressing their counter-terrorism efforts. A blog posted to Facebook’s Newsroom on 23 May 2017 by Monika Bickert, their Head of Global Policy, revealed many of the problems the company faces when regulating terrorist and other difficult content [6]. The first is scale: the billions of users who visit the platform and the variety of posts they create every day, in many different languages and formats (text, photos, videos, livestreams and more). The next is that, as well as regulating terrorist content, the company must simultaneously regulate content including, but not limited to, bullying, suicide, sexual exploitation, hate speech and animal cruelty. Then, when reviewing content, moderators must quickly and accurately identify, first, the context of a post and, second, whether it is likely to inspire violence. Furthermore, certain types of content are illegal in some countries but not in others, which quickly becomes complicated. In a follow-up blog, Bickert explained that much of the technology used to regulate can only be applied to one specific form of media: a system that accurately removes images, for example, cannot handle videos or text [7]. Regulating is therefore difficult for a range of reasons, and a variety of removal technologies is required.

Removing terrorist content
Despite these problems, the companies have developed methods that remove content or accounts, prevent content from being uploaded, or divert users elsewhere. This article reviews these methods for Facebook, YouTube and Twitter. Facebook uses AI in several ways: image matching (if someone tries to upload a photo that matches one previously identified as terrorist-related, the upload is disrupted); language understanding (analysing text that has been removed for being terrorist-related in order to develop text-based signals that algorithms can use to detect similar text in the future); removing terrorist clusters (upon detection of a page or profile that has participated in terrorist activity, algorithms work outwards to check linked pages and profiles for similar activity); and tackling recidivism (detecting the new accounts of repeat offenders) [8]. Google-owned YouTube uses similar image-matching and content-based technology, as well as the Redirect Method, which redirects users who enter specific keywords to videos exposing them to alternative views that attempt to counter the content they were searching for [9]. Finally, Twitter uses anti-spam tools to identify and remove terrorist content [10], as well as algorithms to find accounts similar to ones already identified as supporting terrorism [11]. All three companies also employ human moderators (because AI isn’t perfect) and allow users to flag content [12]. YouTube provides tools for those who are exceptionally interested in, and highly accurate at, reporting content through its Trusted Flagger Program [13]. The sketches below illustrate, in simplified form, how the first three of Facebook’s techniques might work.
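None of the companies publish the internals of these systems, but the general shape of image matching can be illustrated with perceptual hashing: a fingerprint that changes little when an image is re-encoded, resized or lightly edited. Below is a minimal sketch assuming the open-source Python libraries Pillow and imagehash, with a hypothetical set of hashes from previously removed images; it illustrates the technique, not any platform’s actual implementation.

```python
# A minimal sketch of hash-based image matching (hypothetical filenames;
# platforms use proprietary systems, not this exact approach).
from PIL import Image
import imagehash

# Perceptual hashes of images previously removed as terrorist material.
known_hashes = [
    imagehash.phash(Image.open(path))
    for path in ("removed_1.jpg", "removed_2.jpg")  # hypothetical examples
]

def matches_known_material(upload_path: str, threshold: int = 8) -> bool:
    """Return True if an upload is a near-duplicate of removed material.

    Subtracting two perceptual hashes gives their Hamming distance;
    a small distance means the images are visually near-identical,
    even after re-encoding or resizing.
    """
    upload_hash = imagehash.phash(Image.open(upload_path))
    return any(upload_hash - known <= threshold for known in known_hashes)

if matches_known_material("new_upload.jpg"):
    print("Disrupt upload and queue for human review")
```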
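The ‘language understanding’ method can likewise be sketched as a standard text-classification pipeline, in which posts that were previously removed become training data for scoring new posts. The sketch below assumes scikit-learn and uses entirely invented training examples; real systems are proprietary and vastly larger.

```python
# A minimal sketch of learning text-based signals from removed posts,
# assuming scikit-learn. The training data here is invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Label 1: posts previously removed as terrorist-related; label 0: benign.
posts = [
    "join the fighters of the caliphate",   # removed
    "support the soldiers of the state",    # removed
    "great football match last night",      # benign
    "photos from our family picnic",        # benign
]
labels = [1, 1, 0, 0]

# tf-idf features feed a linear classifier that scores how closely a new
# post resembles previously removed material.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

score = model.predict_proba(["brothers, join the fight"])[0][1]
if score > 0.5:
    print(f"Flag post for review (score {score:.2f})")
```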
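Cluster removal is, at heart, graph traversal: start from an account already identified as terrorist-related and fan outwards across its connections. A minimal sketch using the networkx library, with a hypothetical graph of profiles and pages:

```python
# A minimal sketch of expanding outwards from a flagged account, assuming
# the networkx library. The graph and node names are hypothetical.
import networkx as nx

# Nodes are profiles or pages; edges are relationships such as friendship,
# shared page admins, or repeated interaction.
graph = nx.Graph()
graph.add_edges_from([
    ("flagged_page", "profile_a"),
    ("profile_a", "profile_b"),
    ("flagged_page", "page_c"),
    ("page_c", "profile_d"),
    ("unrelated_x", "unrelated_y"),
])

def expand_cluster(g: nx.Graph, seed: str, max_hops: int = 2) -> list[str]:
    """Collect every account within max_hops of a known bad account,
    as candidates for checking by classifiers or human moderators."""
    distances = nx.single_source_shortest_path_length(g, seed, cutoff=max_hops)
    return [node for node, hops in distances.items() if hops > 0]

for candidate in expand_cluster(graph, "flagged_page"):
    print(f"Queue {candidate} for review")
```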

Many academics argue that the giant tech companies have been so successful in their removals that activity has been displaced to darker parts of the web, such as end-to-end encrypted apps like Telegram and WhatsApp [14]. This presents a new challenge to law enforcement, who can compel platforms such as Facebook and Twitter to hand over content if they have probable cause, but cannot recover the content of end-to-end encrypted messages, because the providers themselves never hold it in readable form. Instead, security services tend to rely on ‘old-fashioned’ undercover agents who infiltrate terror cells and gain access to groups on encrypted platforms. Take the case of Sean Andrew Duncan, who discussed his plan to travel to join IS with a number of co-conspirators on several encrypted apps before the FBI, posing as a co-conspirator on one of them, gathered the evidence that ultimately aided his arrest. Terrorism cases have adapted to this environment of heightened regulation in other ways too. Take Zoobia Shahnaz, who is accused of fraudulently using a credit card to purchase over $60,000 in Bitcoin to fund IS. Cases of terrorist funding via cryptocurrencies remain rare, but they may represent a more mainstream law enforcement challenge in the future.
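The obstacle that encryption poses is structural rather than merely legal: in an end-to-end encrypted app, messages are encrypted on the sender’s device with keys held only by the communicating devices, so the provider relays ciphertext it cannot read and has nothing intelligible to hand over. A minimal sketch of the principle using the PyNaCl library (real messengers such as WhatsApp use the more elaborate Signal Protocol; this illustrates the idea, not their implementation):

```python
# A minimal sketch of end-to-end encryption, assuming the PyNaCl library.
# It shows why a provider that only relays ciphertext cannot produce
# readable content, even under subpoena.
from nacl.public import PrivateKey, Box

# Each device generates its own keypair; private keys never leave it.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts using her private key and Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at the usual place")

# This ciphertext is all the service ever stores or forwards; disclosure
# can be compelled, decryption cannot.
print(ciphertext.hex()[:32], "...")

# Only Bob's private key, paired with Alice's public key, recovers the text.
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
assert plaintext == b"meet at the usual place"
```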

Although encrypted methods of communication and planning have become extremely popular with terrorists, thanks to a privacy and security that make them far harder to regulate, there is evidence that the giant tech firms are still home to many terrorist accounts. Research published in 2018 identified over 1,000 active IS-supporting Facebook profiles and pages and found that 57 per cent of them were still active and producing propaganda on 5 March 2018 [15]. Similarly, research by VOX-Pol found that jihadist groups such as the Taliban and al-Shabaab are not facing nearly the same level of disruption on Twitter as IS [16]. One should be careful not to overstate these findings, though: for the reasons discussed above, it is significantly more difficult to engage with terrorists’ social networks on mainstream social media platforms today. However, terrorist actors have proven themselves more than capable of adapting to new technological challenges in the past [17], and whether through a greater reliance on end-to-end encrypted software, attacks planned without a large digital footprint, or something new entirely, the next generation of terror attacks will throw up a new set of challenges for those who seek to oppose them.

References

1 - Gill & Corner, 2015; Gill et al., 2017; Hussain & Saltman, 2014; von Behr, Reding, Edwards, & Gribbon, 2013

2 - Klausen, Barbieri, Reichlin-melnick, & Zelin, 2012

3 - Berger & Morgan, 2015

4 - Carter, Maher, & Neumann, 2014; Klausen, 2015

5 - Hughes & Meleagrou-hitchens, 2017

6 - Bickert, 2017b

7 - Bickert, 2017c

8 - Bickert, 2017a

9 - YouTube, 2017

10 - Stone, 2008; Chowdhury, 2010; Twitter, 2012

11 - Twitter, 2016

12 - Bickert, 2017a; Walker, 2017; Twitter, 2016

13 - YouTube, 2016

14 - Prucha, 2016; Malik, 2018; Bloom, Tiflati & Horgan, 2017; Berger & Perez, 2016; Conway et al., 2017

15 - Waters & Postings, 2018

16 - Conway et al., 2017

17 - Watkin & Whittaker, 2017

Bibliography

Berger, J. M., & Morgan, J. (2015). The ISIS Twitter Census: Defining and Describing the Population of ISIS Supporters on Twitter. The Brookings Project on U.S. Relations with the Islamic World: Analysis Paper, March(20), 68.

Berger, J. M., & Perez, H. (2016). The Islamic State’s Diminishing Returns on Twitter: How Suspensions are Limiting the Social Networks of English-speaking ISIS Supporters. George Washington University.

Bickert, M. (2017a). Hard Questions: How We Counter Terrorism. Facebook Newsroom. Accessed July 2018 via https://newsroom.fb.com/news/2017/06/how-we-counter-terrorism/

Bickert, M. (2017b). Facebook’s Community Standards: How and Where We Draw the Line. Facebook Newsroom. Accessed July 2018 via https://newsroom.fb.com/news/2017/05/facebooks-community-standards-how-and-where-we-draw-the-line/

Bickert, M. (2017c). Hard Questions: Are we winning the war on terrorism online? Facebook Newsroom. Accessed July 2018 via https://newsroom.fb.com/news/2017/11/hard-questions-are-we-winning-the-war-on-terrorism-online/

Bloom, M., Tiflati, H., & Horgan, J. (2017). Navigating ISIS’s Preferred Platform: Telegram. Terrorism and Political Violence, 1-13.

Carter, J. A, Maher, S., & Neumann, P. (2014). Measuring Importance and Influence in Syrian Foreign Fighter Networks. The International Centre for the Study of Radicalisation and Political Violence, 1–36.

Conway, M., Khawaja, M., Lakhani, S., Reffin, J., Robertson, A., & Weir, D. (2017). Disrupting Daesh: Measuring Takedown of Online Terrorist Material and Its Impacts. Dublin: VOX-Pol.

Chowdhury, A. (2010). State of Twitter Spam. Twitter Blog. Accessed July 2018 via https://blog.twitter.com/official/en_us/a/2010/state-of-twitter-spam.html

Gill, P., & Corner, E. (2015). Lone Actor Terrorist Use of the Internet and Behavioural Correlates. In L. Jarvis, S. Macdonald, & T. M. Chen (Eds.), Terrorism Online: Politics Law and Technology (pp. 35–53). Abingdon, Oxon: Routledge.

Gill, P., Corner, E., Conway, M., Thornton, A., Bloom, M., & Horgan, J. (2017). Terrorist Use of the Internet by the Numbers. Criminology & Public Policy, 16(1), 1–19. https://doi.org/10.1111/1745-9133.12249

Hughes, S., & Meleagrou-hitchens, A. (2017). The Threat to the United States from the Islamic State’s Virtual Entrepreneurs. CTC Sentinel, 10(3), 1–9.

Hussain, G., & Saltman, E. M. (2014). Jihad Trending: A Comprehensive Analysis of Online Extremism and How to Counter it. Quilliam Foundation.

Klausen, J. (2015). Tweeting the Jihad: Social Media Networks of Western Foreign Fighters in Syria and Iraq. Studies in Conflict & Terrorism, 38(1), 1–22.

Klausen, J., Barbieri, E. T., Reichlin-melnick, A., & Zelin, A. Y. (2012). The YouTube Jihadists: A social network analysis of Al- Muhajiroun’s propaganda campaign. Perspectives on Terrorism, 6(1), 36–53.

Malik, N. (2018). Terror in the Dark: How Terrorists Use Encryption, the Darknet, and Cryptocurrencies. London: The Henry Jackson Society.

Prucha, N. (2016). IS and the Jihadist Information Highway: Projecting Influence and Religious Identity via Telegram. Perspectives on Terrorism, 10(6).

Stone, B. (2008). Turning up the heat on spam. Twitter Blog. Accessed July 2018 via https://blog.twitter.com/official/en_us/a/2008/turning-up-the-heat-on-spam.html

Twitter (2012). Shutting down spammers. Twitter Blog. Accessed July 2018 via https://blog.twitter.com/official/en_us/a/2012/shutting-down-spammers.html

Twitter (2016). Combating violent extremism. Twitter Blog. Accessed July 2018 via https://blog.twitter.com/official/en_us/a/2016/combating-violent-extremism.html

von Behr, I., Reding, A., Edwards, C., & Gribbon, L. (2013). Radicalisation in the Digital Era: The use of the internet in 15 cases of terrorism and extremism. RAND.

Walker, K. (2017). Four steps we’re taking today to fight terrorism online. Google. Accessed July 2018 via https://www.blog.google/topics/google-europe/four-steps-were-taking-today-fight-online-terror/

Waters, G., & Postings, R. (2018). Spiders of the Caliphate: Mapping the Islamic State’s Global Support Network on Facebook.

Watkin, A., & Whittaker, J. (2017). Evolution of terrorists’ use of the internet. Counter Terror Business Magazine. Accessed July 2018 via http://www.counterterrorbusiness.com/features/evolution-terrorists%E2%80%99-use-internet

YouTube (2016). Growing our Trusted Flagger program into YouTube Heroes. YouTube. Accessed July 2018 via https://youtube.googleblog.com/2016/09/growing-our-trusted-flagger-program.html

YouTube (2017). Bringing new Redirect Method features to YouTube. YouTube. Accessed July 2018 via https://youtube.googleblog.com/2017/07/bringing-new-redirect-method-features.html
