Regulating terrorist content online: considerations and trade-offs
Amy-Louise Watkin and Joe Whittaker look at the interface between terrorism and the internet, and whether big tech companies should be doing more to tackle online propaganda and the rise in digital radicalisation
At first glance, the removal of terrorist content online seems like an intuitive goal. In the heyday of the Islamic State’s virtual caliphate, the group was able to spread its message far and wide. Scholars found that online platforms were being used to spread propaganda[i], recruit potential terrorists[ii], and disseminate instructional material[iii]. Far right terrorists, too, have abused platforms by disseminating content. The world watched in horror as the Christchurch attack was live-streamed on Facebook, and multiple attackers have posted their manifestos online[iv]. Although different countries have different freedom of speech norms enshrined in law, there has been a widespread move towards removing terror content from the Internet, including the Christchurch Call[v], the UK Online Harms White Paper[vi], and the German NetzDG law[vii].
Although they are sometimes maligned for not acting quickly enough, the big tech companies have, by and large, adopted policies which help to remove terror content from their platforms, as well as working together to share best practices, as in the case of the Global Internet Forum to Counter Terrorism[viii] and Tech Against Terrorism[ix]. There have been considerable successes to this approach; many have documented the stark difference between the reach of groups like ISIS before tech companies took a more robust approach to content removal and their reach afterwards[x]. It is clear that the degradation of the group’s online presence was, at least in part, caused by tech companies taking a proactive approach; however, the issue contains many complexities. Below we offer a number of considerations for the removal of content, including trade-offs, regulatory frameworks, and the capabilities and motivations of social media platforms.
The online extremist’s dilemma
Although the removal of terrorist content has clear benefits, there may be significant trade-offs too. The move of Islamist groups like ISIS away from the larger platforms towards smaller and more secure ones, particularly Telegram, has been well documented[xi]. The unique selling point of these platforms is their operational security. As such, turning terrorist actors away from mainstream platforms may inadvertently be helping them. Research has suggested that terrorists’ use of the Internet (including using social media to communicate with co-ideologues, accessing ideological content, and using the web to plan their attack) makes them less likely to succeed, while finding that the use of end-to-end encrypted platforms does not affect success[xii]. Similarly, it has been found that terrorists who use social media are more likely to be apprehended before their attack than those who do not[xiii]. This is what Clifford and Powell call the online extremist’s dilemma: the goals of outreach for recruitment and of operational security are fundamentally at odds[xiv]. There are clear and substantial benefits to removing terrorist content, but we must be honest about the trade-offs; driven off open platforms, terrorists have less opportunity to telegraph their actions and ideology to law enforcement.
In addition to the complex trade-offs of content removal, the creation of regulation requires some further considerations[xv]. One is the variety of services that terrorists and extremists are using, and the finding that their use is interconnected yet not homogeneous. Weirman and Alexander[xvi] revealed that ISIS has adapted to the policies of the major platforms. For example, instead of posting clearly violating content on Twitter, the group posts non-violating news sources that validate its stance. This research, along with other studies, has shown that the major platforms are also being used to post URLs that redirect followers to a range of file-sharing sites (which tend to be less censored), where large quantities of content that would violate the major platforms’ policies are posted[xvii].
Further to this, Clifford and Powell found that ISIS also uses the major platforms to redirect followers to the previously mentioned instant messaging service Telegram, where they can communicate with one another in both public channels and private group chats, and post media and instructional guides[xviii]. Regulation therefore needs to consider that a range of social media platforms, file-sharing sites and instant messenger services are being used; Tech Against Terrorism found evidence of terrorist groups using more than 330 platforms, with half of the top 50 platforms most used by ISIS being small or micro-platforms[xix]. Regulation also needs to recognise the different ways the platforms are used: some to post currently non-violating content, others to signpost to other sites, and some as content repositories or for communication.