Feature

Cyber Terrorism

Terrorist Use of the Internet

Terrorists and violent extremists use a variety of interlocking technologies to share content and plan operations. Tech Against Terrorism (TAT) recognises four categories of tech platforms that are used to ensure content longevity: beacons, content stores, aggregators and circumventors. This diverse ecosystem is exemplified by both viral and more routine propaganda operations. According to TAT’s research, terrorists use a vast number of smaller platforms to disseminate propaganda, ranging from messaging apps to video-streaming and file-sharing platforms. A seven-day study conducted by TAT found over 800 unique IS-related ‘outlinks’ on a messaging app, leading to more than 90 different platforms.
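To illustrate the kind of measurement behind figures like these, the sketch below counts unique outlinks in a collection of URLs and groups them by the platform (domain) they point to. This is a minimal, hypothetical example only; the function name, URLs and data are invented for illustration and do not reflect TAT’s actual tooling or dataset.

```python
from collections import Counter
from urllib.parse import urlparse

def summarise_outlinks(outlinks):
    """Count unique outlinks and the platforms (domains) they point to."""
    unique_links = set(outlinks)
    platforms = Counter(urlparse(link).netloc.lower() for link in unique_links)
    return len(unique_links), platforms

# Purely illustrative input; a real study would use links collected from monitored channels.
links = [
    "https://video-host.example/watch?v=abc",
    "https://file-share.example/f/123",
    "https://video-host.example/watch?v=abc",  # duplicate link
    "https://archive-site.example/item/xyz",
]

total, by_platform = summarise_outlinks(links)
print(f"{total} unique outlinks across {len(by_platform)} platforms")
```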

Over the past four years, we have found evidence of terrorists uploading material to more than 350 different platforms. The use of small platforms by malevolent actors has increased in recent years, partly because larger tech platforms’ improved capability to remove terrorist content has pushed such groups to migrate elsewhere. Small platforms, however, are the least well resourced to tackle this exploitation and moderate content, which makes it difficult for them to act both swiftly and accurately. For example, a platform run by just one person might be overwhelmed by having to accurately identify hundreds of ISIS videos whilst ensuring that takedown does not restrict freedom of speech. This risk is serious: under-moderating content enables terrorists to radicalise, recruit, and plan operations, while over-moderating content risks infringing upon freedom of expression online.

This dilemma is exacerbated by the sheer volume of terrorist content online. Terrorists use a range of freely available tools to ensure content longevity – such as archiving tools, URL generators, and file-mirroring services. Terrorist groups are also thinking through operational security, as exemplified by IS’s Electronic Horizons Foundation, which advises supporters on how to stay hidden online. Increasingly, we are also seeing terrorist groups building and maintaining their own websites that archive vast amounts of propaganda, the mitigation of which presents a myriad of legal and operational challenges. Further, there are concerns about so-called ‘alt-tech’ sites, platforms created to allow for lax content moderation rules, amid fears that they could provide fertile ground for far-right violent extremist and terrorist groups in particular. The result is an ever-evolving threat landscape in which the dissemination of terrorist content fluctuates and survives in different forms and locations. On top of this proliferation of content and the general complexity of terrorist use of the internet, small platforms lack the resources to moderate terrorist content and tackle the exploitation of their services.

Content moderation challenges
Tech platforms face numerous challenges when tackling terrorist exploitation. Contextual nuance, the absence of an agreed definition, the sheer volume of content, and government pressure unaccompanied by clear guidance all create obstacles for platforms moderating terrorist content. Though these difficulties apply to all tech platforms, small platforms face them to a greater extent: they lack the capacity, resources, and guidance not only to understand the threat of terrorist or violent extremist use of their platforms, but also to adopt the best-practice policies and operational support needed to moderate content effectively and tackle the exploitation.

Terrorist propaganda comes in many forms, and each piece of content is nuanced and depends heavily on context. Though some terrorist content depicts violence that would be prohibited under many platforms’ terms of service, other forms of propaganda can go undetected because they do not directly depict violence. An IS propaganda video detailing the caliphate’s medical services is one example of non-violent content that is nonetheless heavily propagandistic. Even larger platforms like Facebook struggle with contextualisation, as shown by Facebook’s Oversight Board, whose first cases have highlighted some of the complex, context-dependent decisions that Facebook’s moderators regularly face.

The issue of defining terrorist content relates to a much larger, international dilemma: there is no internationally recognised definition of terrorism and, in turn, no consensus on what constitutes online terrorist content. Governments each have their own domestic laws, and tech companies are left to navigate the discrepancies between them when shaping their own content standards. In fact, company policies often prohibit speech that is legal in the countries where they operate, and this is sometimes necessary in order to take action on terrorist content. For example, since many states have not designated any far-right terrorist groups, tech companies must in such instances rely on their own policies as a basis for action. Governments should therefore increase accurate and responsible designation of terrorist groups as a way to clarify what content they expect tech companies to take action on.

Platforms also face a patchwork of regulation imposed by individual countries around the world. In TAT’s recent Online Regulation Series, we noted that many governments are imposing disproportionate regulations on tech platforms’ content moderation. Though it is important to create frameworks that regulate illegal online content, smaller platforms struggle with ‘one-size-fits-all’ requirements that larger, better-resourced companies can meet far more easily. Small platforms face seemingly impossible timeframes, such as 24-hour, or even one-hour (as proposed by the EU), deadlines to remove content flagged by authorities. Within such a limited window, small teams cannot ensure that they are following the rule of law and properly justifying the removal of a given piece of content. Where such deadlines are enshrined in law, companies that fail to meet them face penalties. A concern from civil society is that companies under such pressure will err on the side of over-removal, so that legal content is wrongfully taken down, ultimately infringing on freedom of expression. Such regulations could hurt smaller tech platforms and their users, as well as hinder technological innovation.

Similarly, lawmakers’ calls for mandatory transparency reporting risk overlooking the practical difficulties that smaller tech platforms face in producing such reports.

Tech platforms therefore have to balance the complexities of moderating terrorist content against pressure from governments and regulators, all of which makes responding effectively and accurately to terrorist use of the internet incredibly difficult.

Tech Against Terrorism
Tech Against Terrorism works with such smaller platforms to support their internal processes and policies to better combat terrorist use of their services, while ensuring users’ rights. We work with all types of tech companies, including social media, paste sites, file-storage, messaging, and fintech platforms, as well as web infrastructure providers. In 2019 alone, we engaged directly with more than 80 tech companies through our workshops, webinars, partnerships, and bilateral discussions. In doing so, we provide support through a wide range of workstreams.

The first is threat analysis and outreach, where we carry out extensive open-source intelligence analysis to identify platforms at risk and build constructive working relationships with the tech sector. The second is knowledge sharing and best practice, whereby we facilitate intra-industry and cross-sector support mechanisms through online tools, guides, and practical datasets that support policy and content moderation decisions.

We support companies through our membership and mentorship programmes, providing best practice on content standards, human rights compliance, transparency, and content moderation. As platforms establish terms on terrorist content, having accurate yet robust content standards is vital to effectively tackling terrorist use of a service. In the mentorship process, TAT completes a forensic review of a company’s content standards, including their Terms of Service, Community Guidelines, and other policies, providing policy recommendations for areas that might need consideration. TAT also provides guidance to platforms to help them ensure that their policies and processes are created and enforced to preserve and enhance users’ rights. Companies are asked to sign the TAT Pledge, which provides a framework to ensure that companies will actively consider freedom of expression and human rights in their counter-terrorism measures. Finally, TAT supports companies to either introduce or improve on their transparency reporting processes, providing policy advice around standards to report on and, if necessary, support in data collection processes.

TAT additionally provides technical support and resources for tech companies to improve their counter-terrorism mechanisms, for example through data science or development support. An example of this is TAT’s current work on the Terrorist Content Analytics Platform (TCAP), a secure online platform that hosts verified terrorist content collected from open sources and existing datasets, in order to facilitate secure information sharing between tech platforms, academia and data scientists. The TCAP alerts platforms to branded content associated with designated (far-right and Islamist) terrorist organisations. These alerts aim to provide relevant context about the content, as well as the groups involved and their designation status. This empowers tech companies to make swift, independent and informed decisions about content removal on their platforms, helping to offset some of the main challenges smaller platforms face, as outlined earlier.
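As a rough illustration of how a small platform might consume alerts of this kind, the sketch below models an alert record and a simple triage step that routes it to a review queue. Every name and field here is an assumption made for illustration; it is not the TCAP API or schema.

```python
from dataclasses import dataclass

@dataclass
class ContentAlert:
    """Hypothetical alert record; field names are illustrative, not the TCAP schema."""
    content_url: str
    content_hash: str          # e.g. a SHA-256 hash of the flagged file
    group_name: str            # organisation the content is attributed to
    designation_status: str    # e.g. "designated" in one or more jurisdictions
    ideology: str              # e.g. "Islamist" or "far-right"

def triage(alert: ContentAlert, designated_only: bool = True) -> str:
    """Route an incoming alert to a moderation queue based on designation status."""
    if designated_only and alert.designation_status != "designated":
        return "manual-review"    # more context needed before any removal decision
    return "priority-review"      # verified content tied to a designated group is reviewed first

# Illustrative usage with invented data.
alert = ContentAlert(
    content_url="https://smallplatform.example/v/123",
    content_hash="9f2b...",
    group_name="Islamic State",
    designation_status="designated",
    ideology="Islamist",
)
print(triage(alert))
```

The point of the triage step is simply to show how designation status and contextual metadata can support an independent, documented removal decision rather than an automatic takedown.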

It is through this multi-pronged approach that Tech Against Terrorism aims to support smaller tech platforms, providing necessary external resources and helping them improve their internal processes and policies, so that they can better tackle terrorist exploitation of their services while upholding and enhancing their users’ rights.

Written by Tech Against Terrorism.
