Feature

Cyber Terrorism

Online Communications, Real-Life Consequences

The advent of social media has been a boon for global terror. Terrorist groups use social media in myriad ways, from fundraising, radicalisation, and recruitment, to issuing threats, inciting violence, and planning attacks. Indeed, extremists’ rapid adoption of major tech platforms has been critical to the organisation, expansion, and success of terrorist networks.

The results, as we all know, have been devastating. In 2019, far-right gunmen in Christchurch, New Zealand, and El Paso, Texas, killed 74 people between them. Both had been radicalised in online extremist echo chambers, where their attacks were subsequently celebrated by others. While the global pandemic provided scant opportunity for atrocities on this scale in the years that followed, terrorism officials across Europe and America have repeatedly warned that they are likely to resume as society begins to reopen.

Legislative progress
In the face of this coming crisis, there is at least some positive news to report. The first half of 2021 has been an historically busy period for online counterterrorism legislation. In May, the UK Government unveiled the first draft of its Online Safety Bill, a landmark piece of legislation that will allow the Office of Communications (Ofcom) to fine major platforms up to £18 million for failing to remove harmful content.

Not to be outdone, the French government recently adopted a new counterterrorism and intelligence bill allowing for greater surveillance of extremist online networks through algorithm-based monitoring, while the German parliament has approved an amendment strengthening the country’s Network Enforcement Act (NetzDG) by giving researchers regulated access to social media data.

At a continental level, April saw the long-awaited adoption of the EU’s Regulation on the Dissemination of Terrorist Content Online (TCO), a sweeping piece of legislation that will require online platforms to remove content deemed terrorist in nature within one hour of notification. The TCO will work in tandem with the upcoming Digital Services Act (DSA) to protect citizens from online hate speech and extremist propaganda.

Across the Atlantic, the US Justice Department has taken a similarly positive step by proposing to roll back tech companies’ legal protections under Section 230 of the Communications Decency Act, thus allowing platforms to be held legally accountable for negligence.

Big Tech’s obscurantism
With the people, the politicians and the experts all on the same page when it comes to rooting out extremists online, only one group remains as obdurate as ever. For years, Big Tech and its lobbyists have relentlessly sought to obscure the conversation around online extremism. To this day, they continue to perpetuate groundless claims meant to shift the burden of responsibility for promoting extremist and terrorist content to anyone but themselves.

For example, tech companies routinely claim that mainstream sites have improved their content detection and removal capabilities to the point where most extremist activity takes place on smaller platforms. This is demonstrably untrue. Far from big tech platforms leading the way in content moderation, a study of media responsibility published in February of this year found that Facebook, Twitter and YouTube are being significantly outpaced by smaller platforms in their efforts to eliminate harmful posts.

Recently, Facebook has been forced to acknowledge that its Groups feature is plagued by ‘enthusiastic calls for violence every day’ and is now being overhauled, while ISIS continues to elude the site’s detection simply by superimposing Netflix logos on top of its propaganda videos. YouTube, meanwhile, has refused to take down content from the Rise Above Movement (RAM), a violent white supremacist group. RAM has also used Instagram to promote a clothing line.

When confronted with these facts, tech lobbyists have sought to justify industry complacency by asserting that content removal itself might radicalise users. This absurd excuse disingenuously conflates a user’s feelings of inconvenience with the complex psychological process of extremist radicalisation. The flimsiness of the excuse is also exposed every time a new PR crisis forces platforms into lurching overreaches in their content moderation to compensate for their failure to attack the problem in a systematic, proactive fashion.

As we move into the second half of the year, Big Tech can have no more excuses for failing to root out the extremists they have so long played host to.

A program for action
After years of negligence, tech companies have finally been forced into action around online extremism. Under the new legislative regimes we are seeing emerge, major social media platforms will be made to do more in the fight against extremism whether they like it or not.

Nonetheless, Big Tech still wields considerable power. It is they who hold the technical expertise to alter and reform their own internal algorithms and content surveillance systems. If they choose to continue battling legislators tooth and nail, doing the minimum possible to satisfy their legal obligations while fundamentally disregarding the spirit of the law, many innocent lives could yet be lost in future terror attacks.

With that in mind, there are a number of best practices that Big Tech must adopt, whether legally obliged to or not, if they want to get serious about cleaning up their act.

Firstly, it should be clear by now that automated monitoring tools alone are not up to the job of policing major platforms. Human researchers and content moderators must be included in the decision-making process and be on the lookout for emerging trends. Technology companies should ensure that these teams are fully staffed and that their human reviewers are appropriately prepared, trained, and educated.

Secondly, platforms must be proactive in content monitoring. Many social media sites primarily review and remove content that has been reported to them. Instead, given the vast financial resources these companies have at their disposal, each should spearhead internal efforts to find terrorist content and remove it without relying on the public, or even legislators, to police the platform for them.

Thirdly, while removing content quickly from Internet and social media sites is clearly an important component of any effort to restrict the dissemination of terrorist propaganda, ‘time online’ should not be the only metric used to gauge the industry’s progress in combating it. CEP has found that in many cases, extremist videos removed within two hours still received dozens and, in some cases, hundreds of views.

If big tech platforms are willing to cooperate with legislators and work within the new legal regimes that have been established to regulate them, the decade to come may yet see them partially restore the reputations they have so badly damaged through their recalcitrance to date.

Written by David Ibsen, executive director of the Counter Extremism Project (CEP).

CEP is a not-for-profit, non-partisan, international policy organisation formed to combat the growing threat from extremist ideologies. Led by a renowned group of former world leaders and diplomats, it combats extremism by pressuring financial and material support networks; countering the narrative of extremists and their online recruitment; and advocating for smart laws, policies, and regulations.
