Twitter at Risk of Threatening National Security as Trust and Safety Measures are Dismantled

Changes to content moderation standards and processes risk exposing the platform to terrorist and violent extremist actors.

KEY POINTS

  • With the dismantling of its Trust and Safety Council, key personnel gone, and standards loosened, Twitter needs to clarify where it stands on countering terrorist use of the internet.
  • Elon Musk’s declaration that hate tweets will be ‘demoted and demonetised’ rather than removed leaves the platform ripe for the delivery and deployment of terrorist content and operations.
  • Some violent extremists, previously banned, are already expressing hope of returning to the platform.
  • Tech Against Terrorism, an independent UN-initiated public-private partnership, warns that upheaval at Twitter and across the tech sector could have a detrimental effect on content moderation and could thwart collective, long-standing efforts to uphold content moderation standards online.

Thursday 15 December 2022: Today Tech Against Terrorism warns that Twitter is in danger of further exposing its platform to terrorist exploitation following the dismantling of the company’s Trust and Safety Council, the firing of key experts and ongoing changes to standards.

This is reflected in Twitter’s new owner proposing the reinstatement of previously banned accounts belonging to people and entities accused of promoting hatred and violence.

Justifying the decision, Elon Musk, the new owner of Twitter, announced that the company's new policy is 'freedom of speech, not freedom of reach'. Twitter promises that hate tweets and negative tweets will be demoted and demonetised, which Mr Musk claims is 'no different from the rest of the internet'.

Adam Hadley, Executive Director of Tech Against Terrorism said:

“Mr Musk is playing with fire by threatening to dismantle policies and processes developed to counter terrorist and violent content and operations. Some of this flies in the face of common sense: people and entities previously banned for hate speech and extremism should remain banned in the future rather than being given a free pass to share abusive, offensive, and possibly illegal content.”

“Elon Musk says hate tweets will be ‘demoted and demonetised’. In fact, this pronouncement leaves the platform ripe for the sharing of terrorist content and the facilitation of terrorist operations.”

"Harmful content may be harder to find, but it will remain on Twitter. Information to promote or facilitate terrorist or violent extremist activity, in addition to child sexual abuse material will be available on the platform. Elon Musk’s latest announcement reveals a fundamental misunderstanding of content moderation on the part of its new owner. We saw this with the summary dismissal of the company’s Brussels office, a crucial node for Twitter as it engages with the EU’s complex regulatory regime.”

Twitter Reduces Content Moderation Capacity

Tech Against Terrorism is raising concerns following Twitter’s disbandment of its Trust and Safety Council and cuts to personnel in the company’s Public Policy and Trust and Safety teams. Content moderation teams within tech companies, with support from law enforcement agencies, have an unenviable responsibility in upholding both freedom of speech and online safety.

Adam Hadley, Executive Director of Tech Against Terrorism said:

“Elon Musk says that Twitter’s new policy is ‘freedom of speech, not freedom of reach’. I urge Mr Musk to urgently clarify the measures taken to enforce freedom of reach, especially as anti-terrorist moderation teams appear to have left the company. We recognise that moderation of terrorist content is a significant challenge even for the most sophisticated tech platforms. We are concerned that any real or perceived erosion of content moderation capability on the larger social media platforms threatens to reverse years of progress and could further embolden terrorists and violent extremists to return.

“Experts in content moderation have built up decades of vital experience and have shown leadership as terrorists continuously attempt to exploit loopholes in tech platforms, and so we are concerned that this capability could take a significant amount of time to rebuild once lost.”

Signal to Terrorists and Violent Extremists

“We are concerned that changes to content moderation at Twitter will send a troubling signal to terrorists and violent extremists seeking to exploit the platform. In our routine monitoring at Tech Against Terrorism, some violent extremists, once banned, are already expressing plans to return. Twitter has spent the last five years striving to thwart terrorist and violent extremist content. While Twitter has said that ‘core moderation capabilities’ will be maintained, we urgently need to know where it stands in relation to the moderation of terrorist and violent extremist content.”

Risk of Tech Companies Reversing Gains in Online Terrorist Content Moderation

Twitter has emerged as an important part of global efforts to counter online terrorist content. However, Adam Hadley now warns of the impact that the personnel reductions could have on the rest of the industry:

“Where Twitter leads, others may follow. We are concerned that bigger tech companies will be tempted to reduce their counter-terrorism capacity. All platforms should uphold high standards of content moderation, in particular with regard to terrorist use of their platforms. This requires constant vigilance to ensure that tech platforms, big and small, are meeting their obligations under the law and under their terms of service.”

In raising this concern, Tech Against Terrorism calls on key bodies such as the United Nations and the Independent Advisory Committee of the Global Internet Forum to Counter Terrorism (GIFCT) to hold bigger tech platforms to account.