Terrorist Use of Generative AI 

Contextualising the threat and establishing frameworks to prevent the terrorist and violent extremist exploitation of generative AI. 


Generative AI is at risk of terrorist exploitation, and its rapid rise has ushered in urgent policy debates about abuse by hostile actors.

As terrorists and violent extremists begin to experiment with the new technology, we are working to place the threat in context and to examine both the dangers and opportunities of generative AI in relation to terrorist use of the internet.

The risk in context

Terrorists and violent extremists have long adapted to new technologies and made full use of each new wave of the internet.  

Starting with the exploitation of websites and forums, the threat evolved to its most acute form on social media, compelling tech platforms to develop sophisticated measures for content moderation. 

As a result, terrorists and violent extremists have adapted their methods and continue to rely heavily on websites to store propaganda, amplify their content, and ultimately to recruit and fundraise. Websites remain an often overlooked part of the tech ecosystem used by hostile actors.

In our view, the threat of the exploitation of generative AI should be seen in this context. Each new AI tool coming to the market presents an opportunity for terrorists and violent extremists to adapt.  

The scale of the problem now

Analysts at Tech Against Terrorism are tracking the early adoption of generative AI, which we assess is chiefly being employed to augment and speed up the human content creation process. Hostile actors are also producing guides on using large language models (LLMs).

Our first analysis examined 5,000 pieces of AI-generated content shared in terrorist and violent extremist spaces. Whilst this represents a fraction of this kind of material available on the internet, it does point to the challenge faced by content moderators.   

See also: 
Here’s How Violent Extremists Are Exploiting Generative AI Tools, Wired, November 2023

How generative AI can be exploited

Media spawning

Starting with a single image or video, a TVE actor could generate thousands of manipulated variants capable of circumventing hash-matching and automated detection mechanisms.
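To illustrate why even trivial variants defeat exact hash matching, here is a minimal, hypothetical sketch (not any production detection system): a cryptographic hash changes completely when a single bit of a file is flipped, so a database of exact hashes cannot recognise spawned variants. The byte values below are stand-in data, not a real image.

```python
import hashlib

def exact_hash(data: bytes) -> str:
    """Cryptographic hash, as used in simple exact-match content databases."""
    return hashlib.sha256(data).hexdigest()

# Stand-in bytes for a known propaganda image (hypothetical data).
original = bytes(range(256)) * 4

# "Spawned" variant: a single flipped bit, imperceptible in a real image.
variant = bytearray(original)
variant[0] ^= 0x01
variant = bytes(variant)

print(exact_hash(original) == exact_hash(variant))  # False: the exact match fails
```

This brittleness is why moderation systems rely on perceptual (similarity-based) hashing rather than exact hashes, though sufficiently heavy manipulation can push variants past perceptual thresholds too.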

Automated multilingual translation

Once propaganda is published, TVE actors could translate it into multiple languages, overwhelming manually operated linguistic detection mechanisms.

Fully synthetic propaganda

TVE actors could generate completely artificial TVE content. This could include speeches, images, and even interactive environments, and could overwhelm ongoing moderation efforts.

Variant recycling

TVE actors could repurpose old propaganda using generative AI tools to create “new” versions which would evade mechanisms for the hash-based detection of the original propaganda content.

Personalised propaganda

TVE actors could use AI tools to customise messaging and media to scale up the targeted recruitment of specific demographics.

Subverting moderation

TVE actors could leverage AI tools to design variants of propaganda specifically engineered to bypass existing moderation techniques.

Challenge to content moderators

The sheer volume of LLM-enabled and LLM-edited content could evade and defeat existing content moderation technologies, such as hashing, which tech companies have built over the years.
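The hashing referred to here is typically perceptual rather than cryptographic: it tolerates small edits by comparing hashes with a distance threshold. The following is a toy sketch of that idea (a simple average hash over hypothetical pixel data, not any production algorithm such as PhotoDNA), showing why a lightly edited variant still matches, and why attackers must therefore generate heavier manipulations, at volume, to slip past it.

```python
def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, set when the pixel is brighter than the mean."""
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming(a, b):
    """Number of differing bits between two hashes; small distance means 'same media'."""
    return sum(x != y for x, y in zip(a, b))

# Hypothetical 8x8 grayscale "image", flattened to 64 values.
image = [(i * 37) % 256 for i in range(64)]

# Lightly edited variant: a small brightness tweak to a few pixels,
# which changes the file bytes but barely moves the perceptual hash.
near_duplicate = list(image)
for i in range(3):
    near_duplicate[i] += 10

distance = hamming(average_hash(image), average_hash(near_duplicate))
print(distance)  # small (below a typical match threshold), so still detected
```

The defensive limit is the threshold itself: variants altered enough to exceed it, produced at generative-AI scale, are exactly the evasion pattern described above.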

As the technology develops, policymakers and analysts are highlighting concerns about LLM-enabled cyber attacks, as well as the potential use of effective chatbots for terrorist recruitment. 

The opportunities of generative AI 

Despite the risks of adversarial exploitation, generative AI also offers an opportunity to radically augment well-developed content moderation systems. Taking lessons from previous iterations of the internet, it is now crucial for tech platforms to combine efforts to mitigate the risk.

As part of its mission to disrupt terrorists online, Tech Against Terrorism will be forging cooperation to ensure governments and the tech sector alike stay ahead of this new development in the threat. Overall, we propose the following approach.

Our approach

Threat intelligence

Stay alert to and subvert all adversarial approaches used by terrorists, including those leveraging generative AI. Ensure existing content moderation technologies such as perceptual hashing and URL sharing are not evaded. 

Lean into generative AI

Use generative AI to provide a revolutionary content moderation technology, particularly for platforms that cannot afford to develop or maintain bespoke technical approaches. 

Develop generative AI

Understand the extent to which these technologies can detect terrorist content based on semantic understanding of content. 

Contact us

If you would like to discuss our work on generative AI, please contact us.