Tech Against Terrorism has investigated terrorist and violent extremist experimentation with generative artificial intelligence (AI). We have identified users exploiting generative AI tools to bolster the creation and dissemination of propaganda in support of both violent Islamist and neo-Nazi ideologies, and we have archived over 5,000 pieces of AI-generated content shared in terrorist and violent extremist spaces. While this represents only a fraction of such material available on the internet, it points to the challenge faced by content moderators.
We identified:
While these examples suggest only tentative experimentation with generative AI, they are likely indicative of how terrorists and violent extremists will exploit the technology in the future.
We have also developed a taxonomy that categorises the ways in which terrorists and violent extremists could use generative AI, both now and in the future. One particular risk is the undermining of hash-based detection tools. These mechanisms are used throughout the tech industry to match uploads against databases of known terrorist material, and AI-enabled variations of pre-existing content, alongside wholly new AI-generated material, could render these tools obsolete.
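To illustrate why AI-enabled variation poses this risk, the sketch below shows, under simplified assumptions, how hash-based matching behaves against altered or regenerated content. It is a toy example built only on Python's standard library; the `average_hash` and `hamming` helpers and the placeholder byte strings are illustrative inventions, not a representation of production systems such as PhotoDNA or PDQ. The point it demonstrates is that a cryptographic digest changes completely when the underlying bytes change at all, while even a tolerance-based perceptual hash only matches content that shares pixel structure with the original, which an AI-regenerated variant does not.

```python
import hashlib
import random

# --- Exact (cryptographic) hash matching -------------------------------------
# A hash database stores digests of known material; any change to the bytes,
# however minor, produces an entirely different digest, so altered or
# regenerated content is not matched.

known_digests = {hashlib.sha256(b"original propaganda file bytes").hexdigest()}

def exact_match(content: bytes) -> bool:
    """Return True if the content's SHA-256 digest is in the known-material set."""
    return hashlib.sha256(content).hexdigest() in known_digests

print(exact_match(b"original propaganda file bytes"))   # True: exact re-upload is caught
print(exact_match(b"original propaganda file bytes."))  # False: a single added byte evades matching

# --- Toy perceptual (average) hash -------------------------------------------
# Perceptual hashes tolerate small pixel-level changes such as re-encoding
# noise, but an AI-regenerated image shares no pixel structure with the
# original, so its hash falls far outside any plausible matching threshold.

def average_hash(pixels: list[int]) -> int:
    """Build a 64-bit hash: one bit per pixel, set if the pixel is above the mean."""
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Count differing bits between two hashes."""
    return bin(a ^ b).count("1")

random.seed(0)
original = [random.randint(0, 255) for _ in range(64)]                       # 8x8 grayscale "image"
noisy = [max(0, min(255, p + random.randint(-5, 5))) for p in original]      # light re-encoding noise
regenerated = [random.randint(0, 255) for _ in range(64)]                    # AI-regenerated "image"

h_orig = average_hash(original)
print(hamming(h_orig, average_hash(noisy)))        # small distance: likely still within a matching threshold
print(hamming(h_orig, average_hash(regenerated)))  # large distance: treated as unknown content
```

The design point is simply that both approaches depend on the uploaded content being derived, byte-for-byte or pixel-for-pixel, from material already in the database; content regenerated from scratch by a model defeats that assumption.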
However, while generative AI is at risk of exploitation, it also offers significant opportunities for countering terrorist use of the internet. Tech Against Terrorism will continue to work with generative AI platforms, civil society, academia and governments to ensure all relevant parties are best equipped to mitigate this threat and to provide proactive solutions.