Press release: 31 May 2024 - Tech Against Terrorism welcomes OpenAI's first report on disrupting covert influence operations
31 May 2024
2 min read
Tech Against Terrorism welcomes OpenAI’s first-ever report on how its technology is being abused by threat actors and the steps the company is taking to disrupt that exploitation.
In releasing its disinformation report, ‘AI and Covert Influence Operations: Latest Trends’, OpenAI gives crucial insight into how its groundbreaking platform is being used to deceive and covertly influence internet users. Deception and disinformation are explicit terrorist tactics, exemplified by the deployment of automated chatbots that regurgitate extremist perspectives in order to radicalise vulnerable users.
As Tech Against Terrorism established in its own report last year, ‘The Early Terrorist Adoption of Generative AI’, this emerging technology is increasingly important to the production and proliferation of terrorist content. Since producing such content is a time-consuming and labour-intensive endeavour, it is no surprise that threat actors have sought out a technological solution that promises to revolutionise their productivity.
Tech Against Terrorism has seen generative AI employed by terrorists and violent extremists not only to automate the creation of content from simple narrative prompts, but also to disseminate propaganda and amplify its reach. Tech Against Terrorism has archived over 20,000 pieces of content created using generative AI; this accounts for only a small portion of the significantly greater quantity that is highly likely to be circulating on messaging apps. The terrorist practices we have observed are mirrored in warnings issued by the Internet Watch Foundation, which has raised concerns about offline LLMs being deployed to create child sexual abuse material. While experimentation with generative AI by threat actors, much like the technology itself, may still be in its infancy, the genie is nonetheless out of the bottle, and the threat landscape has been permanently altered.
While threat actors have a head start in exploiting the capabilities of generative AI, those of us charged with disrupting their activity have a unique opportunity to turn the same technology against the threat. First, generative AI should be used to understand terrorist content online. Second, it should be leveraged to detect the scale of the threat. Finally, it should be deployed to intervene against the threat, supporting counter-radicalisation and thwarting dissemination.
Commenting on OpenAI’s report, Adam Hadley, founding Executive Director of Tech Against Terrorism, said: “We welcome OpenAI’s recognition that their service is vulnerable to exploitation by terrorists and violent extremists. This is a crucial first step towards limiting the range of deceptive practices available to threat actors. Generative AI has enormous potential as an investigative and analytical tool, which Tech Against Terrorism has sought to harness through previous partnerships with Microsoft and Azure. OpenAI’s willingness to deploy their own technology in this way to prevent further exploitation of their platform is an equally laudable approach, and one that we hope other platforms will emulate.”
In issuing its report, OpenAI has also demonstrated best practice in transparency. Candour from tech platforms is essential to building trust with their users, and OpenAI’s report sets a standard of reporting, as well as a practical approach, that we urge the wider tech sector to adopt.