Our weekly review of articles on terrorist and violent extremist use of the internet, counterterrorism, digital rights, and tech policy.


We interrupt this broadcast for a special announcement: our latest episode of the Tech Against Terrorism Podcast is live! In this episode, join Maygane Janin and Flora Deverell as they discuss how terrorists and violent extremists exploit gaming culture for their own ends. They are joined by Linda Schlegel, a senior editor at The Counterterrorism Group and a regular contributor to the European Eye on Radicalization, where she recently published a number of articles on the exploitation of gaming culture; and Dr. Nick Robinson, an associate professor in politics and international studies at the University of Leeds who has been researching the links between videogames, social media, militarism, and terrorism for over a decade. They address in particular the “gamification of radicalisation” and the exploitation of gaming platforms, as well as why terrorist organisations developing their own games to serve their ideologies and purposes is less prevalent now than it used to be.


Islamist terrorism

‘We are used to virus called bombs’: Somalia has not been spared by the COVID-19 pandemic. In a country already facing famine and terrorism, the impact of the pandemic could be particularly significant, as Subban Jama and Ayan Abdullahi analyse here. Jama and Abdullahi stress how the health crisis could benefit al-Shabab, which is known to capitalise on power vacuums, states’ security gaps, and economic hardship – all triggering factors likely to be heightened as the virus spreads in the country. (Jama and Abdullahi, Foreign Policy, 12.11.2020)


Far-right violent extremism and terrorism

It’s time to get serious about sanctioning global white supremacist groups: In an “unprecedented” move last month, the US State Department designated the Russian Imperial Movement – an ultranationalist white supremacist group that trained individuals to lead attacks that were then carried out in Sweden – as a global terrorist organisation. This was the first designation of its kind for a far-right group, which can now be targeted by US government financial sanctions. Commenting on this move, Daniel Glaser and Hagar Chemali argue that this designation is important in targeting far-right violent extremist networks, but stress that it should be followed by an “intelligence and aggressive follow-up strategy.” In particular, the authors emphasise the importance of a financial campaign led by the Treasury Department to dismantle far-right violent extremist and terrorist networks. So far, no such groups have been designated by the Treasury Department. (Glaser & Chemali, Washington Post, 11.05.2020)

Far-right Britain First leader Paul Golding banned from YouTube: Scram reports that Paul Golding – leader of Britain First – has had his channel banned from YouTube. According to Scram, Golding’s channel was set up in lieu of Britain First’s channel, which was removed in 2019, and has now been banned for “multiple or severe violations” of the platform’s hate speech policy. Scram further reports that, in reaction to the ban, Golding has announced that Britain First would now “concentrate its efforts on Russian social media network VK.” (Scram News, 11.05.2020)

Britain First was also banned from TikTok for violation of the company’s hate speech policy last month, alongside Tommy Robinson – co-founder and former leader of the English Defence League. You can read TellMAMA’s report about this here.


Counterterrorism

– Weighing the value and risks of deplatforming: In this Insight piece, Ryan Greer dwells on the unintended consequences of deplatforming as the default means of addressing online extremist content. Deplatforming – removal from an online platform following serious and repeated violations of a platform’s policy – can have a major financial impact on extremist actors and reduce activity on extremist sites, according to Greer. However, this default solution is not without drawbacks. Greer provides an overview of the risks of deplatforming, including terrorist and violent extremist attempts to circumvent bans, driving actors to fringe platforms with little (or no) moderation, and eliciting a heightened sense of grievance that leads individuals to further communicate with like-minded extremists. He also stresses that deplatforming can hinder law enforcement investigations by pushing terrorists and violent extremists to online spaces that are more difficult for investigators to access. (Greer, GNET, 11.05.2020)

Remembering Toronto: Two years later, incel terrorism threat lingers: Two years after an incel-motivated van attack killed 10 people in Toronto, Jacob Ware, Bruce Hoffman, and Ezra Shapiro assess the state of the threat of incel violent extremism and terrorism. The piece analyses incels’ online behaviour, from their move to the dark web and their practice of shitposting to the fragmented nature of the incel community. Ware, Hoffman, and Shapiro call for increased research on the issue to counter this phenomenon: not only in relation to the incel community, but also to understand the “role that sexual frustration and male aggrieved entitlement” can play in violent extremist radicalisation in other movements. (Hoffman, Shapiro, Ware, GNET, 06.11.2020)

You can find their full analysis on “Assessing the threat of incel violence” here.


Tech policy

– An update on combating hate and dangerous organizations: To mark the first anniversary of the Christchurch Call to Action, the founders of the Global Internet Forum to Counter Terrorism (GIFCT) – Amazon, Facebook, Google, Microsoft, and Twitter – released a statement reasserting their commitment to preventing terrorist and violent extremist (T/VE) exploitation of their platforms. In this statement, the GIFCT highlights the crisis protocol established following the attack in Christchurch and its continuous proactive work to counter T/VE use of the internet, especially through the launch of dedicated working groups.

Tech Against Terrorism is pleased to announce that we will be chairing the working group on “technical approaches.” You can read more about this in our press release.

Following the GIFCT statement, Facebook provided an update on the company’s commitment to countering hate and dangerous organisations on its platform. Facebook notably elaborates on its playbook of automated techniques to detect terrorist content and on the progress made in this regard. Facebook is now able “to detect text embedded in images and videos in order to understand its full context,” and is expanding this technology – originally developed to identify Islamic State and al-Qaeda content – to other violent extremist ideologies. In the update, Facebook also details its enforcement tactics and how the company is learning about banned organisations’ attempts to bypass detection and removal in order to better counter this phenomenon. Facebook also takes this opportunity to share metrics on content removals on its platform since the beginning of 2020, having recently released its latest transparency report on Community Standards enforcement. (Facebook, 14.05.2020)

– Community standards report, May 2020 edition: Facebook has just released its latest transparency report on policy enforcement on Facebook and Instagram, covering October 2019 to March 2020. Amongst the new metrics included in this report, Facebook reports for the first time on the number of appeals made by users against content removal decisions on Instagram. Facebook also elaborates on its improved use of technology to proactively locate violating content. (Facebook, 12.05.2020)

– Twitter assigné en justice pour son « inaction massive » face aux messages haineux (Twitter sued over its “massive inaction” on hateful messages): Following a recent evaluation of Twitter’s content moderation practices on hate speech – an evaluation focused on hateful and racist content that would be considered illegal under French law – four civil society organisations in France are filing a lawsuit against Twitter. The organisations are requesting that the court designate a judicial expert in the matter, to whom Twitter would have to hand over all documents regarding its content moderation processes. If this request is approved, the organisations hope that the “number, location, nationality, language and profile” of Twitter moderators will be disclosed. They are also asking Twitter to report on the number of tweets reported for incitation à la haine (incitement to hatred) and apologie des crimes contre l’humanité (glorification and justification of crimes against humanity). This report should also include details of Twitter’s criteria for processing such content. (Article in French, Untersinger, Le Monde, 12.05.2020)

– Très contestée, la « loi Avia » contre la cyberhaine devient réalité (Highly contested, the “Avia law” against cyberhate becomes reality): The French National Assembly yesterday approved new legislation to counter “cyberhate.” Julien Lausson reports here on the new law, set to come into effect on 1 July, which will cover terrorist content – including content glorifying terrorism – as well as a broad range of online content deemed to be “provoking hatred, violence and discrimination, condoning certain crimes, committing aggravated insults, denying crimes against humanity”. Under this law, tech companies will have 24 hours to remove actioned content from their platforms or risk a fine of €1.25 million (or up to 4% of a company’s global annual turnover). For some terrorist and child sexual abuse content actioned by a public authority, the removal deadline will be brought down to one hour. In addition, the law allows for the blocking of “mirror sites” that archive hateful content and for the establishment of a special prosecutor’s office to deal with hateful content online. The law has drawn much criticism, notably with regard to its broad scope, which the National Human Rights Commission deemed a disproportionate threat to freedom of expression. Other commentators have stressed the difficulty platforms will face in complying with these new obligations. (Article in French, Lausson, Numerama, 13.05.2020)

– What kind of oversight board have you given us?: With the first 20 members of Facebook’s Oversight Board announced last week, Evelyn Douek provides a refresher of her previous analysis of the Board. In this article, Douek looks at which cases the Board will take, how its cases will be chosen, and what standards will inform its work, and analyses how big the Board’s impact will be. Douek concludes that, although the Board has some limitations, it represents the “least-worst option” as a middle way between platform-driven decision-making and “heavy-handed government involvement” in speech regulation. (Douek, University of Chicago Law Review, 11.05.2020)

Spoiler alert: Evelyn will be a guest on the next episode of the Tech Against Terrorism Podcast, discussing both the Oversight Board and online regulation in general. Watch this space!

The Columbia Journalism Review, via its Galley series, has hosted insightful discussions on the Board with a range of experts, including Evelyn Douek, UN Special Rapporteur on Freedom of Expression David Kaye, Rebecca MacKinnon, Daphne Keller (another future podcast guest!), Alex Stamos, and (board member) Alan Rusbridger. Do check them out.


For any questions, please get in touch via:
[email protected]


Background to Tech Against Terrorism

Tech Against Terrorism is an initiative launched by the United Nations Counter Terrorism Executive Directorate (UN CTED) in April 2017. We support the global technology sector in responding to terrorist use of the internet whilst respecting human rights, and we work to promote public-private partnerships to mitigate this threat. Our research shows that terrorist groups – both jihadist and far-right terrorists – consistently exploit smaller tech platforms when disseminating propaganda. At Tech Against Terrorism, our mission is to support smaller tech companies in tackling this threat whilst respecting human rights and to provide companies with practical tools to facilitate this process. As a public-private partnership, the initiative has been supported by the Global Internet Forum to Counter Terrorism (GIFCT) and the governments of Spain, Switzerland, the Republic of Korea, and Canada.