
Reader's Digest – 20 November 2020

Our weekly review of articles on terrorist and violent extremist use of the internet, counterterrorism, digital rights, and tech policy.

Webinar Alert!

How can UN agencies support the counterterrorism efforts of smaller tech platforms whilst safeguarding human rights and freedom of expression? What are the existing avenues of cooperation between tech platforms and intergovernmental organisations? Our upcoming webinar, "Cooperation between the UN and smaller tech platforms in countering use of the internet for terrorist purposes", aims to shed light on these issues. The webinar will be held on Wednesday, 9 December, at 4pm GMT, and is organised in partnership with UN CTED. You can register here.
 
When tackling terrorist use of the internet, is content removal really our only option? Our upcoming webinar - on Wednesday, 16 December, at 5pm GMT - will look at what alternative steps tech companies can take. We have an exciting panel of experts and practitioners lined up, so don't forget to register here.

Top stories

  • In an article this week, Human Rights Watch cautions that the draft European Union legislation criminalising the dissemination of terrorist content online presents risks to both freedom of speech and the rule of law.
     
  • Facebook has announced that it will use AI to manage its content moderation queue and prioritise which content needs further review by human moderators.
     
  • Our director Adam Hadley, as founder of our parent organisation the Online Harms Foundation, was quoted in an article on a proposal by the UK Labour Party to introduce fines for tech companies that fail to take down anti-vaxxing content.
     
  • The International Centre for Counter-Terrorism has released a special edition on the Islamic State’s (IS) global insurgency and its implications for a counterstrategy.
  • Facebook has published its latest transparency report on community standards enforcement for both Facebook and Instagram, covering July to September 2020. For the first time, the report also includes data related to hate speech. In the period covered, Facebook removed 9.7 million pieces of terrorist content, 99.7% of which was proactively found before users reported it.

Tech Policy

Can social networking platforms prevent polarisation and violent extremism?: This article by Vivian Gerrand examines whether social media companies can prevent polarisation and extremism, and argues that content moderation can only succeed alongside offline efforts, since on its own it does not address the underlying vulnerabilities that might lead someone to extremism. She shows how disinformation around the US election, as well as the recent terrorist attacks in Paris and Nice, has reignited the debate on how social media companies should regulate hate speech and extremism spreading on their platforms. She highlights that the current emphasis is on the algorithmic identification of harmful content and the deplatforming of extremist groups and actors, and argues that this online censorship does not tackle the factors that might make someone vulnerable to radicalisation. She argues that the underlying causes that might make someone vulnerable to extremism, or draw them towards conspiracy narratives such as QAnon, are rooted in socio-economic factors. An intersectional, "crossline" approach - combining targeted online interventions with compassionate, democratic governance that rebuilds public trust - is therefore necessary to make people more resilient to polarisation and violent extremism, both offline and online. She highlights that rebuilding trust is particularly urgent, as COVID-19 has created a vulnerable target audience for terrorist and extremist actors, and fertile ground for conspiracy theories to take hold. (Gerrand, Open Democracy, 13.11.20)

Far-right violent extremism and terrorism

What we get wrong about online radicalisation: Dimitrios Kalantzis discusses how the public's understanding of online radicalisation is incomplete, and how the role of the offline sphere is often overlooked, particularly by governments in their responses to radicalisation. Kalantzis consults a former neo-Nazi, who states that he was radicalised by watching YouTube videos, but who notes that, when recruiting others, he used the online sphere to appeal to the underlying causes that make someone vulnerable to radicalisation, which are political, emotional, and economic in nature. He also highlights how echo chambers and the false sense of community found online can aid someone's radicalisation towards acting offline. Kalantzis stresses that, whilst online hate speech might be at an unprecedented high, this does not necessarily mean that individuals espousing this hate become violent offline. In some cases, such as the 2018 attack on a Pittsburgh synagogue, the attacker's online activity on Gab appears to have foreshadowed the violence; in others, the direct pathway from the online sphere to violence is unclear. Kalantzis then discusses governments' responses to terrorism, which he shows are centred on online radicalisation and neglect the importance of the offline realm. The article concludes that more research is needed into how we understand the role of the online sphere in radicalisation, why it might lead to violence in some cases and not in others, and how counter-narratives could be used to prevent this from occurring. (Kalantzis, Vox-Pol, 18.11.20)

Islamist terrorism

Vienna attack: the pathway of a prospective (foreign) terrorist fighter: This piece by Tanya Mehra and Julie Coleman analyses whether the violent Islamist-inspired attack in Vienna on 2 November could have been prevented, as well as the Austrian government's response to the attack. Mehra and Coleman first assess the uncertainty over whether the perpetrator followed deradicalisation programmes during his parole, after being released from prison for attempting to travel to join the Islamic State (IS). They then analyse the opportunities the Austrian government had to intervene before the attack took place, including the attacker's efforts to acquire ammunition - which the Slovakian authorities alerted Austria to, but which was not followed up by the Austrian security services - and the attacker's Instagram post hours before the attack. Regarding the latter, Mehra and Coleman argue that the officials in charge of monitoring the perpetrator during his parole should have picked up on the post. Finally, the article analyses the Austrian government's response to the attack, namely its clampdown on political Islam and the introduction of potential life sentences for terrorist offenders. Mehra and Coleman argue that longer sentences do not necessarily lead to a safer society, nor do they respect human rights. They also explain that a clampdown on political Islam further marginalises the Muslim community, as it goes beyond the rare violent manifestations of political Islam and restricts freedom of religion. In conclusion, they argue that the case should be investigated further, to identify the shortcomings of existing measures, before new ones are introduced. (Mehra and Coleman, International Centre for Counter-Terrorism, 16.11.20)

We are also listening to the Lawfare podcast discussing the effect of the killing of al-Qaeda's second-in-command, Abu Muhammad al-Masri, on the group's leadership. The podcast also discusses al-Qaeda's leader, Ayman al-Zawahiri, who is rumoured to have died as well, and what this would mean for the organisation if true.

Counterterrorism

Former extremists play a key role in combating terrorism: This article by Ryan Scrivens analyses the advantages of involving former extremists in countering radicalisation and extremism, and provides key insights into how research has benefited from their first-hand accounts. Scrivens defines former extremists as "individuals who subscribed to and/or perpetuated violence in the name of a particular extremist ideology and have since publicly and/or privately denounced extremist violence." He argues that, despite criticisms raised about the reliability and trustworthiness of former extremists as research participants, they provide key insight into what factors might lead to someone's radicalisation, how radicalisation progresses to violent extremism, what might make someone leave extremism behind, and how to counter violent extremism. Scrivens highlights how former extremists can inform how practitioners gather intelligence, how they intervene in someone's radicalisation process, and how counter-narratives can be utilised. He concludes that more research should include former extremists. (Scrivens, RANTT, 16.11.20)
