Reader's Digest – 15 May
Our weekly review of articles on terrorist and violent extremist use of the internet, counterterrorism, digital rights, and tech policy.
We interrupt this broadcast for another announcement: our latest episode of the Tech Against Terrorism Podcast is now live. In this episode, Flora Deverell and Jacob Berntsson discuss the ways in which online regulation is being pursued by companies, governments, and multilateral organisations, such as the upcoming EU-wide law on the dissemination of terrorist content. They are joined by two of the foremost voices in this space: Evelyn Douek, a lecturer in law and SJD candidate at Harvard Law School, and affiliate at the Berkman Klein Center for Internet & Society, who studies international and transnational regulation of online speech; and Daphne Keller, Director of Platform Regulation at Stanford’s Cyber Policy Center – formerly Associate General Counsel at Google and Director of Intermediary Liability at Stanford’s Center for Internet and Society – who has worked on groundbreaking intermediary liability litigation and legislation around the world. They also explore the implications of Facebook’s new Oversight Board and what this really means for governance and accountability processes, whether we should use international human rights law as a framework for ruling the internet, and why terrorist content is such an important topic in regulatory discourse.
– Motives of far-right and Islamist terrorists ‘eerily similar’: In this article, Leanne Close discusses recent findings by Professor Boaz Ganor, who examined the manifesto released by the terrorist before the Christchurch attack in March 2019. Ganor highlights significant similarities between the manifesto and Islamist terrorist propaganda, categorising these similarities into eight criteria, including altruism, defensive action, the target, modus operandi, restoring old glory, a call to action, and a sense of urgency. Whilst these factors have different ideological components for far-right and Islamist violent extremists and terrorists, they are still evident as macro-ideas in both types of extremism. In addition, the report emphasises that whilst social media platforms were swift in taking down the livestream of the Christchurch attack, the footage remains accessible due to what Ganor describes as “the lawless nature of the internet.” Ganor concludes by stressing the need for law enforcement agencies to employ data analytics to detect individuals undergoing radicalisation and prevent further terrorist activity. (Close, The Strategist, 19.05.2020)
– ISIS 2020: new structures and leaders in Iraq revealed: This report by the Center for Global Policy sheds light on the current Islamic State (IS) leadership, inferred from Iraqi intelligence. It establishes that IS maintains two high-level committees under the new “caliph”: a Shura (Consultative) Council and a Delegated Committee, the highest executive body of IS. Beyond these committees, the Center for Global Policy notes that IS as a whole is now divided into 14 provinces (wilayat), five ministries, and one department for immigration and administration. The report assesses IS’ strength at 3,500-4,000 active fighters within Iraq, as well as 8,000 non-active fighters. With this restructuring, IS’ internal communications and its rules on the use of technology also appear to have been tightened. The report concludes that this new structure of IS leadership in Iraq creates opportunities for agility, generation of funds, security operations, and recruitment. (Al-Hashimi, Center for Global Policy, 19.05.2020)
– The Caravan: Abdallah Azzam and the rise of global jihad review – recent history at its finest: Abdallah Azzam, the Palestinian cleric seen as the most influential figure in Islamist terrorist history before al-Qaeda and 9/11, is the subject of Thomas Hegghammer’s latest book, reviewed by Ian Black here. The book discusses the life, times, and significance of the man who “internationalized jihad by interpreting it theologically as a duty”. Black notably stresses Hegghammer’s conclusion that Azzam was never involved with al-Qaeda, but remained in contact with Bin Laden until Azzam’s death. In addition, the book counters the existing belief that 9/11 was a counter-reaction to the US’ promotion of the anti-Soviet jihad in Afghanistan to further its own Cold War interests. Finally, Hegghammer’s book suggests that Azzam’s killing was the work of Pakistani or Afghan intelligence, which again underscores the extent of his influence. (Black, The Guardian, 17.05.2020)
– Canada police say machete killing was ‘incel’ terror attack: A machete attack in Toronto in February, which led to the death of one woman, is being treated by Canadian police as an act of terrorism motivated by incel ideology, Leyland Cecco reports. While incel-motivated attacks have been on the rise in the US and Canada in recent years (the Toronto van attack in 2018, which killed 10 people, is also considered to have been motivated by this violent ideology), Cecco highlights that the machete attack is believed to mark the first time that law enforcement has applied terrorism charges to an incel-motivated attack. (Cecco, The Guardian, 19.05.2020)
– Exploring radical right-wing posting behaviors online: Ryan Scrivens examines how sentiment analysis-based algorithms can identify and analyse radical right-wing posting behaviours online. In doing so, Scrivens describes three types of users on the right-wing extremist forum Stormfront: high-intensity, high-frequency, and high-duration users. He concludes that whilst high-intensity posters are active over a short time period, they post negative to strongly negative messages that are clearly communicated and detailed. High-frequency users, by contrast, typically post a high volume of moderately negative content over a moderate time-frame. Finally, he identifies high-duration users as posting strongly negative content over a long period of time, particularly aimed at the Jewish community. Taken together, Scrivens concludes that radical right-wing posting behaviour is varied and complex, which has implications for law enforcement and intelligence agencies. (Scrivens, Deviant Behavior, 26.04.2020)
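As a rough illustration only – this is not Scrivens’ actual methodology – the minimal sketch below shows how sentiment scores, posting volume, and account lifespan could in principle be combined to group users along the three dimensions described above. It assumes a hypothetical list of posts with user, text, and timestamp fields, uses the off-the-shelf VADER sentiment analyser, and applies purely illustrative thresholds.

```python
# Minimal sketch: grouping users by posting intensity, frequency, and duration.
# Hypothetical data and thresholds; not the study's actual method.
from datetime import datetime
from collections import defaultdict
from nltk.sentiment.vader import SentimentIntensityAnalyzer
import nltk

nltk.download("vader_lexicon", quiet=True)
analyzer = SentimentIntensityAnalyzer()

# Hypothetical post records: user, text, and ISO-format timestamp.
posts = [
    {"user": "user_a", "text": "Example post text", "timestamp": "2019-03-01"},
    # ... more posts
]

# Collect per-user sentiment scores and posting dates.
profiles = defaultdict(lambda: {"scores": [], "dates": []})
for post in posts:
    compound = analyzer.polarity_scores(post["text"])["compound"]  # -1 (most negative) to +1 (most positive)
    profiles[post["user"]]["scores"].append(compound)
    profiles[post["user"]]["dates"].append(datetime.fromisoformat(post["timestamp"]))

for user, p in profiles.items():
    avg_sentiment = sum(p["scores"]) / len(p["scores"])
    volume = len(p["scores"])
    duration_days = (max(p["dates"]) - min(p["dates"])).days
    # Illustrative cut-offs only; a real analysis would derive groupings empirically.
    if avg_sentiment < -0.5 and duration_days < 30:
        label = "high-intensity"   # strongly negative posts over a short period
    elif volume > 100 and avg_sentiment < -0.2:
        label = "high-frequency"   # many moderately negative posts
    elif duration_days > 365 and avg_sentiment < -0.5:
        label = "high-duration"    # strongly negative posts over a long period
    else:
        label = "other"
    print(user, label, round(avg_sentiment, 2), volume, duration_days)
```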
– Boogaloodemic: In the latest episode of the SH!TPOST podcast, Tess Owen from Vice News discusses how the Boogaloo meme is translating from an online phenomenon into real-world activity. (SH!TPOST Podcast, 12.05.2020)
To learn more about this topic, you can listen to the Tech Against Terrorism Podcast episode on far-right violent extremists and meme culture, in which we were joined by Maik Fielitz, a fellow at the Centre for Analysis of the Radical Right, and Lisa Bogerts, an expert on visual communication, to discuss the use of memes as an unconventional strategy for violent extremists to easily spread their ideology.
– Early warning? Opportunities and limits of automated internet monitoring: In this article, Dr. Robert Pelzer provides an overview of the state of automated tools designed to assist with the online identification of radicalised individuals and of “terrorist intentions in online behaviour.” Taking the example of the “Profile Risk Assessment Tool” (PRAT), which aims to identify future lone offenders and is based mostly on research into pre-attack online communication and language indicators, Pelzer underlines the current limits of automated tools in identifying future offenders. The main limitation identified by Pelzer is the scarcity of available evidence, owing to “insufficient obtainable data.” Even when a potential attacker is identified, Pelzer stresses that the output of any automated predictive tool would need further interpretation to establish a “real intention to commit violence.” Pelzer concludes by underlining the important ethical and legal considerations that surround the potential automated analysis of all user-generated online content. (Pelzer, GNET, 14.05.2020)
– Unconvicted terrorism suspects face indefinite controls under UK bill: The new counter-terrorism and sentencing bill currently being discussed by the UK Parliament – which comes in response to the two most recent terrorist attacks in London, both committed by terrorist offenders recently released from jail – would bring significant changes to current counter-terrorism legislation in the UK. Jamie Grierson reports that the proposed bill would notably remove the two-year cap on the use of terrorism prevention and investigation measures (Tpims), which are mostly based on secret intelligence, and would lessen the burden of proof necessary for such measures to be imposed. Providing an overview of the changes the bill would bring, which also include a “serious terrorism sentence” carrying a minimum 14-year jail term, Grierson also discusses concerns that the expanded application of Tpims could curtail individuals’ liberty. (Grierson, The Guardian, 20.05.2020)
– Introducing the Twitch Safety Advisory Council: Last week, Twitch (a streaming platform especially popular amongst gamers) announced the formation of its Safety Advisory Council. The new council will work to keep Twitch’s user base “safe and healthy” by contributing to Twitch’s policy processes and the development of new safety and moderation features, promoting the “interests of marginalized groups”, and working to identify trends that “could impact the Twitch experience” and ensure “healthy streaming and work-life balance habits.” To form the Council, Twitch has recruited both experts on online safety – amongst whom is Emma Llanso, Director of the Free Expression Project at the Center for Democracy and Technology – and Twitch creators. Through this balance between creators and experts, Twitch aims to ensure a proper understanding of the Twitch platform and its community. (Twitch, 14.05.2020)
Tech Against Terrorism had the pleasure of welcoming Emma Llanso as a guest speaker on our podcast episode on how we fight terrorism while protecting human rights, and at our recent webinar, held in collaboration with the GIFCT, on transparency reporting for smaller tech companies.
– Facebook’s Oversight Board: A meaningful turn towards international human rights standards?: In this article, Sejal Parmar examines the recently announced Facebook Oversight Board, which she identifies as the latest move in Facebook’s shift towards a more international human rights-based approach. Parmar begins by reviewing how this move from a “U.S constitutional-law paradigm towards an international human right approach in content moderation” has taken place since 2018, with the company increasingly relying on international human rights standards (such as the International Covenant on Civil and Political Rights). Whilst Parmar welcomes this shift, as it provides Facebook with an increased sense of global legitimacy, she also underlines the limitations and challenges facing the Board. Notably, Parmar identifies the circumscription of the Board’s jurisdiction by its bylaws as one of its main limitations, as the Board will not consider content already deemed illegal or unlawful in a jurisdiction connected to that content. She also stresses the challenges for the Board regarding the interplay between freedom of speech and other human rights, such as the right to privacy or the right to equality before the law. (Parmar, Just Security, 20.05.2020)
– Supreme Court rejects lawsuit against Facebook for hosting terrorists: Adi Robertson reports here that by declining to hear the Force v. Facebook case, the US Supreme Court has reaffirmed Section 230 of the Communications Decency Act as one of the main lines of defence for social media companies accused of allowing the spread of terrorist content on their platforms. As Robertson reminds us, Force v. Facebook was a 2016 lawsuit brought against the social media company, arguing that “Facebook knowingly hosted accounts belonging to Hamas, which the US classifies as a terrorist organization.” Whilst Section 230 protects online platforms from being sued for user-generated content, Robertson underlines that the lawsuit argued that Facebook’s algorithm played a role in promoting terrorist content. However, the Second Circuit appeals court found this argument unconvincing. (Robertson, The Verge, 18.05.2020)
For any questions, please get in touch via:
contact@techagainstterrorism.org
Tech Against Terrorism is an initiative launched by the United Nations Counter Terrorism Executive Directorate (UN CTED) in April 2017. We support the global technology sector in responding to terrorist use of the internet whilst respecting human rights, and we work to promote public-private partnerships to mitigate this threat. Our research shows that terrorist groups – both jihadist and far-right – consistently exploit smaller tech platforms when disseminating propaganda. At Tech Against Terrorism, our mission is to support smaller tech companies in tackling this threat whilst respecting human rights and to provide companies with practical tools to facilitate this process. As a public-private partnership, the initiative has been supported by the Global Internet Forum to Counter Terrorism (GIFCT) and the governments of Spain, Switzerland, the Republic of Korea, and Canada.