The European Union (EU) is an influential voice in the global debate on regulation of online speech. For that reason, two upcoming regulatory regimes might – in addition to shaping EU digital policy – create global precedents for how to regulate both online speech generally and terrorist content specifically.
This section covers the European Union's regulatory framework, proposed regulation, key organisations and forums, collaborative schemes, and key takeaways for tech platforms.
EU counterterrorism strategy
The EU's Counter-Terrorism Strategy, adopted in 2005, provides a framework for the Union to respond to terrorism across four strands: prevent, protect, pursue, and respond. Whilst the strategy does not focus on terrorist use of the internet, it does mention the need to counter this as part of its "prevent" strand.
Many of the texts and bodies involved in tackling terrorist use of the internet in the EU were established around 2015. In April 2015, the EU adopted the European Agenda on Security, which addresses at length the prevention of terrorism and of radicalisation leading to terrorism, including terrorist use of the internet. The Agenda also committed the EU to setting up two collaborative schemes: Europol's EU Internet Referral Unit (EU IRU) and the EU Internet Forum.
The key regulatory document guiding the EU-wide counterterrorism response is Directive (EU) 2017/541 (also known as the "Terrorism Directive").[1] The Directive replaced previous texts (such as Council Framework Decision 2002/475/JHA) and provides definitions of key terms, including "terrorist group", "terrorist offences", and terrorist propaganda ("public provocation to commit a terrorist offence"). The Directive was partly introduced to better reflect the need to tackle terrorist use of the internet, and it lays down guidelines for Member States to address this threat. For example, the Directive instructs Member States to ensure the "prompt removal" of online terrorist content, whilst stressing that such efforts should be based on an "adequate level of legal certainty" and accompanied by appropriate redress mechanisms.
Online terrorist content: current regulatory landscape
The main legal act outlining tech companies' responsibilities with regard to illegal and harmful content is the E-Commerce Directive of 2000. Whilst initially meant to break down obstacles to cross-border online services in the EU, the E-Commerce Directive also exempts tech companies from liability for illegal content (including terrorist content) that users create and share on their platforms, provided they act "expeditiously" to remove it upon becoming aware of it.[4] Further, Article 15 establishes that tech companies have no general obligation to monitor their platforms for illegal content. This arrangement is being reconsidered by the EU, both through the proposed regulation to combat online terrorist content and through the Digital Services Act.
In 2018, the EU updated its Audiovisual Media Services Directive (AVMSD), which governs Union-wide coordination of national legislation on audiovisual media services (such as television broadcasts), to include online video-sharing platforms (VSPs). It encourages Member States to ensure that VSPs under their jurisdiction comply with the requirements set out in the AVMSD, including preventing the dissemination of terrorist content. In a communication, the European Commission specified that VSP status primarily concerns platforms that have the sharing of user-generated video content as their main purpose or as one of their core purposes, meaning that in theory the AVMSD could apply to social media platforms on which videos are shared, including via livestreaming functions.
Proposed regulation on preventing the dissemination of terrorist content online
In September 2018, the European Commission introduced a proposed "regulation on preventing the dissemination of terrorist content online". The regulation has since undergone the EU's legislative trilogue process of negotiation between the Commission, Parliament, and the Council. To date, only Parliament's reading of the proposal has been published in full.
The proposal suggests three main instruments to regulate online terrorist content:
- Removal orders, requiring tech companies to remove content flagged by national competent authorities within one hour of notification.
- Referrals, whereby authorities flag content to tech companies for assessment against the companies' own Terms of Service.
- Proactive measures, requiring tech companies to take steps to prevent the dissemination of terrorist content on their platforms.
The Commission's proposal drew criticism from academics, experts, and civil society groups. Further, the proposed regulation was criticised by three separate UN Special Rapporteurs, the Council of Europe, and the EU's own Fundamental Rights Agency, which said that the proposal possibly violates the EU Charter of Fundamental Rights. Criticism mainly concerns the short removal deadline and the proactive measures instrument, which according to critics will lead to companies erring on the side of removal to avoid penalties, risking what legal scholar Danielle Citron has termed "censorship creep".[5]
Whilst the regulation clarifies that its definition of "terrorist content" is based on the Terrorism Directive, there have been concerns that companies – due to the risk of fines – might remove content shared for journalistic and academic purposes. Criticism has also been raised against the referral mechanism, since it allows tech companies' Terms of Service, as opposed to the rule of law, to dictate what content gets removed for counterterrorism purposes; content moderation expert Daphne Keller has called this the "rule of ToS". At Tech Against Terrorism, we have cautioned against the proposal's potential negative impact on smaller tech companies and warned against the regulatory fragmentation it risks creating. We also encourage the EU to provide more clarity on the evidence base motivating the one-hour removal deadline.
The EU Parliament's reading of the proposal, unveiled in April 2019, introduced several changes, for example deleting the referral instrument and limiting the scope to the "public" dissemination of terrorist content so as not to cover private communications and cloud infrastructure. These changes were largely welcomed by civil society groups. A version of the proposal worked on by the Council, which reintroduces some of the elements that Parliament modified, was leaked in March 2020, but there has been no confirmation of what the final version of the regulation will look like.
EU-led voluntary collaborative forums to tackle terrorist use of the internet
Whilst there is currently no EU-wide legislation specifically regulating terrorist use of the internet, the EU has been influential in encouraging tech company action on terrorist content via a number of forums.
[1] In EU law-making, a "Directive" is a legislative act that sets out goals all EU countries must achieve, without specifying exactly how to reach them. For more information, see: https://europa.eu/european-union/law/legal-acts_en
[2] The negotiation process between the EU's three legislative bodies: the European Commission (which proposes legislation), the European Parliament, and the Council of the EU; the latter two can suggest changes to the proposed text before its adoption.
[3] Unlike a Directive, a Regulation is a binding legislative act that must be applied in its entirety across the EU, without needing to be transposed into national law.
[4] This bears some similarity to Section 230 of the US Communications Decency Act, which exempts tech companies from legal liability for user-generated content hosted on their platforms.
[5] By "censorship creep", Citron means that online counterterrorism efforts or mechanisms risk taking on functions and reach beyond their intended purpose, potentially leading to censorship of legal and legitimate speech online.
Resources:
Hadley, Adam & Berntsson, Jacob (2020), "The EU's terrorist content regulation: concerns about effectiveness and impact on smaller tech platforms", VOX-Pol
Tech Against Terrorism (2020), "Summary of our response to the EU Digital Services Act consultation process", Tech Against Terrorism
Kaye, David, Ní Aoláin, Fionnuala & Cannataci, Joseph (2018), Letter from the mandates of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression; the Special Rapporteur on the right to privacy; and the Special Rapporteur on the promotion and protection of human rights and fundamental freedoms while countering terrorism, UN OHCHR
Citron, Danielle (2018), "Extremist Speech, Compelled Conformity, and Censorship Creep", Notre Dame Law Review
Keller, Daphne (2019), "The EU's terrorist content regulation: expanding the rule of platform terms of service and exporting expression restrictions from the EU's most conservative member states", Stanford Cyber Policy Center
Article 19, "Article 19's Recommendations for the EU Digital Services Act"
Access Now (2020), "How the Digital Services Act could hack Big Tech's human rights problem"
Europol (2019), EU IRU 2018 Transparency Report
Europol (2020), EU IRU 2019 Transparency Report