You can access the ORS Handbook here.
The United Kingdom has set out an ambitious online regulatory framework in its Online Harms White Paper, aiming to make the UK “the safest place in the world to be online” by countering online harms ranging from cyberbullying to terrorist content. This framework is yet to come into effect, but the UK has approved an interim regime to fulfil its obligations under the EU’s Audiovisual Media Services Directive, with which it must comply for the duration of the Brexit transition period. The UK also has extensive counterterrorism legislation criminalising the viewing and sharing of terrorist content online.
UK’s regulatory framework:
- The Online Harms White Paper was published in April 2019 and outlines the key principles for online regulation in the UK. The paper proposes that tech companies should have a “mandatory duty of care” to protect users from “online harms”. The proposal has since undergone public consultation, and a bill is expected to be introduced to Parliament in 2021.
- The Terrorism Act 2000 is a cornerstone of UK terrorism legislation. Section 58 of the Act specifies the offence of possessing information, including via online means, that is “useful to a terrorist”.
- The Terrorism Act 2006 creates new terrorism offences and amends existing ones. A relevant example is Section 2, which makes it an offence to disseminate terrorist publications for “terrorist purposes”.
- The Counter-Terrorism and Border Security Act 2019 amends section 58 of the Terrorism Act 2000, extending the offence to cover obtaining or viewing such material online.
- The Interim Approach, put in place whilst awaiting introduction of the Online Harms regime, is due to come into effect on 1 November 2020. It sets out an interim regime for online video-sharing platforms (VSPs) to meet the UK’s content regulation obligations under the EU’s Audiovisual Media Services Directive (AVMSD) 2018. The Government has transposed the VSP framework into Part 4B of the Communications Act 2003 (“the Act”).
Main body overseeing online regulation:
- Ofcom, the UK’s communications regulator, oversees the new regulations, both under the Interim Approach and under the proposed online harms legislation.
Key bodies and institutions:
- The UK Counter Terrorism Internet Referral Unit (CTIRU) detects and refers terrorist content to tech platforms for assessment against companies’ Terms of Service.
- The Department for Digital, Culture, Media and Sport (DCMS) is partly responsible for legislation relating to the Internet and media broadcasting. Together with the Home Office, the DCMS initiated the Online Harms White Paper.
- The Home Office is responsible for security and policing in the UK, including counterterrorism and terrorist use of the internet.
- The Independent Reviewer of Terrorism Legislation scrutinises and reports on terrorism legislation in the UK. The current reviewer is Jonathan Hall.
Key takeaways for tech companies:
Interim Regime:
- As the regulation is set to come into effect on 1 November 2020, Ofcom has stated that it expects VSPs to assess whether they fall within the remit of the new legislation and to conduct risk assessments to identify potential harms to their users
- Where they fall within the remit of the law, VSPs, regardless of their size, need to protect users under the age of 18 from accessing restricted material[1]
- Regardless of their size, VSPs need to protect all users from “relevant harmful material”
  - Relevant harmful material constitutes “any material likely to incite violence or hatred against a group of persons or a member of a group of persons based on particular grounds”
  - “It also refers to the inclusion of any material which would be a criminal offence under laws relating to terrorism, child sexual exploitation or racism and xenophobia”
- In doing so, platforms need to regulate such content based on “proportionality”
- In assessing proportionality, VSPs need to take into account the size and nature of the service, the type of harm the content could cause, the characteristics of the users exposed to it, and the implications for freedom of expression
- VSPs need to implement an “out of court redress mechanism” to allow for user appeal of content that may have been removed erroneously.
- Ofcom can request that VSPs share information detailing the measures they have taken in response to complaints
- Ofcom can serve enforcement notices and impose financial penalties of up to £250,000 or 5% of the company’s “qualifying revenue” (see the illustrative sketch below this list)
- Ofcom has stated that in the “early regulatory period”, it will only use its enforcement powers in instances of serious non-compliance, such as an absence of any measures taken by a VSP
- However, it is unclear what will happen when this “early regulatory period” ends
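To make the penalty ceiling concrete, below is a minimal Python sketch of how the cap could be computed. It assumes that the applicable maximum is the greater of the two figures named above and treats “qualifying revenue” as a single annual amount; both are our reading of Part 4B of the Communications Act rather than anything stated in Ofcom’s guidance.

```python
def max_penalty_gbp(qualifying_revenue_gbp: float) -> float:
    """Illustrative ceiling on an interim-regime fine: the greater of
    a fixed 250,000 GBP figure and 5% of qualifying revenue.
    (Assumption: the cap is the greater of the two, and qualifying
    revenue is a single annual figure.)"""
    FIXED_CEILING_GBP = 250_000   # fixed figure named in the regime
    REVENUE_SHARE = 0.05          # 5% of qualifying revenue
    return max(FIXED_CEILING_GBP, REVENUE_SHARE * qualifying_revenue_gbp)

# A VSP with 10m GBP qualifying revenue would face a cap of 500,000 GBP;
# one with 1m GBP falls back to the fixed 250,000 GBP figure.
print(max_penalty_gbp(10_000_000))  # 500000.0
print(max_penalty_gbp(1_000_000))   # 250000.0
```

On this reading, the 5% limb only bites for services with qualifying revenue above £5 million; smaller VSPs are effectively subject to the fixed cap.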
Online Harms proposal:
- The proposed legislation will cover a wide range of “harmful content”, including that which involves child sexual exploitation, cyberbullying, incitement to violence, encouragement of suicide, and terrorist and extremist content
- A two-tier system will be imposed, with terrorist content and child sexual exploitation requiring more extensive action by tech companies than other harms
- All tech platforms that permit online interactions and sharing of content will fall within the legal remit of the new mandatory duty of care to protect users from viewing harmful content online. The proposal suggests the following requirements for tech companies to uphold the duty of care:
  - Update Terms of Service (ToS) to explicitly state which content they deem appropriate (or inappropriate) on their platforms
  - Produce annual transparency reports
  - Introduce an easy-to-access user complaints function
  - Respond to user complaints within an “appropriate timeframe” (to be set by Ofcom)
- A “tiered enforcement system” will be implemented for companies that fail to uphold the “duty of care”, escalating through:
  - Substantial fines (no amount has been specified yet)
  - Blocking of sites
  - Criminal liability for members of a platform’s senior management
  - Internet Service Provider (ISP) blocking for the most “severe” cases
UK counterterrorism legislation:
The Terrorism Act 2000, the Terrorism Act 2006 & the Counter-Terrorism and Border Security Act 2019
In an amendment to section 58 of the Terrorism Act 2000, introduced by the Counter-Terrorism and Border Security Act, viewing terrorist content online just once can carry a sentence of up to 15 years in prison. However, penalisation depends on the viewer knowing the purpose of that content (it being terrorist in nature) and lacking a reasonable excuse (such as journalistic or academic work).
The former Independent Reviewer of Terrorism Legislation, Max Hill, raised questions about the proposed amendment to section 58 of the Terrorism Act when it was put forward in 2017. In a joint response, he and Professor Clive Walker of Leeds University School of Law asked whether an amendment was needed in the first place. They concluded that the existing sections 1 (encouragement of terrorism), 2 (dissemination of terrorist publications), and 5 (preparation of terrorist acts) of the Terrorism Act 2006 were sufficient for prosecuting and criminalising the online viewing of terrorist content, and so argued that the amendment was unnecessary.
The Independent Reviewer subsequently considered the proposed amendment, which at that time still set out to criminalise “repeated viewing” of terrorist content on the Internet. On this premise, the Independent Reviewer identified that the law had the potential to “catch far too many people”. However, as mentioned above, the final Act went a step further, dropping the “repeated viewing” element and criminalising one-off viewing of terrorist material. The Independent Reviewer’s concerns were publicly shared by civil society groups, who cautioned that the law might have a detrimental impact on freedom of speech.
The Independent Reviewer’s original criticism also identified potential issues with users having to understand the “purpose of content” for the law to be effective, arguing that viewing terrorist content does not necessarily mean that a user understands its purpose. This line of criticism also applies to sharing and disseminating content, as again, users might not be aware that the content exists for “terrorist purposes”.
Furthermore, the United Nations Special Rapporteur on human rights and counter-terrorism, Professor Fionnuala Ní Aoláin, criticised the Counter-Terrorism and Border Security Act 2019 for being based on a “conveyor-belt” understanding of radicalisation, pointing out that there is little academic support for the theory that an individual will become radicalised by viewing terrorist content alone. Ní Aoláin also stated that whilst there are some protections for academics and journalists, other users’ right to impart, seek, and receive information will be infringed.
Online Harms White Paper
The Online Harms White Paper was published in April 2019 by the Home Office and the DCMS.
The proposed legislation has not yet entered Parliament, but a consultation process was held in 2019. In total, 2,400 responses were received from a broad range of stakeholders, including larger and smaller tech companies, governments, academics, think tanks, civil society groups, and publishers.
The White Paper covers a broad range of online harms, distinguishing between “potentially harmful content” and “illegal content”. Illegal content includes child sexual exploitation as well as terrorist content. This distinction was made to ensure the proportionality of the legislation, meaning that the most extreme content requires “further action” from platforms. However, the proposal does not define terrorist content or specify what going “further” entails. It limits itself to suggesting that content removal should be preferred for illegal content, whilst other online harms should be addressed by other “content processes in place by tech companies”.[2]
The proposed legislation has received criticism in the following areas:
- Human rights and rule of law concerns: civil society groups, law practices and tech initiatives have criticised the lack of clarity on what constitutes “online harms”. This would leave tech companies with the responsibility of adjudicating what constitutes illegal content and what classifies as “potentially harmful content”, without guidelines on how to assess such material. Given the enforcement mechanism that awaits tech companies if they fail to identify and address content effectively (high fines as well as potential liability), civil society groups such as Article 19 have warned that this may incentivise companies to err on the side of removal for both potentially illegal and “harmful” content. This risks the removal of legal and innocuous content, thus hindering digital rights, particularly freedom of speech. It also risks content being labelled “potentially harmful” or even illegal online despite being legal offline. Finally, the Global Network Initiative has warned against imposing liability on tech companies on the basis that it is likely to lead to the over-removal of content rather than tackling the underlying drivers of terrorist content on the Internet.
- Competition, capacity and innovation concerns: tech initiatives such as Coadec and TechUK have highlighted that smaller tech companies might not have the ability or resources to comply with the proposed requirements, which they say risks harming competition and innovation.
- Legal concerns: legal experts have questioned the legality of imposing potential intermediary liability on managers at tech platforms, and in particular how the Online Harms legislation will work alongside the E-Commerce Directive 2000, which protects tech companies from liability. Legal critics have also raised concerns over the steep fines and the over-removal of content these might encourage. In addition, Article 15 of the same Directive stipulates that Member States cannot impose general monitoring obligations on Internet platforms, which raises questions about the extent to which the proposed legislation will uphold the Directive.[3] Civil society groups have added that the UK Communications Act, which ensures the protection of freedom of speech, risks being undermined by the proposed legislation.
The Interim Regime
The Interim Regime will work to ensure that the UK upholds its obligations under the EU’s AVMSD until the Online Harms legislation is passed. As such, the Interim Regime applies to all UK VSPs. The EU updated the AVMSD, which governs Union-wide coordination of national legislation on audiovisual services (such as television broadcasts), in 2018 to include VSPs. It requires Member States to ensure that VSPs operating under their jurisdiction comply with the requirements set out in the AVMSD, including preventing the dissemination of terrorist content. The European Commission has specified that VSP status primarily concerns platforms that have the sharing of user-generated video content either as their main purpose or as one of their core purposes, meaning that in theory the AVMSD could apply to social media platforms as well.
Similar to the feedback on the Online Harms White Paper, criticism of the Interim Regime raised by legal experts, civil society groups, and tech companies centres on the enforcement mechanisms, which might lead to over-removal and potentially hinder competition and innovation, as well as on the lack of definitional clarity around harmful content, particularly terrorist content.
However, Ofcom’s most recent guidance for VSPs specifies that its first priority is to work with VSPs to strengthen existing measures or implement new ones in order to comply with the interim regime during its “early regulatory phase”. In addition, Ofcom has provided guidance on how to determine proportionality between the action taken by a VSP and the level of harm of a particular piece of content. Ofcom stipulates that a VSP’s size will be taken into account both in assessing its compliance and in Ofcom’s enforcement mechanism. Whilst this guidance clarifies some of the new requirements placed on VSPs, it is likely to change throughout the early regulatory phase.
Tech Against Terrorism responded to Ofcom’s consultation process on the regulation of VSPs, which concluded in September; our response can be found here.
[1] Restricted material constitutes “videos which have or would be likely to have an R18 certificate, or which have been or would likely be refused a certificate. It also means other material that might impair the physical, mental or moral development of persons under the age of 18”.
[2] UK Consultation Report.
[3] The UK only has existing obligations to the European directives for the duration of the Brexit negotiations; therefore, the legal concerns might become less relevant. However, whilst the UK might not have to fulfil the European directives, potential implications for freedom of speech and intermediary liability are still valid for post-Brexit Britain.
Resources
Article 19 (2019), Response to the Consultations on the White Paper on Online Harms.
Global Network Initiative (2020), Content Regulation and Human Rights.
Human Rights Watch (2020), Social Media Platforms Remove Evidence of War Crimes.
Natasha Lomas (2019), UK Sets Out Safety-focused Plan to Regulate Internet Firms, TechCrunch.
Osborne Clarke (2020), Online Harms Regulation | Clarity Awaited but Reforms Set to Be Delayed.
Tech Against Terrorism (2020), Summary of Tech Against Terrorism’s Response to Ofcom’s Consultation Process on the Regulation of Video-Sharing Platforms.
UK Government (2020), Online Harms White Paper – Initial Consultation Response.
Vinous Ali (2019), TechUK Comments on the Government’s New Online Harms White Paper, TechUK.