November 2021 update to Canada’s regulatory framework
Canada’s approach to online regulation had, until 2020, supported tech sector self-regulation rather than government-led regulation. However, concerns over online hate speech and extremism, and over online foreign interference in Canadian politics, have prompted the government to shift its approach. The 2020 report “Canada’s Communications Future: Time to Act” (BTLR) outlined recommendations for a new legal framework for online platforms and content. In July 2021, the Ministry of Canadian Heritage[mfn]Former Minister of Canadian Heritage, Steven Guilbeault, was tasked in his 2019 mandate letter by Prime Minister Justin Trudeau with developing a new regulatory framework for social media. Following the September 2021 federal elections, Guilbeault was replaced by Pablo Rodriguez as Minister of Canadian Heritage. See: Boutilier Alex (2021), Trudeau’s heritage minister has a chance to reset social media regulations. Will he take it?, Global News. The Ministry of Canadian Heritage has responsibility for promoting “Canadian identity and values, cultural development, and heritage”. See: Government of Canada, Canadian Heritage: Raison d’être, mandate and role – Canadian Heritage.[/mfn] published a technical paper detailing the provisions of a future bill to counter harmful content on social media. The provisions set out in the technical paper exhibit several of the regulatory trends observed globally, including a short removal deadline for platforms to assess and act on reports of harmful content.
Please note that this entry focuses on the proposal for a bill addressing harmful online content on communication platforms. The key takeaways and commentary below are based on this proposal. A more complete overview of Canada’s regulatory framework can be found in the Online Regulation Series Handbook and at the end of this blog.[mfn]Tech Against Terrorism published its first analysis of Canada’s online regulatory framework in October 2020. At the time, Prime Minister Justin Trudeau had already tasked Heritage Minister Steven Guilbeault with developing a new regulatory framework for social media, and the January 2020 report “Canada’s Communications Future: Time to Act” had outlined recommendations for social media regulation. Tech Against Terrorism provided a first update to its analysis of Canada’s online regulation in the first edition of the Online Regulation Series Handbook, published in July 2021. The Handbook update covers Canada’s announcement of its strategy to tackle online hate speech and other harms, including Bill C-36 amending the Canadian Human Rights Act and the Criminal Code, as well as what was known at the time of the planned harmful content bill. See: Tech Against Terrorism (2020), The Online Regulation Series: Canada; Bill C-36: Act to amend the Criminal Code and the Canadian Human Rights Act and to make related amendments to another Act (hate propaganda, hate crimes and hate speech).[/mfn]
Proposal for legislation on harmful content – Background
In July 2021, the Ministry of Canadian Heritage opened a consultation on its upcoming bill on addressing harmful online content.[mfn]The public consultation closed on 25 September 2021. See: https://www.canada.ca/en/canadian-heritage/campaigns/harmful-online-content.html[/mfn] To provide details and background, Canadian Heritage published alongside the consultation a technical paper outlining the future provisions of the bill, as well as a discussion guide summarising its key requirements. The discussion guide explained that the bill was to be introduced in the autumn of 2021; however, as of early November 2021, no major progress on the bill has been made.[mfn]The slowdown in the legislative process is likely related to the September 2021 snap elections. See: https://slate.com/technology/2021/11/internet-safety-vs-internet-freedom.html[/mfn]
The stated aim of the bill is to protect Canadians from harmful online content by requiring online communications platforms to block access to such content in Canada. The bill also includes provisions to facilitate the identification and investigation of threat actors by law enforcement, requiring tech companies to monitor their services for harmful content and to share related information with the authorities.
In its presentation of the bill, the Canadian Government argued that it “prioritiz[es] a safe, open, and inclusive Internet” alongside a regulatory approach that “upholds and protects human rights, while also respecting fundamental freedoms, notably freedom of expression”. The bill is a key pillar of Canada’s strategy to “better protect Canadians from hate speech and online harms”. As part of this strategy, the Government also introduced Bill C-36, amending the Canadian Human Rights Act and the Criminal Code to redefine hate speech and hatred and to provide additional tools to prevent and remedy hate speech and hate crimes – see the Handbook entry below.
Legislative proposal concerning harmful content – Key takeaways for tech companies
Legislative proposal concerning harmful content – Tech Against Terrorism’s commentary
Canada’s proposed formal framework would require tech companies to monitor and remove certain harmful content, and therefore represents a significant shift in the government’s approach to online regulation, away from self-regulation and cross-sector initiatives. This shift reflects the recommendations laid out in the 2020 BTLR report (see Handbook entry below), which called for “legislation with respect to liability of digital providers for harmful content and conduct using digital technologies” to counter the spread and amplification of harmful content.[mfn]Although the term is not defined in the BTLR, the scope of harmful content to be addressed by tech companies appears to be limited to five categories of illegal content: “Hate speech, terrorist content, content that incites violence, child sexual exploitative content and non-consensual sharing of intimate content.” Elghawaby Amira (2021), Canada is Bringing in New Legislation to Stop the Spread of Online Hate. Here’s how it can work., Press Progress.[/mfn]
The Canadian Government’s plan to regulate social media content has been criticised by digital rights experts and by Canada’s political opposition.[mfn]Changes to the regulatory framework are proposed by the current Liberal government of Prime Minister Justin Trudeau and have been opposed by the country’s opposition parties, in particular the Conservatives.[/mfn] Critics of the proposed bill have focused on two concerns in particular: firstly, the risk that the bill inadvertently facilitates an online surveillance system; and secondly, its potentially negative impact on freedom of expression.
A melting-pot of regulatory trends
Canada’s proposed regulation on harmful content exhibits almost all of the key global regulatory trends identified in the first edition of the Online Regulation Series Handbook, including:[mfn]For more information on regulatory trends identified by Tech Against Terrorism, please see Section 1, “The State of Online Regulation” (pp. 13-29) in the first edition of the Online Regulation Series Handbook.[/mfn]
The presence of these regulatory trends is partly explained by the Canadian government having consulted with other governments that have passed, or are in the process of passing, laws on illegal and harmful online content, including Germany, Australia, and France,[mfn]Gowling WLG (2021), Online harms: Federal government announces new rules and regulator, Lexology.com[/mfn] all of which have adopted regulations on countering harmful content online in recent years. The similarities between Canada’s proposed bill and online regulation elsewhere have been highlighted (and criticised) by tech policy and legal experts, including the UN Special Rapporteur on Freedom of Expression and digital rights advocates, for potentially encouraging over-removal of content and for posing a risk to freedom of expression online. The Electronic Frontier Foundation (EFF) highlights the similarities with the most stringent requirements of France’s cyberhate law, which were struck down by France’s Constitutional Council.[mfn]McSherry Corynne and Rodriguez Katitza (2021), O (No!) Canada: Fast-Moving Proposal Creates Filtering, Blocking and Reporting Rules—and Speech Police to Enforce Them, Electronic Frontier Foundation. Regarding France’s cyberhate law and its censuring by the Constitutional Council, see our analysis of France’s regulatory framework in the Online Regulation Series Handbook (pp. 84-90)[/mfn]
By introducing a 24-hour removal requirement, the bill mirrors the short removal deadlines passed in Germany, Turkey, and at the EU level. However, as platform regulation expert Daphne Keller notes in her analysis of the proposal, Canada’s bill would be particularly stringent in requiring platforms to both correctly assess the harmfulness, and therefore potential illegality, of the content and remove it within 24 hours.[mfn]Germany, however, allows seven days to assess content that is not clearly illegal, whereas the one- and four-hour removal requirements in the EU and Turkey respectively only apply to orders issued by competent authorities.[/mfn] As Tech Against Terrorism and other tech policy experts have cautioned with regard to other regulation, the combination of short removal deadlines and hefty fines is likely to encourage tech companies to err on the side of over-removal to avoid penalties, and therefore to remove content that is neither harmful nor illegal.
The requirement to use automated tools to monitor online platforms for harmful content is also present in other regulation passed since 2017. Pakistan and India include similar requirements, whereas EU and German regulations incentivise platforms to use automated tools. However, Canada’s provision on automated tools stands out for the breadth of its application, effectively requiring tech companies to monitor their services systematically for the five categories of harmful content. Of the regulations analysed in the Online Regulation Series since 2020, Pakistan’s 2020 Social Media Rules are the most similar to Canada’s proposal in breadth of scope, with their requirement for tech companies to prevent the uploading and livestreaming of terrorist and extremist content, hate speech, and content inciting violence.
Experts Daphne Keller and Michael Geist[mfn]See: Geist Michael (2021), Submission to the Government of Canada Consultation on the Proposed Approach to Address Harmful Content Online; and Failure to Balance Freedom of Expression and Protection from Online Harms: My Submission to the Government’s Consultation on Addressing Harmful Content Online.[/mfn] have raised concerns that the requirement to use automated tools, coupled with the provision on sharing information with law enforcement, risks harming communities that are already marginalised and discriminated against in society.[mfn]See: Keller Daphne (2021), Five Big Problems with Canada’s Proposed Regulatory Framework for “Harmful Online Content”, Tech Policy Press; and Hatfield Matt (2021), A First Look at Canada’s Harmful Content proposal, Open Media.[/mfn] According to Keller and Geist, the bill risks having a disproportionate impact on communities affected by the societal bias reflected in automated and filtering tools, and “a disparate impact on those individuals and communities who already face structural oppression in the criminal justice system.”[mfn]Bennett Owen (2021), Mozilla suggests improvements to Canada’s online harms agenda, Mozilla Blog.[/mfn] The requirement to share information with law enforcement has also been criticised, with Keller highlighting that such a requirement “effectively deputizes platforms to invade users’ privacy and free expression rights in ways that the government, acting alone, cannot” and is “unprecedented” in democratic countries.[mfn]Keller does note that a similar reporting rule was introduced in Germany in 2021, but that this rule is currently being challenged.[/mfn]
Uncertainties as to the scope of the law in practice
Whilst Tech Against Terrorism recognises that the technical paper published by the Canadian Government is not the law in its final form, the flexibility and breadth of the bill as presented in the technical paper create uncertainty about what the law will mean in practice and about the scope of its future application, in particular the various powers of “regulation” delegated to the executive branch of the Canadian Government.
The discussion guide explains that the law is mainly targeted at major tech companies offering online communications services in Canada – the likes of Facebook, TikTok and Pornhub are specifically mentioned – whilst excluding private communications. However, there is no threshold delimiting the scope of the law’s application by platform size, whether in terms of user base or available resources. The bill also lacks a precise definition of what constitutes “private communications”, and therefore does not make clear which platforms will be excluded from the obligation to monitor their services for harmful content.
As EFF noted in its criticism of the law, there is also a risk that legislators may decide that private chat groups with a large number of participants are public in nature. Private communications, including those protected by end-to-end encryption (E2EE), would thus fall within the scope of the law’s requirement for platforms to monitor harmful content. This approach, previously adopted in other countries, undermines E2EE and the online privacy and security it guarantees.[mfn]On the risks to online security and privacy presented by requirements to monitor E2EE-protected communications, see Tech Against Terrorism’s report on “Terrorist Use of E2EE: State of Play, Misconceptions, and Mitigation Strategies”.[/mfn]
The definition of terrorist content in the technical paper is circular: it defines terrorist content as content that “actively encourage[s] terrorism and which is likely to result in terrorism”.[mfn]A similar criticism of a circular definition of terrorist and harmful content was raised by Tech Against Terrorism with regard to the draft Online Safety Bill (OSB) in the UK. See our entry on the UK regulatory framework in the Online Regulation Series Handbook (pp. 107-116), and our submission to the UK draft OSB consultation.[/mfn] This, coupled with the requirement for platforms to assess whether content is harmful and therefore illegal under the bill, effectively delegates the adjudication of what is harmful or illegal to private tech companies, when limits to freedom of expression should be adjudicated by independent judicial authorities in line with international human rights standards. This delegated responsibility means that platforms are likely to err on the side of caution by over-removing content, possibly removing legal and non-harmful material in the process, to avoid being sanctioned.
The technical paper also specifies that further requirements may be added to the law by statutory instrument, and that the law may be modified by the same means – for instance, by amending the removal deadline depending on the category of content. The breadth and flexibility of the law are evident in such provisions, which risk a constant expansion of executive power in Canada.[mfn]McSherry and Rodriguez (2021)[/mfn]
“Sweeping” oversight powers
Keller and Geist, as well as other tech policy and legal experts, have criticised the proposed bill for creating a hefty bureaucratic and regulatory structure with what they call “sweeping regulatory powers”.[mfn]Keller (2021)[/mfn] The proposed Digital Safety Commissioner’s power to grant an investigator access to any document, anywhere and at any time (if the document is believed to be relevant to content moderation), has been criticised as overly intrusive and, according to Keller, as a “glaring restraint problem” for freedom of expression.[mfn]Keller (2021)[/mfn]
Experts have also criticised the Commissioner’s proposed authority to petition the federal court to block access to a non-compliant online communication service provider (OCSP).[mfn]Stevens Yuan (2021), To protect our privacy and free speech, Canada needs to overhaul its approach to regulating online harms, The Conversation[/mfn] Provisions to block access to platforms are rarely included in online regulation, one notable exception being Indonesia’s Ministerial Regulation 5 (MR5), which grants the Indonesian authorities the power to block access to non-compliant platforms.
The Online Regulation Series 2020 | Canada
Canada’s regulatory framework:
Relevant national bodies:
Key takeaways for tech platforms: