
THE ONLINE REGULATION SERIES 2021 | CANADA (Update)

Written by Adam Southey | Nov 24, 2021 5:34:39 PM

November 2021 update to Canada’s regulatory framework

Canada’s approach to online regulation had, until 2020, favoured tech sector self-regulation over government-led regulation. However, concerns over online hate speech and extremism, and over online foreign interference in Canadian politics, have prompted the government to shift its approach. The 2020 report “Canada’s Communications Future: Time to Act” (BTLR) outlined recommendations for a new legal framework for online platforms and content. In July 2021, the Ministry of Canadian Heritage[mfn]Former Minister of Canadian Heritage Steven Guilbeault was tasked in his 2019 mandate letter by Prime Minister Justin Trudeau with developing a new regulatory framework for social media. Following the September 2021 federal elections, Guilbeault was replaced by Pablo Rodriguez as Minister of Canadian Heritage. See: Boutilier Alex (2021), Trudeau’s heritage minister has a chance to reset social media regulations. Will he take it?, Global News. The Ministry of Canadian Heritage has responsibility for promoting “Canadian identity and values, cultural development, and heritage”. See: Government of Canada, Canadian Heritage: Raison d’être, mandate and role – Canadian Heritage.[/mfn] published a technical paper detailing the provisions of a future bill to counter harmful content on social media. The provisions set out in the technical paper exhibit several of the regulatory trends observed globally, including a short removal deadline for platforms to assess and act on reports of harmful content.

Please note that this entry focuses on the proposal for a bill on addressing harmful online content on communication platforms. The key takeaways and commentary below are based on this proposal. A more complete overview of Canada’s regulatory framework can be found in the Online Regulation Series Handbook and at the end of this blog.[mfn]Tech Against Terrorism published its first analysis of Canada’s online regulatory framework in October 2020. At the time, Prime Minister Justin Trudeau had already tasked Heritage Minister Steven Guilbeault with developing a new regulatory framework for social media, and the January 2020 report “Canada’s Communications Future: Time to Act” had outlined recommendations for social media regulation. Tech Against Terrorism provided a first update to its analysis of Canada’s online regulation in the first edition of the Online Regulation Series Handbook, published in July 2021. The Handbook update covers Canada’s announcement of its strategy to tackle online hate speech and other harms, including Bill C-36, which would amend the Canadian Human Rights Act and the Criminal Code, as well as what was known at the time about the planned harmful content bill. See: Tech Against Terrorism (2020), The Online Regulation Series: Canada; Bill C-36: Act to amend the Criminal Code and the Canadian Human Rights Act and to make related amendments to another Act (hate propaganda, hate crimes and hate speech).[/mfn]

Proposal for legislation on harmful content – Background

In July 2021, the Ministry of Canadian Heritage opened a consultation on its upcoming bill on addressing harmful online content.[mfn]The public consultation closed on 25 September 2021. See: https://www.canada.ca/en/canadian-heritage/campaigns/harmful-online-content.html[/mfn] To provide details and background on the bill, Canadian Heritage published alongside the consultation a technical paper outlining the future provisions of the bill, as well as a discussion guide summarising its key requirements. The discussion guide explained that the bill was to be introduced in the autumn of 2021; however, as of early November 2021, no major progress on the bill had been made.[mfn]The slowdown in the legislative process is likely related to the September 2021 snap elections. See: https://slate.com/technology/2021/11/internet-safety-vs-internet-freedom.html[/mfn]

The stated aim of the bill is to protect Canadians from harmful online content by requiring online communications platforms to block access to such content in Canada. The bill also includes provisions to facilitate law enforcement’s identification and investigation of threat actors by requiring tech companies to monitor their services for harmful content and to share related information with the authorities.

In its presentation of the bill, the Canadian Government argued that it “prioritiz[es] a safe, open, and inclusive Internet” alongside a regulatory approach that “upholds and protects human rights, while also respecting fundamental freedoms, notably freedom of expression”. The bill is a key pillar of Canada’s strategy to “better protect Canadians from hate speech and online harms”. As part of this strategy, the Government also introduced Bill C-36, which amends the Canadian Human Rights Act and the Criminal Code to redefine hate speech and hatred and provides additional tools to prevent and remediate hate speech and hate crimes – see the Handbook entry below.

Legislative proposal concerning harmful content – key takeaways for tech companies

  • The bill will regulate “online communication services providers” (OCSPs), which are defined as services whose primary purpose is communication between users, and will be applicable to all online communication platforms providing services in Canada. The Canadian government specifically mentions platforms such as Facebook, Instagram, Twitter, YouTube, TikTok, and Pornhub in its discussion guide. Private communications and online services that do not qualify as “communication” (for instance “fitness applications or travel review websites”) are not in the scope of the bill. The technical paper also specifies that the government will be able to include or exclude certain categories of OCSPs from being covered by the bill.

  • The discussion guide that accompanies the bill specifies that the definitions of harmful content will be based on existing Canadian law, including the Criminal Code. However, the guide also states that the definitions will be modified to suit a regulatory purpose distinct from their normal application in criminal law. To counter “the most egregious kinds of harmful content online”, the bill includes five categories of online content that tech platforms must block in Canada.
    1. Terrorist content – defined in the technical paper as “content that actively encourages terrorism and which is likely to result in terrorism.”
    2. Incitement to violence
    3. Hate speech
    4. Non-consensual sharing of intimate images
    5. Child sexual exploitation

  • The technical paper outlines key duties that OCSPs must fulfil to counter harmful content and ultimately render it inaccessible in Canada:
    • OCSPs are to actively monitor their services for the five categories of harmful content and block such content in Canada. The technical paper specifies that monitoring tools can include automated and algorithmic solutions.

    • Platforms will need to implement a user reporting process for flagging harmful content. Platforms will have to review user reports, assess them against the bill’s definitions of harmful content and, within 24 hours, prevent access in Canada to content assessed to be harmful. Once a decision on the content has been made, the platform is to notify both the author and the flagger of the content, giving each the right to appeal the decision. To ensure that users can both report harmful content and appeal moderation decisions, platforms will have to introduce clear mechanisms available to all users.

  • The timeframe for blocking certain content in Canada may be varied by statutory instrument – specifically, by means of orders issued by the Governor in Council.[mfn]In Canada, the Governor in Council refers to the “Governor General acting by and with the advice of the Queen’s Privy Council for Canada”, in practice acting on advice given by the federal cabinet. See: https://www.constitutionalstudies.ca/2019/07/governor-in-council/[/mfn] It is likely that timeframes will vary according to the category of content.

  • The technical paper also outlines transparency requirements for platforms “to publish information that they do not currently publish”.
    • The bill would require OCSPs to publish clear community guidelines applicable to each of the five categories of harmful content, as well as information on content moderation.[mfn]Including information on policy development and enforcement, and data on volume and type of content dealt with at different steps of the moderation process.[/mfn]

    • The bill would also require platforms to publish Canada-specific transparency reports, including information on the moderation capacity dedicated to Canada and the impact of automated systems used to moderate content in Canada, as well as information on the process and results of moderation enforcement in Canada.[mfn]Platforms would also have to include information pertaining to: the impact of automated systems to moderate and block access to harmful content in the country; the volume and type of harmful and generally violative content accessible from Canada; and the monetization of harmful content.[/mfn]

  • The technical paper includes provisions on the sharing of information between OCSPs and law enforcement.[mfn]The Canadian government justifies the provisions on sharing information with law enforcement with the need for “appropriate investigative and preventive action”, and in particular to mitigate the risks of terrorists and violent extremists moving to encrypted and unmoderated platforms as a result of OCSPs’ moderation.[/mfn] Of particular note is the requirement for OCSPs to notify law enforcement and the Canadian Security and Intelligence Service (CSIS) of certain types of content. The discussion guide outlines two possible approaches for this reporting:
    • A non-obligatory approach, under which platforms may report content in instances of “imminent” or “serious” harm, with the precise thresholds for reporting yet to be defined. The discussion guide states that under this approach platforms will have no obligation to report content, even “if noticeably illegal content is likely to lead to violence or terrorist activity”.

    • A mandatory reporting requirement for the five categories of harmful content, with the legal threshold likely to vary depending on the category of content – for instance, a lower threshold for reporting terrorist and violent extremist content than for criminal hate speech.

  • OCSPs will also have to preserve certain information to support investigations. The types of information that can be preserved will be determined via regulation issued under the authority of the Governor in Council, and preservation requests will have to be issued by judicial authorities.

  • The technical paper outlines financial penalties and blocking sanctions for platforms that do not comply:
    • OCSPs failing to comply with certain provisions of the bill will face fines of up to approximately $21 million[mfn]This is equivalent to 25 million Canadian dollars.[/mfn] or 5% of their gross global revenue – whichever is higher (see the illustrative sketch after this list).

    • Platforms that “repeatedly demonstrate persistent non-compliance” with orders to remove terrorist and/or child sexual abuse content risk having access to their services blocked in Canada.

  • The bill will create a new “Digital Safety Commission of Canada”, an overarching commission supervising the three new oversight and enforcement bodies of the regulatory regime:

  1. The Digital Safety Commissioner of Canada, in addition to its supervisory function, will participate in research and stakeholder processes to reduce harmful content online.
    • The Commissioner’s office will be the principal assessor of compliance and will have the power to issue fines for non-compliance, and to petition the Canadian Federal Court to issue a filtering or blocking order to prevent access to an OCSP in Canada.

    • As part of the Commissioner’s oversight and enforcement powers, the technical paper includes the power for an inspector[mfn]The technical paper does not specify who can be considered an “inspector”.[/mfn] to access any document or information (including computer algorithms and software) that could be relevant to assessing a platform’s compliance and to preventing non-compliance.[mfn]Compliance being with “the Act, regulations, decisions and orders”.[/mfn] More specifically, this power allows an inspector to “enter, at any reasonable time, any place in which they believe on reasonable grounds there is any document, information or any other thing, including computer algorithms and software [relevant to an investigation]”.

    • The Commissioner will also publish public reports on tech companies’ compliance with the law.

    • OCSPs will be able to consult the Commissioner for general advice on moderation practices related to harmful content, though OCSPs will not be allowed to seek advice on specific decisions.

  2. The Digital Recourse Council of Canada will act as a last-resort appeal body for users who have exhausted all appeal processes offered by the OCSP itself. The Recourse Council’s decision on whether content is harmful will be binding,[mfn]If the Recourse Council assesses the content to be harmful, the platform will have to block it in Canada. If the Recourse Council finds the content not to be harmful, the platform will have the possibility to moderate it based on its own Community Guidelines.[/mfn] and the Recourse Council is to publicly report on its decisions to ensure a transparent process.
    • The Recourse Council will consist of three to five members appointed by the Governor in Council.

  3. An Advisory Board will be created to provide the Commissioner and the Recourse Council with expert advice on the tech sector’s content moderation practices and related emerging trends. The Advisory Board will have up to seven members appointed by the Minister.

  • The bill will also include a provision on the Christchurch Call to Action’s Incident Response Protocol, concerning terrorist content linked to real-world attacks in Canada and abroad. The specifics of the Incident Response Protocol will be set out by statutory instrument.

  • The bill will amend Canada’s legal framework on child sexual abuse material, as well as the Canadian Security and Intelligence Service Act (CSIS Act), to streamline the judicial process for obtaining information on online threat actors.
    • Regarding violent extremist actors, the amendment to the CSIS Act would authorise CSIS to more swiftly identify violent extremists and mitigate the threat they represent. Authorisation would be granted by a Federal judge, and ministers would remain accountable for the overall process.
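
To make the penalty structure described above concrete, the following is a minimal illustrative sketch (in Python) of the “whichever is higher” calculation. The CAD $25 million cap and the 5% rate come from the technical paper as summarised above; the function name and the revenue figures in the example are hypothetical and used purely for illustration, not drawn from the proposal.

```python
# Illustrative sketch only -- not part of the proposed bill or any official guidance.
# It shows how the "up to CAD $25 million or 5% of gross global revenue,
# whichever is higher" maximum penalty would be computed.

CAP_CAD = 25_000_000      # fixed maximum fine (CAD), per the technical paper
REVENUE_SHARE = 0.05      # 5% of gross global revenue, per the technical paper


def max_penalty_cad(gross_global_revenue_cad: float) -> float:
    """Return the maximum fine an OCSP could face under the proposed formula."""
    return max(CAP_CAD, REVENUE_SHARE * gross_global_revenue_cad)


# Hypothetical examples: a smaller platform vs. a very large one.
print(max_penalty_cad(100_000_000))     # 25,000,000 -- the fixed cap is higher
print(max_penalty_cad(10_000_000_000))  # 500,000,000.0 -- 5% of revenue is higher
```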

Legislative proposal concerning harmful content – Tech Against Terrorism’s commentary

Canada’s proposed formal framework would mandate tech companies to monitor and remove certain harmful content, and therefore represents a significant shift in the government’s approach to online regulation, away from self-regulation and cross-sector initiatives. This shift reflects the recommendations laid out in the 2020 BTLR report (see the Handbook entry below), which called for “legislation with respect to liability of digital providers for harmful content and conduct using digital technologies” to counter the spread and amplification of harmful content.[mfn]“Harmful content” is left undefined in the BTLR; the scope of harmful content to be addressed by tech companies appears to be limited to five categories of illegal content: “Hate speech, terrorist content, content that incites violence, child sexual exploitative content and non-consensual sharing of intimate content.” Elghawaby Amira (2021), Canada is Bringing in New Legislation to Stop the Spread of Online Hate. Here’s how it can work., Press Progress.[/mfn]

The Canadian Government’s plan to regulate social media content has been criticised by digital rights experts and by Canada’s political opposition.[mfn]The changes to the regulatory framework are proposed by the current Liberal government of Prime Minister Justin Trudeau and have been opposed by the country’s opposition parties, in particular the Conservatives.[/mfn] Critics of the proposed bill have focused in particular on, first, the risk that the bill inadvertently facilitates an online surveillance system and, second, the bill’s potentially negative impact on freedom of expression.

A melting-pot of regulatory trends

Canada’s proposed regulation on harmful content exhibits almost all key global regulatory trends identified in the first edition of the Online Regulation Series Handbook, including:[mfn]For more information on regulatory trends identified by Tech Against Terrorism, please see Section 1, “The State of Online Regulation” (pp. 13-29) in the first edition of the Online Regulation Series Handbook.[/mfn]

  • Lack of consideration for smaller platforms

  • Short removal deadlines

  • Increased reliance on automated moderation

  • Outsourcing legal adjudication

  • Transparency requirements

The presence of these regulatory trends is partly explained by the Canadian government having consulted with other governments that have passed, or are in the process of passing, laws on illegal and harmful online content – including Germany, Australia and France,[mfn]Gowling WLG (2021), Online harms: Federal government announces new rules and regulator, Lexology.com[/mfn] which have all adopted regulations countering harmful content online in recent years. The similarities between Canada’s proposed bill and online regulation elsewhere have been highlighted (and criticised) by tech policy and legal experts, including the UN Special Rapporteur on Freedom of Expression and digital rights advocates, for potentially encouraging over-removal of content and for presenting a risk to freedom of expression online. The Electronic Frontier Foundation (EFF) highlights the similarities with the most stringent requirements of France’s cyberhate law, which were struck down by France’s Constitutional Council.[mfn]McSherry Corynne and Rodriguez Katitza (2021), O (No!) Canada: Fast-Moving Proposal Creates Filtering, Blocking and Reporting Rules—and Speech Police to Enforce Them, Electronic Frontier Foundation. Regarding France’s cyberhate law and its censuring by the Constitutional Council, see our analysis of France’s regulatory framework in the Online Regulation Series Handbook (pp. 84-90).[/mfn]

By introducing a 24-hour removal requirement, the bill mirrors the short removal deadlines passed in Germany and Turkey and at the EU level. However, as platform regulation expert Daphne Keller notes in her analysis of the proposal, Canada’s bill would be particularly stringent in requiring platforms to both correctly assess the harmfulness, and therefore potential illegality, of content and remove it within 24 hours.[mfn]Germany, however, allows seven days to assess content that is not clearly illegal, whereas the one- and four-hour removal requirements in the EU and Turkey respectively apply only to orders issued by competent authorities.[/mfn] As Tech Against Terrorism and other tech policy experts have cautioned in connection with other regulation, the combination of short removal deadlines and hefty fines is likely to encourage tech companies to err on the side of over-removal to avoid penalties, and therefore to remove content that is neither harmful nor illegal.

The requirement to use automated tools to monitor online platforms for harmful content is also present in other regulation passed since 2017: Pakistan and India include similar requirements, whereas EU and German regulations incentivise platforms to use automated tools. However, Canada’s provision on automated tools stands out for the breadth of its application, effectively requiring tech companies to monitor their services systematically for the five categories of harmful content. Amongst the regulations analysed in the Online Regulation Series since 2020, Pakistan’s 2020 Social Media Rules come closest to Canada’s proposal in breadth of scope, with their requirement for tech companies to prevent the uploading and livestreaming of terrorist and extremist content, hate speech, and content inciting violence.

Experts Daphne Keller and Michael Geist[mfn]See: Geist Michael (2021), Submission to the Government of Canada Consultation on the Proposed Approach to Address Harmful Content Online; and Failure to Balance Freedom of Expression and Protection from Online Harms: My Submission to the Government’s Consultation on Addressing Harmful Content Online.[/mfn] have raised concerns about the mandated use of automated tools, which, coupled with the provision on sharing information with law enforcement, risks impacting communities that are marginalised and discriminated against in society.[mfn]See: Keller Daphne (2021), Five Big Problems with Canada’s Proposed Regulatory Framework for “Harmful Online Content”, Tech Policy Press; and Hatfield Matt (2021), A First Look at Canada’s Harmful Content proposal, Open Media.[/mfn] According to Keller and Geist, the bill risks having a disproportionate impact on communities affected by the societal biases reflected in automated and filtering tools, and “a disparate impact on those individuals and communities who already face structural oppression in the criminal justice system.”[mfn]Bennett Owen (2021), Mozilla suggests improvements to Canada’s online harms agenda, Mozilla Blog.[/mfn] The requirement to share information with law enforcement has also been criticised, with Keller highlighting that such a requirement “effectively deputizes platforms to invade users’ privacy and free expression rights in ways that the government, acting alone, cannot” and is “unprecedented” in democratic countries.[mfn]Keller does note that a similar reporting rule was introduced in Germany in 2021, but that this rule is currently being challenged.[/mfn]

Uncertainties as to the scope of the law in practice

Whilst Tech Against Terrorism recognises that the technical paper published by the Canadian Government is not the law in its final form, the flexibility and breadth inherent in the bill as presented create uncertainty about what the law will mean in practice and about the scope of its future application, in particular around the different powers of “regulation” delegated to the executive branch of the Canadian Government.

The discussion guide explains that the law is mainly targeted at major tech companies offering online communications services in Canada – the likes of Facebook, TikTok and Pornhub are specifically mentioned – whilst excluding private communications. However, there is no threshold delimiting the scope of the law’s application by platform size, whether in terms of user base or available resources. The bill also lacks a precise definition of what constitutes “private communications”, and therefore does not clearly establish which platforms will be excluded from the obligation to monitor their services for harmful content.

As the EFF noted in its criticism of the law, there is also a risk that legislators may decide that private chat groups with a large number of participants are public in nature. Private communications, including those protected by end-to-end encryption (E2EE), would thus fall within the scope of the law’s requirement for platforms to monitor harmful content. This is an approach previously adopted in other countries that undermines E2EE and the online privacy and security it guarantees.[mfn]On the risks to online security and privacy presented by requirements to monitor E2EE-protected communications, see Tech Against Terrorism’s report on “Terrorist Use of E2EE: State of Play, Misconceptions, and Mitigation Strategies”.[/mfn]

The definition of terrorist content in the technical paper is circular: it defines terrorist content as content that “actively encourages terrorism and which is likely to result in terrorism”.[mfn]A similar criticism of a circular definition of terrorist and harmful content was raised by Tech Against Terrorism with regard to the draft Online Safety Bill (OSB) in the UK. See our entry on the UK regulatory framework in the Online Regulation Series Handbook (pp. 107-116), and our submission to the UK draft OSB consultation.[/mfn] This, coupled with the requirement for platforms to assess whether content is harmful and therefore illegal under the bill, effectively delegates the adjudication of what is harmful or illegal to private tech companies, when limits to freedom of expression should be adjudicated by independent judicial authorities in line with international human rights standards. This delegated responsibility means that platforms are likely to err on the side of caution and over-remove content, possibly removing legal and non-harmful material in the process, to avoid being sanctioned.

The technical paper also specifies that further requirements may be added to the law by statutory instrument, and that the law may be modified by the same means – for instance, by amending the removal deadline depending on the category of content. The breadth and flexibility of the law are evident in such provisions, which risk a constant expansion of executive power in Canada.[mfn]McSherry and Rodriguez (2021)[/mfn]

“Sweeping” oversight powers

Keller and Geist, as well as other tech policy and legal experts, have criticised the proposed bill for creating a hefty bureaucratic and regulatory structure with what they call “sweeping regulatory powers”.[mfn]Keller (2021)[/mfn] The Commissioner’s power to grant an inspector access to any document, anywhere and at any time (if the document is believed to be relevant to content moderation) has been criticised as overly intrusive and, according to Daphne Keller, as posing a “glaring restraint problem” for freedom of expression.[mfn]Keller (2021)[/mfn]

Experts have also criticised the Commissioner’s proposed authority to petition the Federal Court to block access to a non-compliant OCSP.[mfn]Stevens Yuan (2021), To protect our privacy and free speech, Canada needs to overhaul its approach to regulating online harms, The Conversation[/mfn] Provisions to block access to platforms are rarely included in online regulation, with one notable exception being the Ministerial Regulation in Indonesia, which grants the Indonesian authorities the power to block access to non-compliant platforms.

The Online Regulation Series 2020 | Canada

Canada’s regulatory framework:

  • Canada’s Communications Future: Time to Act (BTLR), January 2020, is a broad review of the broadcasting and telecommunications legislation in Canada, drawing recommendations for the future of the legislative framework in the country, and calling for the introduction of social media regulation.

  • Canada’s Digital Charter, 2019, lays out Canada’s approach to internet technologies and the online space; with the 9th principle addressing the issue of violent extremism, and underlining that the online space should be “free from hate and violent extremism”.

  • Digital Citizen Initiative, Canada’s strategy to build “resilience against online disinformation and […] support a healthy information system”, focused on research and “citizen” activities.

  • In January 2021, Heritage Minister Steven Guilbeault announced that a new regulatory framework for tech platforms was expected to be introduced in the House of Commons in 2021. This intention was set out in the Mandate Letter from Prime Minister Justin Trudeau, which tasked Minister Guilbeault with developing a new regulatory framework for social media: “starting with a requirement that all platforms remove illegal content, including hate speech, within 24 hours or face significant penalties. This should include other online harms such as radicalization, incitement to violence, exploitation of children, or creation or distribution of terrorist propaganda.”

  • In June 2021, the government of Canada announced new regulatory measures to “better protect Canadians from hate speech and online harms”. The initiative – introduced by the Department of Justice, the Department of Canadian Heritage, and Public Safety Canada – aims to tackle “the most extreme and harmful speech” both online and offline. The proposed legislation will amend the Canadian Human Rights Act, the Criminal Code, and the Youth Criminal Justice Act to redefine hate speech and hatred. These amendments will also provide additional tools to prevent and offer remedies for hate speech and hate crimes. A definition of “hatred” will be added to section 319 of the Criminal Code. In addition, the government of Canada will introduce legislation to tackle harmful content online. This legislation will cover terrorist and hate speech content, as well as content inciting violence.

  • Bill C-10 amending the Broadcasting Act, 2021: In 2019, policymakers began discussing Bill C-10 to amend the existing Broadcasting Act and allow the Canadian Radio-Television and Telecommunications Commission (CRTC) to regulate online streaming services and promote Canadian content. At the time of writing, Bill C-10 is still being discussed in Canada, and whether and how it will cover user-generated content remains unclear.[mfn]Originally, social media content was to be excluded from Bill C-10, which focused on broadcasting and streaming platforms. However, the first quarter of 2021 was marked by heightened discussion in Canada regarding Bill C-10’s scope of application, following the Heritage committee’s (in charge of drafting the bill) decision to remove a clause exempting user-generated video content from the Bill. An amendment was subsequently added; as of May 2021, the current version of the Bill limits its applicability to social media content to promoting the discoverability of content by Canadian creators. See: Karadeglija Anja (2021a), Bill C-10 amendment that would exempt social media content from regulation voted down, The National Post.[/mfn]

  • Bill C-10 is the first of a set of legislative frameworks aimed at increasing government-led regulation of online platforms and content in the country, paving the way for the possibility of mandating content moderation policies and processes.

Relevant national bodies:

  • Canadian Heritage, which oversees the Digital Citizen Initiative and the drafting of the upcoming regulation for online platforms.

Key takeaways for tech platforms:

  • Tech platforms are exempt from liability for user-generated content.

  • Canada has favoured a self-regulatory approach to the moderation of online content and speech, engaging in cross-sector initiatives to support the tech sector in countering terrorist and violent extremist use of the internet.

  • Canada’s Communications Future: Time to Act (2020), known as the BTLR, offers a blueprint for regulating online content in the country, calling for tech companies to be held liable for harmful content on their platforms.[mfn]At the time of writing, there are still uncertainties about whether the recommendations made in the BTLR will become law in Canada.[/mfn]