The Online Regulation Series | Insights from Academia I

Written by Claudia Wagner | Nov 23, 2020

You can access the ORS Handbook here

In this post, we look at academic analysis of global efforts to regulate online content and speech.

Key takeaways:

  • Academics agree that global regulation of online speech has changed drastically over the past two decades, and that there has been a sharp increase in regulatory efforts over the past four years.
  • Generally, academics agree that there is a need for improved regulatory measures to create a healthier online environment.
  • Overall, academics are concerned that current regulatory efforts and proposals do not account for how content moderation works in practice, risk having a negative impact on freedom of expression and the rule of law, and ultimately serve tech company interests rather than producing accountability.

    Background: evolution of content moderation

    Academic research demonstrates that online regulation has evolved drastically since the emergence of the internet. Whilst big tech companies initially had rudimentary moderation guidelines,[1] most of them now have intricate moderation policies and mechanisms in place. Because the most dominant global speech platforms were founded in the United States, the online speech landscape has largely been shaped by US First Amendment thinking[2]. However, academics highlight that this is rapidly changing.

    Jonathan Zittrain has, in the context of analysing digital governance generally, divided the period since the emergence of the internet into three eras:

  • The rights era, in which users’ right to expression was prioritised by tech companies and largely accepted by the public, with objectionable content seen as a price to pay for the democratised speech culture that the internet afforded.
  • The public health era, which saw companies shift towards an approach weighing the risks and benefits of allowing certain material – such as terrorist content or incitement to violence – which inevitably led to restrictions of speech on platforms.
  • The process era, in which Zittrain says the digital governance field requires “new institutional relationships” that can account for the fact that not all views will or can be reconciled, but also allows for an accountable process in which such differences are settled.

    Evelyn Douek has built on this, focusing on content moderation specifically. She describes the first era as “posts-as-trumps”, in which “the First Amendment’s categorical and individualistic” take on speech adjudication allowed users to “post what they wanted.” Since this is no longer seen as tenable, given the potentially harmful speech such policies allow, large platforms have adopted a proportionality approach which acknowledges that free speech should be restricted in certain cases. Douek highlights that this is the dominant form of rights adjudication outside of the United States. Further, Douek argues that since content moderation is “impossible” to get perfectly right, tech companies should focus on probability: tech companies and lawmakers alike should accept that platforms will make errors, and focus on deciding what types of error are acceptable to produce a healthy online environment. This type of probabilistic enforcement is, according to Douek, the best solution between the extremes of “severely limiting speech or letting all the posts flow”.

    Platforms as de facto regulators

    Academics show that, prior to the recent regulatory push, regulation was mainly outsourced to tech companies, something which coincided with platforms taking more of a “public health” or proportionality approach to moderation. Kate Klonick has described the larger tech companies as the “New Governors”: bodies that “sit between the state, speakers, and publishers” and are able to empower individual users and publishers.

    Whilst academics disagree over the extent to which governments have spurred this trend, there is general agreement that governments, until recently, have been content to let platforms act as de facto regulators. Daphne Keller, Douek, and Danielle Citron all highlight this, noting that governments have “outsourced” the policing of the internet for illegal or “harmful” content to tech platforms, something which Jack Balkin in 2014 labelled “collateral censorship.” All have raised concerns about the downsides of this model, particularly what they see as its lack of accountability.

    Terrorist use of the internet and terrorist content have not been an exception to this rule. On the contrary, several of the mechanisms that scholars note have contributed to the “platforms as regulators” trend are aimed at quelling terrorist or extremist content online. Citron has highlighted the potential negative implications of this. Examining the European Union’s (EU) engagement with tech platforms to tackle hate speech and extremist content, Citron argues that the EU – via a combination of voluntary industry efforts and “threats” of regulation – has turned tech companies into arbiters of extremist speech. According to Citron, this in turn leads to legal content being removed, something she calls “censorship creep”. Academics often include so-called Internet Referral Units (IRUs)[3] in this trend as well.

    Academics also see some of the industry collaborative initiatives created to tackle various forms of illegal and harmful content, such as child sexual exploitation and terrorist content, as a result of government outsourcing. Douek has criticised such industry coalitions – including the Global Internet Forum to Counter Terrorism (GIFCT) – which she calls “content cartels”, for their lack of accountability and transparency (more on this in our piece on tech sector initiatives).

    Government-led regulation on the rise

    However, as this series has shown, in recent years regulation aimed at tackling illegal or harmful online content has begun to emerge across several jurisdictions. Academics note that terrorist use of the internet, and particularly terrorist content, is at the forefront of many such regulatory efforts. Some of the landmark regulatory proposals[4] that we have covered in this series have a strong, or at least partial, focus on terrorist content. This is not surprising, given the seriousness of the threat. However, Keller has – in a podcast episode with us at Tech Against Terrorism – noted that there is an absence of terrorism experts in online regulation endeavours, and has warned that this leads to misguided policy proposals that risk having limited effect in actually tackling terrorism and terrorist use of the internet.

    It is worth examining the patterns that academics have identified across the regulation introduced in the last few years. Broadly, scholars point to the following trends:

  • Legal liability shields are being removed, made conditional, and questioned
  • Removal deadlines, and fines for failing to meet them, are frequently introduced to expedite content removal
  • Mandates to remove “harmful” material, even where it is legal, are increasingly included in legislation, with such content sometimes assessed against company Terms of Service
  • Increasingly, governments are requesting that tech platforms carry out the extraterritorial enforcement of national law
  • Duty-of-care models, in which regulators aim to encourage systemic change in tackling illegal and harmful speech, are increasingly investigated as options by lawmakers
  • Governments continue to outsource adjudication of content’s legality to tech companies, but now increasingly by enshrining such mechanisms in law

    Questioning of intermediary liability shields

    Perhaps the most consequential change that global regulation has touched upon is that of legal liability for tech platforms, an area in which they have enjoyed exemptions in the US, Europe, and various other jurisdictions for more than two decades. Several regulations propose a move away from the current scheme, under which platforms are not held legally liable for what users post on their platforms. Zittrain notes that this is not new, as intermediary liability is historically where “the most significant regulatory battles have unfolded.”

    There is general academic consensus that removing legal liability shields is concerning, particularly because of censorship risks. As both Keller and Tiffany Li note, the two-decade-long track record of intermediary liability laws indicates that when shields are removed, platforms will almost always err on the side of removal. However, this does not mean that academics consider the current scheme flawless, with some arguing that laws like Section 230 might need to change to encourage “improved” content moderation amongst tech companies (more on this in our next blogpost).

    Removal deadlines

    Academics have noted an increase in removal deadlines in global regulation. Such deadlines compel companies to remove illegal or harmful content within a specified timeframe.[5] Failure to comply with such deadlines usually results in financial penalties. David Kaye, former UN Special Rapporteur on Freedom of Expression, and Fionnuala Ní Aoláin, the UN Special Rapporteur on Counter-Terrorism and Human Rights (both of whom are academics specialising in human rights law), have warned that such short timelines do not give platforms enough time to assess content’s legality, and might therefore lead to platforms removing legal content to avoid penalties.

    Douek has further questioned the efficacy of punitive measures that focus on individual cases (such as failure to remove content within a given timeframe). First, she argues that this will create “bad incentive problems” and give more weight to platforms’ own interests (in this case, avoiding fines) than to meaningful accountability. Second, she argues that removal deadlines are based on an overly optimistic belief in automated content removal tools: such requirements are essentially an error choice in which platforms will choose to err on the side of removal, whereas lawmakers seem to believe that platforms can remove “the bad without the good.”

    Mandating removal of “harmful” content

    Academics have also highlighted, mostly with concern, the introduction of legislation that targets “harmful” content. Academics, as well as human rights activists, are concerned because “harmful” is rarely precisely defined, because several categories of potentially “harmful” speech may nonetheless be legal, and because laws compelling companies to remove such content will therefore result in the removal of legal speech.

    Several academics have flagged that governments sometimes base such removal requests on companies’ Terms of Service (ToS). As Li notes, removing content via company ToS is often faster than going through a formal legal process. Furthermore, company ToS are often far more expansive in the “harms” they prohibit than national legislation. This is not surprising: as Klonick points out, companies often need to be more restrictive than national legislation out of “necessity to meet users’ norms for economic viability.” However, government leveraging of private companies’ speech policies may have negative consequences for the rule of law and accountable process. Writing about the proposed EU regulation on online terrorist content, Keller has referred to this as the “rule of ToS”, and has warned that it might lead to governments “exporting” national speech restrictions across the EU.

    Extraterritorial enforcement of national law

    Scholars note that whilst the largest tech companies, having been founded in the US, initially shaped their content standards on First Amendment norms, this approach has had to be adapted for global audiences. Klonick highlights how Facebook, YouTube, and Twitter all wrestled with challenges arising from their platforms allowing speech that is acceptable in American speech culture but unlawful or unacceptable elsewhere.[6] Companies often solve this by “geo-blocking” content: making it invisible to users in the jurisdiction where it is unlawful whilst keeping it available elsewhere, since it does not violate the companies’ own standards. Increasingly, governments and courts have begun to compel companies to remove access to content violating national legislation worldwide (Canada, France, Austria, and Brazil are some examples), a development which experts are concerned about because it amounts to extraterritorial enforcement of national law.

    Duty-of-care models

    Some countries[7] have considered a so-called duty-of-care model. Such models aim to encourage more systemic change amongst companies, as opposed to targeting illegal and harmful content via specific measures such as removal deadlines. Many academics welcome this systemic approach. Li highlights that regulation at the systemic level is likely to be easier and more effective than regulating content itself, particularly given the freedom of expression concerns that content-level regulation entails. Similarly, Douek argues that regulation should focus on the “systemic balancing” of platforms rather than on specific types of speech.

    However, Keller has raised questions about the systemic duty-of-care model and how it would function alongside existing intermediary liability protections. For example, if a duty-of-care model requires companies to proactively seek out and remove content, would that mean they are seen as active curators and therefore lose the liability protections currently afforded under the EU’s E-Commerce Directive or Section 230 in the US? Keller highlights that such a model might actually make it more difficult to hold platforms accountable, as platforms could simply point to their obligations under the duty-of-care model.

    Outsourcing adjudication of illegality to the tech sector

    Academics have noted that, despite the move by certain governments to regulate content more directly, several governments still rely on companies to adjudicate on content’s illegality and have made this a key requirement of the law[8]. Whilst, as Douek notes, sheer scale and technical requirements might always leave platforms as the de facto regulators of speech, there are concerns that outsourcing adjudication of content legality to private companies rather than the legal system will undermine the rule of law. According to Kaye, this lack of judicial oversight is incompatible with international human rights law.

    Keep an eye out for our next blogpost, in which we will discuss the ways forward that academics have suggested to improve existing regulation and content moderation efforts.

    Did we miss anything that merits examination? Feel free to get in touch!

    Resources

    Balkin Jack (2019), How to regulate (and not regulate) social media.

    Li Tiffany (2019), Intermediaries and private speech regulation: a transatlantic dialogue – workshop report, Boston University School of Law.

    Zittrain Jonathan (2019), Three Eras of Digital Governance.

    Caplan Robyn (2018), Content or Context Moderation? Artisanal, Community-Reliant, and Industrial Approaches, Data & Society.

    Citron Danielle (2018), Extremist Speech, Compelled Conformity, and Censorship Creep, Notre Dame Law Review.

    Klonick Kate (2018), The New Governors: The People, Rules, and Processes Governing Online Speech, Harvard Law Review.

    Keller Daphne (2018), Internet Platforms: Observations on Speech, Danger, and Money, Hoover Institution.

    Keller Daphne (2019a), The EU’s Terrorist Content Regulation: Expanding the Rule of Platform Terms of Service and Exporting Restrictions from the EU’s Most Conservative Member States, Stanford University Center for Internet and Society.

    Keller Daphne (2019b), Who Do You Sue?, Hoover Institution.

    Keller Daphne (2020), Systemic Duties of Care and Intermediary Liability, Stanford University Center for Internet and Society.

    Douek Evelyn (2020), Governing Online Speech: From 'Posts-As-Trumps' to Proportionality and Probability, Columbia Law Review.

    Macdonald Stuart, Giro Correia Sara, and Watkin Amy-Louise (2019), Regulating terrorist content on social media: automation and the rule of law, International Journal of Law in Context.

    Kaye David (2019), Speech Police: The Global Struggle to Govern the Internet.

    [1] Both Facebook and YouTube initially had a one-page document to guide decision-making.

    [2] Meaning an approach that allows nearly all forms of speech (in line with the First Amendment of the US Constitution) rather than restricting potentially harmful speech, as many other countries do via legislation (for example, laws against Holocaust denial).

    [3] Law enforcement bodies operating within national or regional police mechanisms and reporting suspected terrorist content to tech companies for assessment and takedown against company ToS.

    [4] Including in the European Union, the United Kingdom, France, Pakistan, and the Philippines.

    [5] In the proposed French law the deadline was 24 hours (one hour for terrorist and child sexual abuse material); in the proposed EU regulation it is one hour; and in Australia companies are compelled to remove content “expeditiously” (without a specified timeframe).

    [6] Early examples of this challenge include content defaming the late Thai King Bhumibol or the founder of modern Turkey, Mustafa Kemal Atatürk.

    [7] The most notable case being the United Kingdom.

    [8] Germany’s NetzDG law is one example.