
The Online Regulation Series | The United States

You can access the ORS Handbook here

Online regulation and content moderation in the United States is defined by the First Amendment right to freedom of speech and by Section 230 of the Communications Decency Act of 1996, which establishes a unique level of immunity from legal liability for tech platforms. This framework has broadly shaped the development of the modern Internet, with effects reaching well beyond the US. Recently, however, the Trump Administration issued an executive order directing independent rule-making agencies to consider regulations that narrow the scope of Section 230 and to investigate companies engaging in “unfair or deceptive” content moderation practices. This has unsettled the online regulation framework and prompted a wave of proposed bills and Section 230 amendments from both government and civil society.

The US regulatory framework:

  • First Amendment law under the US Constitution outlines the right to freedom of speech for individuals and prevents the government from infringing on this right, for example by banning certain types of speech.
  • Section 230 of the Communications Decency Act of 1996 establishes intermediary liability protections related to user-generated content in the US, meaning that tech companies are not held liable for content posted by their users.

Relevant national bodies:

  • Federal Communications Commission (FCC) regulates interstate and international communications by radio, television, wire, satellite and cable in all 50 states, the District of Columbia and US territories.
    • An independent US government agency overseen by Congress, the commission is the primary domestic authority for communications law, regulation and technological innovation.

Key takeaways for tech companies:

  • Because First Amendment law constrains only the government, Internet platforms remain in control of their own content policies and codes of conduct.
  • Under Section 230, web hosts, social media networks, website operators, and other intermediaries are largely shielded from being held liable for user-generated content. Companies are able to moderate content on their platforms without incurring legal liability.
  • However, this might change soon. There are currently two bipartisan bills to amend Section 230 which experts say have a chance of passing: the EARN IT Act and the PACT Act, both discussed below.
  • Further, President Trump issued an Executive Order in May 2020 directing independent rule-making agencies, including the FCC, to consider regulations that narrow the scope of Section 230 and to investigate companies engaging in “unfair or deceptive” content moderation practices.

Freedom of expression online

Legally speaking, regulation of online content and of content moderation practices by technology companies operating in the US has been limited to date. This is due to two principal legal frameworks that shape freedom of expression online in the US: the First Amendment to the US Constitution and Section 230 of the Communications Decency Act (CDA).

The First Amendment outlines the right to freedom of speech for individuals and prevents the government from infringing on this right. Internet platforms are therefore able to establish their own content policies and codes of conduct. Section 230 of the CDA, in turn, establishes intermediary liability protections related to user-generated content in the US. The broad immunity granted to technology companies in Section 230 states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Companies are, therefore, able to moderate content on their platforms without being held liable for it. In other words, online platforms have the freedom to police their sites and restrict material as they see fit, even if the speech in question is constitutionally protected. For example, this protects platforms from lawsuits if a user posts something illegal, although there are exceptions for copyright violations, material related to sex trafficking, and violations of federal criminal law. It is important to note that Section 230 of the CDA is unique to American law: European countries, Canada, Japan, and the vast majority of other countries do not have similar statutes on their books.

The historical context behind Section 230 is complex, but it gives an illuminating look into the culture of free speech in the US and its relation to online content. The statute was the product of debates over pornography and other “obscene” materials in the early 1990s. With the advent of early internet services like CompuServe and Prodigy, US courts sought to determine whether those service providers should be treated as “bookstores” (neutral distributors of information) or as “publishers” (editors of that information) when adjudicating their standing under the First Amendment. Courts ruled that CompuServe was immune from liability because it operated like a bookstore, while Prodigy did not receive the same immunity because it enforced its own content moderation policies – thereby making it a publisher. In other words, companies were incentivised not to engage in content moderation in order to preserve their immunity. Section 230 of the CDA sought to correct this mismatch of incentives by preserving the immunity of platforms and providers even when they engage in content moderation.

Recent amendment proposals

The question of content moderation has to some extent developed into a partisan cleavage between the liberal Democratic Party and the conservative Republican Party in recent years. Democrats tend to claim that online platforms do not moderate enough and are therefore complicit in the spread of hate speech and disinformation. Republicans, on the other hand, often argue that these companies moderate too much, producing an alleged ‘liberal bias’ that they say undermines ‘conservative’ content. As a result, there has been a flurry of recent legislative and executive proposals to influence content moderation.

In June 2019, Republican Senator Josh Hawley (R-MO) introduced the “Ending Support for Internet Censorship Act,” which seeks to amend Section 230 so that larger internet platforms may only receive liability protections if they are able to demonstrate to the Federal Trade Commission that they are “politically neutral” platforms. However, the Act raises First Amendment concerns, as it tasks the government with regulating what platforms can and cannot remove from their websites and requires platforms to meet a broad, undefined standard of political neutrality.

President Trump issued an executive order in May 2020 directing independent rule-making agencies, including the Federal Communications Commission, to consider regulations that narrow the scope of Section 230 and to investigate companies engaging in “unfair or deceptive” content moderation practices.[1]

Most recently, on June 17, 2020, Senator Hawley introduced the Limiting Section 230 Immunity to Good Samaritans Act. The bill would prevent major online companies from receiving the protections of Section 230 of the CDA unless they revised their terms of service to include a duty to operate “in good faith” and publicised their content moderation policies. According to Senator Hawley, “the duty of good faith would contractually prohibit Big Tech from discriminating when enforcing the terms of service they write and failing to honor their promises”. This would open companies up to being sued for breaching their contractual duties, with damages of $5,000 per claim or actual damages, whichever is higher, in addition to attorney’s fees.

Following President Trump’s executive order, the Department of Justice issued a proposal in September 2020 to legislatively roll back Section 230. The draft legislation focuses on two areas of reform which, according to the DOJ, are “necessary to recalibrate the outdated immunity of Section 230”: promoting transparency and open discourse, and addressing illicit activity online. The DOJ also shared its own recommendations for altering Section 230 with Congress. If enacted, the DOJ recommendations would pave the way for the government to impose steep sanctions on platforms that fail to remove illicit content, including content related to terrorism.

Bipartisan bills

According to an evaluation of the proposed Section 230 bills by Paul M. Barrett, deputy director of the NYU Stern Center for Business and Human Rights, two bipartisan Senate bills “have at least a chance of eventual passage”: the EARN IT Act and the PACT Act.

  • The Eliminating Abusive and Rampant Neglect of Interactive Technologies (EARN IT) Act, proposed by Senators Lindsey Graham (R-SC) and Richard Blumenthal (D-CT) in March 2020: The general idea behind the EARN IT Act is that tech companies would have to “earn” Section 230 immunity based on their content moderation practices, rather than being granted immunity by default. The bill was proposed by lawmakers as a way to counter child sexual abuse material (CSAM). To that end, the bill as introduced in March would establish a National Commission on Online Child Sexual Exploitation Prevention to set content moderation standards that tech companies would have to meet in order to earn Section 230 protections. The bill was amended in July 2020 so that the standards set by the commission would not be requirements but voluntary recommendations. However, the amended bill would still allow states to sue tech platforms if child sexual abuse material appears on their platforms. Critics say that the bill poses a threat to Section 230 protections and to encryption: for example, if child sexual abuse material is sent through an encrypted messaging platform, states would be able to sue the platform and hold it responsible for being unable to moderate those messages. The Senate Judiciary Committee voted to approve the EARN IT Act for a floor vote on July 2, 2020, and, according to the Electronic Frontier Foundation (EFF), a companion bill has since been introduced in the House of Representatives.

  • The Platform Accountability and Consumer Transparency (PACT) Act, introduced by Senate Majority Whip John Thune (R-SD) and Senator Brian Schatz (D-HI) in June 2020: The PACT Act focuses on promoting platform transparency and accountability. The Act would require platforms to explain their content moderation policies to users and to provide detailed quarterly statistics on items removed, down-ranked, or demonetised. It would amend Section 230 to give larger platforms just 24 hours to remove content that a court has determined to be unlawful. Platforms would also have to develop a complaint system that notifies users of takedowns within 14 days and provides for appeals. Another part of the Act would allow federal regulators to bring civil enforcement lawsuits against platforms. According to Access Now’s assessment of the PACT Act, its notice-and-takedown mechanism “lacks critical safeguards and clearer procedural provisions”, but the proposal “has the potential to serve as a valuable framework with some restructuring and tweaks”.


Beyond Government – Scholars and Civil Society

Scholars and civil society have developed their own reports and recommendations to amend Section 230, and some have even proposed entirely new regulatory frameworks and agencies to oversee US content moderation.

Besides the government proposals, a 2019 report published by the University of Chicago’s Booth School of Business suggests transforming Section 230 into a “quid pro quo benefit.” Platforms would have a choice: adopt additional duties related to content moderation, or forgo some or all of the protections afforded by Section 230.

Another proposal comes from Danielle K. Citron, a law professor at Boston University. Citron has suggested amending Section 230 to include a “reasonableness” standard, which would condition immunity on “reasonable content moderation practices rather than the free pass that exists today”. Reasonableness would be assessed by a judge at a preliminary stage of a lawsuit, based on a platform’s overall policies and practices.

Regulatory framework proposals beyond Section 230

Others have studied yet another idea: the creation of a new federal agency specifically designed to oversee digital platforms. A study released in August 2020 by the Harvard Kennedy School’s Shorenstein Center on Media, Politics and Public Policy proposes the formation of a Digital Platform Agency. The study recommends that the agency focus on promoting competition among internet companies and protecting consumers on issues such as data privacy.

In a report, the Transatlantic Working Group (TWG) emphasised the need for a flexible oversight model, in which authorising legislation could extend the jurisdiction of existing agencies or create new ones. As possible examples of existing agencies, the TWG cites the US Federal Trade Commission, the French Conseil Supérieur de l’Audiovisuel, and the British Office of Communications (Ofcom). The TWG’s proposal overlaps with some of the goals of the PACT Act, for instance in requesting greater transparency: the TWG envisions a digital regulatory body that requires internet companies to disclose their terms of service and their enforcement mechanisms.

[1] Critics have underlined that the enforcement of this order is legally debatable and raises questions regarding the administration’s approach to regulating content moderation, given that the First Amendment does not allow the government to determine what a private company can or cannot express. See: Yaraghi Niam, “Why Trump’s online platform executive order is misguided”, Brookings.

Resources:

Barrett Paul M. (2020a), “Regulating Social Media: The Fight Over Section 230 — and Beyond”, NYU Stern

Barrett Paul M. (2020b), “Why the Most Controversial US Internet Law is Worth Saving”, MIT Technology Review

Brody Jennifer, Null Eric (2020), “Unpacking the PACT Act”, Access Now

Feiner Lauren (2020), “GOP Sen. Hawley unveils his latest attack on tech’s liability shield in new bill”, CNBC

Hawley Josh (2020), “Senator Hawley Announces Bill Empowering Americans to Sue Big Tech Companies Acting in Bad Faith”

Mullin Joe (2020), “Urgent: EARN IT Act Introduced in House of Representatives”, Electronic Frontier Foundation

Newton Casey (2020), “Everything You Need to Know About Section 230”, The Verge

New America (2019), “Bill Purporting to End Internet Censorship Would Actually Threaten Free Expression Online”

Ng Alfred (2020), “Why Your Privacy Could be Threatened by a Bill to Protect Children”, CNET

Robertson Adi (2019), “Why the Internet’s Most Important Law Exists and How People Are Still Getting it Wrong”, The Verge

Singh Spandana (2019), “Everything in Moderation: An Analysis of How Internet Platforms Are Using Artificial Intelligence to Moderate User-Generated Content”, New America

Yaraghi Niam (2020), “Why Trump’s online platform executive order is misguided”, Brookings

