You can access the ORS Handbook here.
To follow up on our previous blog post on academic analysis of the state of global online regulation, we take a future-oriented approach here and provide an overview of academics' and experts' suggestions and analysis of what the future of online regulation might bring.
Systemic duty of care and the future of content moderation:
With certain policy-makers around the world, notably in the UK, pursuing the possibility of mandating platforms to abide by a “systemic duty of care” (SDOC) for online content regulation, Daphne Keller has laid out possible models that an SDOC could follow, and their implications for tech platforms’ immunity from legal liability, content moderation, human rights, and smaller tech platforms. Keller divides SDOCs into two possible models: a prescriptive model and a flexible model.
Section 230: A landmark reform?
Following the Trump Administration’s executive order in May 2020, which directed independent rule-making agencies to consider regulations narrowing the scope of Section 230, the US witnessed a wave of proposed bills and Section 230 amendments from both government and civil society.
A 2019 report published by the University of Chicago’s Booth School of Business suggests transforming Section 230 into a “quid pro quo benefit”: platforms would have a choice between adopting additional duties related to content moderation or forgoing some or all of the protections afforded by Section 230. Paul M. Barrett embraces this concept and argues that lawmakers should adopt it for Section 230 reform, emphasising that it provides a workable organising principle to which any number of platform obligations could be attached, and that “the benefits of Section 230 should be used as leverage to pressure platforms to accept a range of new responsibilities related to content moderation”. Examples of such additional responsibilities include requiring platform companies “to ensure that their algorithms do not skew towards extreme and unreliable material to boost user engagement” and requiring platforms to disclose data on their content moderation methods, advertising policies, and which content is being promoted and to whom. Barrett also calls for the creation of a specialised federal agency, a “Digital Regulatory Agency”, which would oversee and enforce the new platform responsibilities under the “quid pro quo” model and focus on making platforms more transparent and accountable.
Jack Balkin has suggested that governments make liability protections conditional, rather than the default, on companies “accepting obligations of due process and transparency”. Similarly, Danielle Citron has argued that immunity should be conditioned on companies having “reasonable” content moderation standards in place, with such reasonableness determined by a judge.
Suggestions for new governance or regulation models:
International human rights law
David Kaye, the former UN Special Rapporteur on Freedom of Expression, has suggested that tech companies ground their content moderation policies in international human rights law (IHRL). Kaye argues that this is the best way to address several of the challenges highlighted by academics in our previous post. For example, international human rights law offers a global structure (as opposed to national law) and provides a framework for ensuring that both companies and governments comply with human rights standards in a transparent and accountable manner. Further, Kaye notes that Article 19 of the International Covenant on Civil and Political Rights (ICCPR), which guarantees freedom of expression, also provides for cases where speech can be restricted, for instance where necessary to protect the rights of others or to safeguard public health and national security. Kaye argues that this means platforms would be able to take action on legitimately harmful and illegal content.
Evelyn Douek, whilst acknowledging that this approach has several benefits, has questioned whether it would be effective. Douek notes that there is a “large degree of indeterminacy” in IHRL, which in her view means that it will be up to platforms to assess content against these standards. Further, Douek worries that such standards could in theory give companies a basis for allowing legitimately harmful content to remain online (or vice versa), since platforms and local speech cultures might differ in their interpretation of IHRL.
Social media councils
Civil society group Article 19 has suggested the creation of an independent “Social Media Council”. It argues that this would increase accountability and transparency with regard to content moderation, without governments restricting speech through regulation targeting online content. The Council would be based on a “self-regulatory and multi-stakeholder approach” with “broad representation” from various sectors, and would apply human rights standards when reviewing content moderation decisions. Loosely modelled on other self-regulatory mechanisms such as press regulatory bodies, the Council’s decisions would not be legally binding, but participating platforms would commit to executing them.
This suggestion has been supported by David Kaye and Stanford University’s Global Digital Policy Incubator (GDPi). Following a working meeting to discuss the proposal, GDPi suggested that the social media council should avoid adjudicating specific cases and instead develop and set core guidelines for companies. Article 19 differed, advocating for the Council to have an adjudicatory role and serve as an appeals and review body, with a first version launched on a national scale as a trial.
Resources:
Keller Daphne (2020), Systemic Duties Of Care And Intermediary Liability, The Center for Internet and Society, Stanford University.
Keller Daphne (2020), Broad Consequences Of A Systemic Duty Of Care For Platforms, The Center for Internet and Society, Stanford University.
Citron Danielle and Wittes Benjamin (2017), The Internet Will Not Break: Denying Bad Samaritans Section 230 Immunity, Fordham Law Review.
Citron Danielle and Franks Mary Anne (2020), The Internet As a Speech Machine and Other Myths Confounding Section 230 Reform, Boston University School of Law, Public Law Research Paper.
Citron Danielle (2020), Section 230's Challenge to Civil Rights and Civil Liberties, Boston University School of Law, Public Law Research Paper.
Article 19 and UN Special Rapporteur on Freedom of Opinion and Expression (2019), Social Media Councils - from concept to reality.
McKelvey Fenwick, Tworek Heidi, Tenove Chris (2019), How a standards council could help curb harmful content online, Policy Options.
Balkin Jack (2020), How to Regulate (and Not Regulate) Social Media, Yale Law School, Public Law Research Paper.
Douek Evelyn (2020), The Limits of International Law in Content Moderation, UCI Journal of International, Transnational, and Comparative Law (forthcoming 2021).
Barrett Paul M. (2020a), Regulating Social Media: The Fight Over Section 230 — and Beyond, NYU Stern.
Barrett Paul M. (2020b), Why the Most Controversial US Internet Law is Worth Saving, MIT Technology Review.