Poland’s approach to online content has until recently focused on blocking terrorist content under the 2016 Anti-Terrorism Act. However, the so-called “anti-censorship bill” introduced by Law and Justice[mfn]Prawo i Sprawiedliwość[/mfn] (the incumbent national-conservative party) could radically change the practice of content moderation in Poland. The law would both require tech companies to swiftly remove “illegal” content, including content that is not otherwise considered criminal in Poland, and prevent them from moderating platforms based on their own Terms of Service.
Poland’s regulatory framework:
Proposed legislation:
Key takeaways for tech companies:
Anti-terrorism act and surveillance law:
“Anti-censorship” law
Tech Against Terrorism's analysis and commentary
An “anti-censorship” approach to moderation – centralised power and control over tech platforms
Since its first reading in December 2020, and as demonstrated by its formal and informal titles, the “anti-censorship” bill has been championed by the Polish government as a guarantee of free expression and accuracy online, framed in opposition to the tech sector’s “mass content blocking in cyberspace”.[mfn]Fraser Malgorzata (2021), Wraca projekt ustawy o ochronie wolności słowa w internetowych mediach społecznościowych, CyberDefence2[/mfn] According to the Ministry of Justice, which drafted the bill, tech platforms engage in “unlawful activities” when they enforce their own content moderation principles.
However, despite its stated objective of protecting freedom of expression, the framing of the bill by the Polish Government as “anti-censorship” represents a significant incursion of the culture war into the online regulatory landscape, casting the law as a defence of conservative users against the censorship of “leftist” tech platforms. Sebastian Kaleta, Poland’s Deputy Justice Minister, stated that the law was meant to protect conservative views online: “We see that anonymous social media moderators often censor opinions which do not violate the law but are just criticism of leftists' agenda”.[mfn]Brennan David (2021), Big Tech Must Be Reined in With Anti-Censorship Rules, Polish PM Says. Newsweek.[/mfn]
The law, introduced by PiS, the ruling right-wing conservative party, was brought forward following decisions by certain tech platforms, including Facebook and Twitter, to suspend the accounts of former US President Donald Trump during and after the storming of the US Capitol on 6 January 2021. This decision was criticised by members of the PiS and the Polish Government, including Deputy Justice Minister Sebastian Kaleta, who attacked the “preventative censorship of the U.S President”.[mfn]Charlish Alan and Wlodarczak-Semczuk Anna (2021), Poland targets big tech with anti-censorship law, Reuters[/mfn] The deplatforming of President Trump led the Polish Government to declare, in the form of a Facebook post by Polish Prime Minister Mateusz Morawiecki, that the tech sector’s “censorship is not and cannot be accepted”. Representatives of the Polish Government and of Poland’s ruling right-wing coalition have since continued to frame the bill as a necessary tool to counter tech sector censorship of conservative views online. Polish Justice Minister Zbigniew Ziobro[mfn]A former member of PiS now leading Solidarna Polska (United Poland), a “hard-right wing junior coalition partner in the Polish Government”. See: Easton Adam (2021), Poland proposes social media 'free speech' law, BBC News.[/mfn] stated that tech companies were limiting freedom of speech, noting that the “victims of ideological censorship” were representatives of various political groups active in Poland.
This intrusion of the culture war presents a real risk that terrorists and violent extremists (TVE) will exploit these revised parameters of acceptable online speech in Poland. TVE actors often attempt to circumvent online counterterrorism efforts by posting “sanitised” content within the bounds of legality.[mfn]Terrorists and violent extremists are generally fully aware of platforms’ content moderation rules and enforcement and will sometimes try to circumvent such rules by posting content within the limits of what is acceptable on a platform to avoid deplatforming. We can easily imagine the same thing happening with the “anti-censorship” bill, with terrorist and violent extremist actors sharing content within the remit of legality on social media platforms subject to the bill.[/mfn] By prohibiting platforms from moderating content in accordance with their own terms of service, the bill also risks preventing platforms from removing violent extremist content that exists in a legal grey area, where platforms often have to go beyond existing legislation to meet public expectations of keeping users safe from harmful content, in particular when such content takes the form of hate speech or incitement to hatred.
Concerns over freedom of expression and state control over online speech
Poland’s Civic Ombudsman, Marcin Wiącek, raised doubts about whether the law would effectively protect users’ fundamental rights, including the right to freedom of expression. Wiącek criticised the establishment of a “Freedom of Expression Council”, which he considers to be state interference with regard to online content, and argued that this Council would lead to a situation of “top-down determinations” of what constitutes acceptable discussion on the internet.[mfn]Poland’s Civic Ombudsman, Marcin Wiącek, (2021).[/mfn] Because the Council’s decisions are to be made solely on the basis of the information provided by the user raising the complaint and the platform,[mfn]Wingfield Richard (2021), Poland: Draft Law on the protection of freedom of speech on online social networking sites.[/mfn] without hearing from third parties, including experts or other users that might have been wronged by the content, such determinations are likely to be both narrow and unaccountable.
Concerns over the Council’s decision-making power and the risk it poses to freedom of expression have also been raised by Richard Wingfield, Head of Legal at Global Partners Digital, in relation to the additional forms of “illegal content” included in the law. In effect, the law creates a legal regime for online content that diverges from the offline one: content that constitutes disinformation or violates decency becomes illegal online despite not otherwise generating criminal liability under Polish law. Despite the framing of the law as aiming to protect freedom of expression online, the vagueness of the standards applicable to illegal content that is not otherwise criminal risks creating an environment in which online speech is assessed according to the government’s own standards, with a five-member Council elected by the Parliament, and thus by the political party in power, deciding what constitutes acceptable speech online.
The “anti-censorship” bill also presents risks to freedom of expression online by allowing little time for tech companies to review reports of illegal content. By requiring tech companies to review reports of illegal content and other user complaints within 48 hours, Poland follows the trend of online regulation that fails to account for the fact that correctly assessing reports of illegal content requires time and expertise if tech companies are not to err on the side of over-removal to avoid penalties. This is even more of a risk in Poland, as the bill states that only up to three representatives can make a determination on the reports received. Not only will tech companies lack the time to undertake adequate adjudication, but they will also be limited in the resources they can deploy to do so.[mfn]Ibid.[/mfn]
As such, the “anti-censorship” bill is contradictory in what it requires of tech platforms. On the one hand, it risks pushing platforms to over-remove content, including content that is not illegal, by requiring them to make rushed decisions within 48 hours or face fines. On the other hand, it discourages them from removing content at all, since a misjudgement on the part of the platform could be brought before the Freedom of Expression Council and potentially also lead to a fine.
A fragmented regulatory landscape with conflicting requirements for tech companies
If passed, the “anti-censorship” bill will be the first law legally requiring tech companies not to moderate their services in accordance with their own rules, and thus the first online regulation to limit the capacity of the tech sector to act against content that is obviously harmful but not explicitly illegal.[mfn]It is worth noting that the President of Brazil, Jair Bolsonaro, issued a decree on 6 September also aimed at limiting tech platforms’ capacity to moderate based on their own terms. Using a similar rhetoric to Poland’s anti-censorship arguments, the decree outlined the “just causes” for the removal of content (including illegal content and incitement to violence) and prohibited platforms from removing content beyond these “just causes”. The decree, issued with immediate effect, was to be ratified by Congress to become law, but was struck down by the Brazilian Senate, which stated that the decree was unilaterally changing Brazil’s internet legal framework. See: France24 (2021), Bolsonaro issues decree limiting social media moderation; and Camelo Janaina (2021), Senate strikes down Bolsonaro’s social media decree, The Brazilian Report. For more information about the online regulatory framework in Brazil, see our Online Regulation Series Handbook and updated blog on Brazil (November 2021).[/mfn] As such, the “anti-censorship” bill further fragments the global regulatory landscape, in which the compliance burdens placed on platforms moderating content and interactions on their services are both multiple and variable. By aiming to prevent platforms from applying their own moderation rules, Poland adds another layer of complexity to the question of online regulation and content moderation: the law directly contradicts the global trend in online regulation, which is generally characterised by increased pressure on platforms to moderate their services and act not only against illegal but more broadly against harmful content.
In practice, this means that platforms are not only required to comply with an increasing number of stringent national laws on online content, but are also required to abide by contradictory legal requirements. For instance, a platform operating in both the UK and Poland would have to remove “legal but harmful” content under the UK Online Safety Bill, whilst risking heavy fines for actioning the same content in Poland. Since Poland is a member of the EU, the platform would also have to comply with the EU Digital Services Act, under which platforms can be held liable if they have “actual knowledge” of illegal content on their services, with a user report qualifying as “actual knowledge”.[mfn]Poland’s anti-censorship bill thus goes against an attempt at a unified EU regulatory and enforcement mechanism under the Digital Services Act. Poland also fails to guarantee the application of the EU Charter of Fundamental Rights, particularly those provisions guaranteeing protection against hate speech. See: Pirkova (2021).[/mfn]