Poland’s approach to online content has until recently focused on blocking terrorist content under the 2016 Anti-Terrorism Act. However, the so-called “anti-censorship bill” introduced by Law and Justice[mfn]Prawo i Sprawiedliwość[/mfn] (the incumbent national-conservative party) could radically change the practice of content moderation in Poland. The law would both require tech companies to swiftly remove “illegal” content, including content that is not otherwise considered criminal in Poland, and prevent them from moderating their platforms based on their own Terms of Service.

Poland’s regulatory framework:

  • In 2018, the Polish Government and Facebook signed an agreement on user appeals. Under this agreement, Facebook had to establish a point of contact for Polish users to be able to request an additional review of their appeal requests (for removed content) should an appeal be dismissed by Facebook at first instance.

Proposed legislation:

  • The proposed “Act on the Protection of Freedom of Speech on Social Networking Sites”, known as the “anti-censorship bill”.
    • In December 2020, the Polish Government announced a new social media law aimed at “protecting the constitutional right to freedom of expression on social media”, [mfn]Reporters Without Borders (2021), Poland’s new social media law puts freedom of expression at risk, RSF warns.[/mfn] and at protecting users “regardless of the filtering mechanisms” used by social media platforms. In practice, the law would prevent social media platforms from removing content or suspending accounts by exclusive reference to their own Terms of Service and Community Guidelines.

    • The proposed law was brought back to the forefront of legislative discussion in Poland in January 2021 following the decision by several online platforms to suspend the accounts of former US President Donald Trump during and after the storming of the US Capitol on 6 January 2021. A first draft of the law was published on 15 January 2021.

    • The bill was then added to the government’s legislative agenda in early October 2021 and is expected to be passed by the end of 2021.

  • Proposed Amendment to the Act on National Cybersecurity System, 2021.[mfn]Gad-Nowak Magdalena and Wnukowski Marcin (2021), Polish government to pass law that will allow it more control over the Internet content and legitimize blocking access to certain websites, The National Law Review.[/mfn] The Polish Government is drafting an amendment to its National Cybersecurity legislation, which would empower the Polish Government to remove content and block websites. If passed, the amendment to the Cybersecurity legislation would allow the Minister for IT to issue a “security order” blocking access to certain IP addresses or URL names in the event of a “critical incident”. Such an order would come into force immediately and remain in force for a period of two years thereafter.

Key takeaways for tech companies:

Anti-Terrorism Act and Surveillance Law:

  • Under the 2016 Anti-Terrorism Act, Poland’s judicial authorities and security agencies can request any internet service provider (ISP) to block access to terrorist and terrorist-related content:
    • In “urgent cases of terrorist content”, the Chief of the Internal Security Agency (ISA) can request any given internet content to be blocked in Poland. The blocking order is to be reviewed 5 days later by a court to verify whether it was justified.[mfn]Panoptykon Foundation (2016), New Polish Anti-terrorism Law: every foreigner is a potential threat.[/mfn]

    • For other content related to an event of “terrorist nature”, a court can order ISPs to block access to a “specified IT Data or IT and communication services” for up to 30 days.

    • On receiving a blocking order, whether from the Chief of the ISA or a court, ISPs have to comply “immediately”.

  • Under the same law, the police can disable all telecommunications in the country if a state of emergency is declared. Neither what constitutes a “state of emergency” in this instance nor the scope of “telecommunications” is clearly defined; shutdowns as a security instrument are nonetheless permitted in Polish law.[mfn]Rydzak Jan (2016), Now Poland’s Government Is Coming After the Internet, Foreign Policy.[/mfn]

  • The 2016 Surveillance Law allows the police to access any physical or electronic data without reference to an ongoing investigation.[mfn]Ibid.[/mfn]

“Anti-censorship” law

  • The stated aim of the law is to safeguard users’ right to information and freedom of expression on social media platforms. The law would make it illegal for social media platforms with more than 1 million registered users in Poland to remove content not explicitly prohibited by the law itself.[mfn]See: Poland’s Civic Ombudsman, Marcin Wiącek, (2021), Projekt ustawy o ochronie wolności słowa w internecie. Wątpliwości Marcina Wiącka wobec niektórych propozycji MS, Biuletyn Informacji Publicznej RPO; and Walker Shaun (2021), Poland plans to make censoring of social media accounts illegal, The Guardian.[/mfn]

  • The law includes three key pillars:
    1. facilitated reporting of “illegal” content;
    2. appeal mechanisms and the Freedom of Expression Council;
    3. transparency reporting.

  • The bill lists four categories of “illegal” content that users can report to tech platforms:
    1. content violating personal rights;
    2. disinformation;
    3. criminal content;
    4. content offending decency.[mfn]For instance, by praising violence.[/mfn]

  • Criminal content relates to content prohibited under the Polish Penal Code, or “praise [for] or incite[ment] to commit” criminal acts. This includes “terrorist offences and offences against the state”.

  • The bill will allow for a streamlined procedure for Polish authorities to issue an order immediately blocking access to content identified by the authorities as criminal, and requiring the platform to provide the information necessary to identify the user who posted the content.

  • In line with its stated aim of protecting Polish users from tech sector “censorship”, the law will compel tech companies to introduce several appeal and redress mechanisms:
    • Platforms will have to establish a direct complaint or “internal review” mechanism for users to appeal their content or account being blocked or otherwise restricted. The same complaint mechanisms should also allow users to report illegal content to the platform.

    • Platforms will have 48 hours to assess and respond to users’ appeal requests and content reports. Failing to do so will lead to financial penalties – see below on compliance and fines.

    • Users dissatisfied with the platform’s handling of their complaints (whether user appeal or content report) will be able to submit their case to the “Freedom of Expression Council”.

    • The “Freedom of Expression Council” will be a tribunal of final appeal for users whose content or account has been removed by a platform, as well as for users whose content report was rejected. The Council, comprising five members, will assess within 7 days whether the content is permissible or prohibited under the law, including whether it violates decency or constitutes disinformation.[mfn]Poland’s Civic Ombudsman, Marcin Wiącek, (2021).[/mfn] Members will be appointed by the lower house of Parliament for a 6-year term,[mfn]Law and Justice (“Prawo i Sprawiedliwość”, known as PiS), the right-wing conservative party currently in power in Poland, holds the majority of seats in the lower house of Parliament.[/mfn] and the process for users to submit a complaint to the Council is to be conducted mostly online.

    • Platforms will have 24 hours to comply with any ruling to restore content or accounts, or face a fine.[mfn]50 mn Polish Zlotys.[/mfn]

    • If content has already been assessed as legal by the Council, the platform can no longer restrict access to it should it be re-uploaded by another user.

    • The Council is expected to proceed in closed sessions and will not take evidence other than that submitted by the user and the platforms (or its representative(s) in the country). This means that no expert or third-party opinions will be heard by the Council in its review of a content removal.[mfn]Gad-Nowak and Wnukowski (2021).[/mfn]

    • Platforms will have to appoint one to three representatives based in Poland who will be responsible for handling user complaints and liaising with the Polish authorities, including the Freedom of Expression Council. Information about these representatives, and relevant contact details, must be published on platform websites.

  • The bill will require platforms to publish online both their moderation policies and their rules for handling complaints. Platforms will also have to submit yearly transparency reports on their efforts to counter illegal content and disinformation on their services. Platforms receiving more than 100 complaints within one calendar year will be required to publish transparency reports every 6 months to detail how they handle complaints. The reporting template will be prepared directly by the Ministry of Justice, and the reports are to be published on platform websites as well as sent to the Office of Electronic Communications (“Urząd Komunikacji Elektronicznej” or UKE).[mfn]Pirkova Eliska (2021), Regulating online platforms: how EU Member States are undermining the Digital Services Act, Access Now[/mfn]

  • Beside these three pillars of the law, platforms will also be required to:
    • Disclose user information when required to do so by claimants bringing suit for online defamation, even in response to ‘fishing expeditions’.[mfn]For instance, if the defendant posted the content anonymously or using a pseudonym.[/mfn] If a company refuses to provide the defendant’s information, it will have to pay a fine.

    • Send a representative to attend training courses on the bill and on the legal issues raised by complaints. The training courses are to be organised by the UKE, the regulatory agency in charge of telecommunications and postal activities. Platforms will also face fines if failing to do so.

  • The UKE will be in charge of overseeing compliance with the law and will exercise supervisory jurisdiction over complaint and internal review mechanisms, should it consider a platform insufficiently compliant.

  • Non-compliance with any of the provisions included in the bill will be penalised with a fine of between $13,000 and $13 million.

Tech Against Terrorism's analysis and commentary

An “anti-censorship” approach to moderation – centralised power and control over tech platforms

Since its first reading in December 2020, and as demonstrated by its formal and informal titles, the “anti-censorship” bill has been championed by the Polish Government as a guarantee of free expression and accuracy online, set in opposition to the tech sector’s “mass content blocking in cyberspace”.[mfn]Fraser Malgorzata (2021), Wraca projekt ustawy o ochronie wolności słowa w internetowych mediach społecznościowych, CyberDefence24.[/mfn] According to the Ministry, when tech platforms enforce their content moderation principles they engage in “unlawful activities”.

However, despite its stated objective of protecting freedom of expression, the framing of the bill by the Polish Government as “anti-censorship” represents a significant incursion of the culture war into the online regulatory landscape, casting the law as a protection of conservative users against the censorship of “leftist” tech platforms. Sebastian Kaleta, Poland’s Deputy Justice Minister, stated that the law was meant to protect conservative views online: “We see that anonymous social media moderators often censor opinions which do not violate the law but are just criticism of leftists' agenda”.[mfn]Brennan David (2021), Big Tech Must Be Reined in With Anti-Censorship Rules, Polish PM Says. Newsweek.[/mfn]

The law, introduced by Law and Justice (PiS), the ruling right-wing conservative party, was brought forward following decisions by certain tech platforms, including Facebook and Twitter, to suspend the accounts of former US President Donald Trump during and after the storming of the US Capitol on 6 January. This decision was criticised by members of PiS and of the Polish Government, including Sebastian Kaleta, Poland’s Deputy Justice Minister, who attacked the “preventative censorship of the U.S President”.[mfn]Charlish Alan and Wlodarczak-Semczuk Anna (2021), Poland targets big tech with anti-censorship law, Reuters.[/mfn] The deplatforming of President Trump led the Polish Government to declare, in the form of a Facebook post by Polish Prime Minister Mateusz Morawiecki, that the tech sector’s “censorship is not and cannot be accepted”. Representatives of the Polish Government and of Poland’s ruling right-wing coalition have since continued to frame the bill as a necessary tool to counter tech sector censorship of conservative views online. Polish Justice Minister Zbigniew Ziobro[mfn]A former member of PiS now leading Solidarna Polska (United Poland), a “hard-right wing junior coalition partner in the Polish Government”. See: Easton Adam (2021), Poland proposes social media 'free speech' law, BBC News.[/mfn] stated that tech companies were limiting freedom of speech, claiming that the “victims of ideological censorship” were representatives of different political groups active in Poland.

This intrusion of the culture war presents a real risk that terrorists and violent extremists (TVE) will exploit these revised parameters of acceptable online speech in Poland. TVE actors often attempt to circumvent online counterterrorism efforts by posting “sanitised” content within the bounds of legality.[mfn]Terrorists and violent extremists are generally fully aware of platforms’ content moderation rules and enforcement and will sometimes try to circumvent such rules by posting content within the limits of what is acceptable on a platform to avoid deplatforming. We can easily imagine the same thing happening with the “anti-censorship” bill, with terrorist and violent extremist actors sharing content within the remit of legality on social media platforms subject to the bill.[/mfn] By prohibiting platforms from moderating content in accordance with their own terms of service, the bill also risks preventing platforms from removing violent extremist content that exists in a legal grey area. When assessing such content, platforms often have to go beyond existing legislation to meet public expectations of keeping users safe from harmful content, in particular when such content takes the form of hate speech or incitement to hatred.

Concerns over freedom of expression and state control on online speech

Poland’s Civic Ombudsman, Marcin Wiącek, raised doubts about whether the law would effectively protect users’ fundamental rights, including the right to freedom of expression. Wiącek criticised the establishment of a “Freedom of Expression Council”, which he considers a form of state interference with online content, and argued that the Council will lead to “top-down determinations” of what constitutes acceptable discussion on the internet.[mfn]Poland’s Civic Ombudsman, Marcin Wiącek, (2021).[/mfn] Because the Council’s decisions are to be made solely on the basis of the information provided by the user raising the complaint and the platform,[mfn]Wingfield Richard (2021), Poland: Draft Law on the protection of freedom of speech on online social networking sites.[/mfn] without hearing from third parties, including experts or other users that might have been wronged by the content, such determinations are likely to be both narrow and unaccountable.

Concerns over the Council’s decision-making power and the risk it poses to freedom of expression have also been raised by Richard Wingfield, Head of Legal at Global Partners Digital, in relation to the additional forms of “illegal content” included in the law. In effect, the law creates a legal regime for online content differentiated from the offline one, whereby content that constitutes disinformation or violates decency is illegal online despite not otherwise generating criminal liability in Polish law. Despite the framing of the law as aiming to protect freedom of expression online, the vagueness of the standards applicable to illegal content that is not otherwise criminal risks creating an environment where online speech is assessed according to the government’s own standards, and where a 5-member Council elected by the Parliament, and thus by the political party in power, is to decide on what constitutes acceptable speech online.

The “anti-censorship” bill also presents risks to freedom of expression online by allowing little time for tech companies to review reports of illegal content. By requiring tech companies to review reports of illegal content and other user complaints within 48 hours, Poland follows the trend of online regulation that fails to account for the fact that correctly assessing reports of illegal content requires time and expertise if tech companies are not to err on the side of over-removal to avoid penalties. This is even more of a risk in Poland, as the bill states that only up to three representatives can make a determination on the reports received. Not only will tech companies lack the time to undertake adequate adjudication, but they will also be limited in the resources they can deploy to do so.[mfn]Ibid.[/mfn]

As such, the “anti-censorship” bill is contradictory in what it requires of tech platforms. On the one hand, it risks pushing platforms to over-remove content, including content that is not illegal, by requiring them to make rushed decisions within 48 hours or face fines. On the other hand, it incentivises them not to remove content, since a misjudgement on the part of the platform could be brought before the Freedom of Expression Council and potentially also lead to a fine.

A fragmented regulatory landscape with conflicting requirements for tech companies

If passed, the “anti-censorship” bill will be the first law legally requiring tech companies not to moderate their services in accordance with their own rules, and thus the first online regulation to limit the capacity of the tech sector to act against content that is obviously harmful but not explicitly illegal.[mfn]It is worth noting that the President of Brazil, Jair Bolsonaro, issued a decree on 6 September also aimed at limiting tech platforms’ capacity to moderate based on their own terms. Using a similar rhetoric to Poland’s anti-censorship arguments, the decree outlined the “just causes” for the removal of content (including illegal content and incitement to violence) and prohibited platforms from removing content beyond these “just causes”. The decree, issued with immediate effect, was to be ratified by Congress to become law, but was struck down by the Brazilian Senate, which stated that the decree was unilaterally changing Brazil’s internet legal framework. See: France24 (2021), Bolsonaro issues decree limiting social media moderation; and Camelo Janaina (2021), Senate strikes down Bolsonaro’s social media decree, The Brazilian Report. For more information about the online regulatory framework in Brazil, see our Online Regulation Series Handbook and updated blog on Brazil (November 2021).[/mfn] As such, the “anti-censorship” bill further fragments the global regulatory landscape, in which the burdens of compliance on platforms when moderating content and interactions on their services are both multiple and variable. By aiming to prevent platforms from applying their own moderation rules, Poland adds another layer of complexity to the question of online regulation and content moderation; the law directly contradicts the global trend in online regulation, which is generally characterised by increased pressure on platforms to moderate their services and act not only against illegal but more generally against harmful content.

In practice, this means that platforms are not only required to comply with an increasing number of stringent national laws on online content, but are also required to abide by contradictory legal requirements. For instance, a platform operating in both the UK and Poland would have to remove “legal but harmful” content under the UK Online Safety Bill, whilst risking heavy fines for actioning the same content in Poland. Since Poland is a member of the EU, the platform would also have to comply with the EU Digital Services Act, under which platforms can be held liable if they have “actual knowledge” of illegal content on their services, with a user report qualifying as “actual knowledge”.[mfn]Poland’s anti-censorship bill thus goes against an attempt at a unified EU regulatory and enforcement mechanism under the Digital Services Act. Poland also fails to guarantee the application of the EU Charter of Fundamental Rights, particularly those provisions guaranteeing protection against hate speech. See: Pirkova (2021).[/mfn]