
Singapore

In October 2021, Singapore passed the Foreign Interference (Countermeasures) Act (FICA), which is intended to counter foreign interference in domestic affairs and includes provisions relating to the online sphere. The Act empowers the Ministry of Home Affairs to adjudicate on whether content is “hostile to the city state’s interest”, and to issue orders to social media and internet providers to block content assessed to be “hostile”. Online platforms must comply with blocking orders as well as with user information requests issued by the Singaporean authorities.[1] The FICA also grants the Minister of Home Affairs the power to ban apps in Singapore and to order the publication of “mandatory messages drafted by the authorities”.[2] The Ministry of Home Affairs is empowered to take one of these enforcement actions if the Minister “suspects or believes” that online communications have been carried out “on behalf of a foreign principal”.[3]

The FICA has been criticised by human rights organisations for undermining the right to freedom of expression and for threatening political opposition, independent media, and activists, since any involvement with foreigners could invite scrutiny from the Government.[4] A coalition of human rights and digital rights organisations – including Human Rights Watch, Access Now, Article 19 and the Wikimedia Foundation – has called on the Singaporean government to withdraw the FICA.

In 2019, Singapore passed the Protection from Online Falsehoods and Manipulation Act (POFMA), which addresses the spread of misinformation through correction and removal orders. You can find our complete analysis of the POFMA and of Singapore’s online regulatory framework in the dedicated entry in our Online Regulation Series Handbook (pp.62 – 64).

Pakistan

In November 2021, the Pakistani government revised the Social Media Rules – the Citizens Protection (Against Online Harm) Rules – promulgated in 2020 and effective as of 2021. When announced in 2020, the Rules were met with public opposition and criticism from digital rights activists. Tech companies, via the Asia Internet Coalition, announced that under the new Rules they would be “unable to operate in the country”.[5]

In one of the key revisions to the Rules, online platforms will now have 48 hours, instead of 24 hours, to comply with a removal or blocking order from the Pakistan Telecommunication Authority (PTA). However, platforms will still be required to comply with emergency requests within 12 hours, without the PTA having to specify the reason for the emergency.[6]

Shortly after the revised Rules were published, the Islamabad High Court (IHC) announced that it would review them against international standards.[7] The review is to involve different national and international stakeholders, including tech companies. The IHC adjourned the hearing on the Rules until 6 January 2022.[8] The IHC also enquired about the recent (subsequently lifted) ban on TikTok, with the IHC Chief Justice remarking that a ban on online platforms “was not a permanent solution and prima facie the Pakistan Telecommunication Authority misused the Prevention of Electronic Crimes Act” when banning TikTok.[9]

You can find our complete analyses of the Social Media Rules and Pakistan’s online regulatory framework in the dedicated entry in our Online Regulation Series Handbook (pp.65 – 69).

The Philippines

In December 2021, the Supreme Court of the Philippines issued a decision upholding most of the provisions of the Anti-Terrorism Act (ATA) enacted in July 2020. However, in a significant exception, the Supreme Court struck down parts of the definition of terrorism, arguing that the definition was too broad and could criminalise expressions of civil rights such as advocacy and protests.[10]

The ATA has been contested by legal experts and human rights advocates since it was first introduced in 2020. Michelle Bachelet, the UN High Commissioner for Human Rights, had also expressed concern that the law “could blur the distinction between criticism and criminality”.[11] Similar concerns had been raised by Amnesty International and the Committee to Protect Journalists. This contestation of the ATA led to a number of petitions being filed seeking to have the Act declared unconstitutional.[12] These petitions were the basis for the Supreme Court’s announcement that it would review the ATA.[13]

You can find our complete analysis of the ATA in our dedicated blog here.

India

In November 2021, an Indian parliamentary panel reviewing the Personal Data Protection Bill (introduced in 2019) recommended that social media platforms be treated as publishers and that a dedicated regulatory and oversight body be established. The panel recommended that the regulatory body be modelled on the Press Council of India, and that a mechanism be created whereby social media platforms could be held accountable for content from unverified accounts. These recommendations stem from the panel’s assessment that existing legislation on social media “hasn’t done enough in terms of regulation”. If adopted, they could make tech companies liable for user-generated content, thus removing the safe-harbour protection platforms currently enjoy in India.[14]

You can find our complete analysis of India’s online regulatory framework in the dedicated entry in our Online Regulation Series Handbook (pp.78 – 82).

Germany

The UN Human Rights Committee criticised Germany’s Network Enforcement Act (NetzDG) in a November 2021 report, arguing that the NetzDG “enlist[ed] social media companies to carry out government censorship, with no judicial oversight of content removal”.[15] The Committee highlighted the particular risks to freedom of expression online posed by the stringent requirements the NetzDG imposes on online platforms – including the requirement to remove “manifestly illegal” content within 24 hours of being notified of it.

As the Electronic Frontier Foundation highlighted in its coverage of the report, the UN’s expression of concern with the NetzDG is significant for freedom of expression online globally: since the NetzDG was enacted in 2017, laws and proposals containing similar content regulation provisions have been enacted or introduced in other jurisdictions, including France, India, the EU, Russia, and Turkey.

You can find our complete analysis of Germany’s NetzDG and of the human rights concerns it raises, as well as our analysis of global trends in online regulation and a comparison of the jurisdictions following these trends, in our Online Regulation Series Handbook.

The coalition agreement underpinning the new German government[16] includes multiple elements related to the online environment and digital rights, including a “right to encryption” and “a right to anonymity”.[17] Were those rights to be inscribed in German law, Germany would lead the way in protecting digital rights.

You can find our landmark report on assessing terrorist use of end-to-end encrypted (E2EE) services – including our recommendations on mitigation strategies for tech companies and policymakers, as well as our review of global legislation impacting E2EE – here.

France

At the time of publishing our Online Regulation Series Handbook in early July 2021, the French Parliament was reviewing two major bills on countering terrorism and “separatism”: the 2021 Counterterrorism and Intelligence Bill and the Endorsement of Respect for the Principles of the Republic and Counter-Separatism Bill. Both have since been passed into law, on 31 July and 24 August 2021 respectively.

The law on the Principles of the Republic and Counter-Separatism, commonly known as the “law against separatism”, has been presented by the French government as a key pillar of its strategy to counter Islamist radicalisation and terrorism. The original bill did not include provisions on online content; however, following the murder of Samuel Paty in October 2020,[18] the “Samuel Paty” article was added. This article penalises the malicious sharing of personal information online that endangers the lives of others. The law against separatism also penalises individuals who deliberately circumvent moderation techniques used to counter and delete content prohibited under French law, as well as those who directly incite, legitimise or praise terrorism, whether online or on messaging platforms.

The 2021 Counterterrorism and Intelligence law incorporates into France’s counterterrorism legislation, and thereby enshrines, certain elements of the emergency laws introduced following the violent Islamist terror attacks in 2015. The law most notably makes permanent the power provisionally granted to law enforcement to conduct “algorithmic analysis” of connection data and URLs provided by telecommunication operators.[19]

You can find our complete analysis of France’s online regulatory framework in the dedicated entry in our Online Regulation Series Handbook (pp.84 - 90).

United Kingdom

The UK Joint Committee on the Draft Online Safety Bill (OSB) published its report on the Bill on 14 December 2021. The UK government has two months to respond to the report.

The Committee offers four key recommendations to “strengthen the bill”:

  • “What's illegal offline should be regulated online” – that is, the criminal law should serve as the basis for regulating potentially harmful content online.

  • “Ofcom should issue binding Codes of Practice”,[20] which would assist online platforms in detecting and acting on prohibited content. These Codes are in addition to existing Codes on terrorism and child sexual abuse material. The Committee also underlines that Ofcom, in compliance with human rights legislation, should provide additional guidance on how platforms can safeguard freedom of expression.

  • “New criminal offences are needed” – specifically, the Committee recommends that the Law Commission’s proposed communications and hate crime offences be included in the OSB to criminalise harmful online activities.

  • “Keep children safe from accessing pornography” by requiring pornography sites to prevent children from accessing their content.

For a brief overview of the Committee’s report on the OSB, Tech Against Terrorism recommends reading the linked Twitter thread by online regulation expert Heather Burns (formerly of Open Rights Group), which summarises the key elements of the report.

You can find our complete analysis of the Online Safety Bill and of the UK’s online regulatory framework in the dedicated entry in our Online Regulation Series Handbook (pp.107 - 116).

You can find our submission to the Online Safety Bill Consultation here.

Turkey

One year after passing the Social Media Law (No. 7253), which most notably requires tech companies to have a formal presence in the country and to remove content within 48 hours, Turkey’s ruling party (AKP)[21] is working on a draft law to counter mis- and disinformation online.[22]

No draft of the bill has yet been published. However, there have been reports that the bill would criminalise the sharing of false information on social media with prison sentences of up to five years and that it would also establish a social media oversight body, similar to the Radio and Television Supreme Council (Turkey’s media regulator).[23]

In mid-July, Mahir Unal, the AKP’s deputy chair, announced that the party was considering legal measures to counter disinformation, stating that "[c]ombating disinformation is as important as fighting terrorism."[24] More recently, in December, President Erdogan reiterated this goal of countering disinformation, arguing that social media was “one of the main sources of threats to today’s democracy”.[25]

You can find our complete analysis of the Social Media Law and of Turkey’s online regulatory framework in the dedicated entry in our Online Regulation Series Handbook (pp.117 - 121).

United States

Both Democrats and Republicans continue to regularly call for online regulation in the US, and for amending Section 230 of the Communications Decency Act, which shields tech platforms from legal liability under federal law for user-generated content. Tech Against Terrorism recommends using Future Tense’s Section 230 tracker to follow all such proposals.

At the State level, Texas passed House Bill 20 (HB20) in September 2021 to counter “censorship” by social media platforms by prohibiting them from moderating content on their services. The Bill stipulates that platforms “may not censor a user, a user’s expression, or a user’s ability to receive the expression of another person”, and that they may only remove content under certain conditions, including if authorised to do so by federal law, or if the content constitutes a form of unlawful expression or directly incites criminal activity or violence.[26] However, HB20 was blocked on 1 December by a federal judge, who ruled the law unconstitutional and stated that “the government cannot regulate the editorial decisions made by online platforms about what content they host.”[27] The judge thereby reiterated that online platforms have the right to moderate content under the First Amendment of the US Constitution, as “they are not ‘common carriers’ that transmit speech without curation”.[28] This argument had also been raised by the Electronic Frontier Foundation, which sided with trade associations in seeking judicial review of HB20, arguing that platforms should not be forced to publish speech “they don’t agree with or don’t want to share with their users”.[29]

A similar State law in Florida – the Stop Social Media Censorship Act – would have prohibited social media platforms from banning political candidates and “journalistic enterprises”, and from removing their content. The law had been proposed in January following the suspension of former President Trump from several social media platforms.[30] The federal judge who blocked the law argued that this effort to rein in social media platforms was “too large” and “not a legitimate government interest”, and that it was discriminatory and potentially in violation of the tech companies’ free speech rights under the First Amendment.[31]

Texas’s and Florida’s attempts to counter perceived social media “censorship” are reminiscent of similar legislative proposals in Poland and Brazil aimed at prohibiting social media platforms from moderating content based on their own Terms of Service. Like the Polish and Brazilian proposals, both State laws were motivated by a sentiment that conservative voices online need to be protected from social media censorship.

[1] See: BBC News (2021), Singapore passes controversial law to counter foreign interference; and Campaign Asia (2021), Singapore's new foreign-interference law could impact social media, publishers.

[2] Human Rights Watch (2021), Singapore: Withdraw Foreign Interference (Countermeasures) Bill.

[3] Human Rights Watch (2021), Singapore: Withdraw Foreign Interference (Countermeasures) Bill.

[4] BBC News (2021), Singapore passes controversial law to counter foreign interference.

[5] Asia Internet Coalition (2020), AIC Submits Response to Pakistan’s Citizens Protection Rules (Against Online Harm).

[6] Anjum Usama (2021), The Revised Social Media Rules 2021: What do they Entail?, PhoneWorld.

[7] A first petition contesting the Rules had been submitted to the IHC in January 2021. Tech Against Terrorism (2021), The Online Regulation Series Handbook.

[8] Mustafa Onsa (2021), IHC Decides to Review New Social Media Rules, Phone World.

[9] Asad Malik (2021), IHC Appoints aides in social media rules case, DAWN.

[10] McCarthy Julie (2021), Philippines' high court upholds most of a terrorism law, but strikes down a key point.

[11] France24 (2021), Philippines anti-terrorism law 'threatens human rights' despite 'killer caveat' strikedown.

[12] Ul Khaliq Riyaz (2021), Philippines: Anti-terror act faces top court challenges, Anadolu Agency.

[13] See: Pazzibugan Dona Z. (2020), SC sets hearings on anti-terrorism law in September, Inquirer.net; and Al Jazeera (2020), Philippine court asked to annul Duterte-backed anti-terror law.

[14] See: Pradhan Bibhudatta (2021), India Lawmakers Weigh New Regulator to Oversee Facebook, Twitter, Bloomberg; and PYMNTS (2021), India Could Create New Regulatory Body for Social Media.

[15] Baghdasaryan Meri and Gullo Karen (2021), UN Human Rights Committee Criticizes Germany’s NetzDG for Letting Social Media Platforms Police Online Speech, Electronic Frontier Foundation.

[16] Following the 2021 elections, the German Government comprises an alliance of the country’s Social Democrats, Greens and Liberals. See: Gehrke Laurenz (2021), German parties seal coalition deal to make Olaf Scholz chancellor, Politico.

[17] TutaNota (2021), Germany: New government plans ‘right to encryption’.

[18] Paty’s murder was preceded by an online harassment campaign, which is being considered in the criminal investigation. See: Ouest France (2020), Assassinat de Samuel Paty. Suspect, gardes à vue, note du renseignement… Où en est l’enquête ?; and Devillier Nathalie (2020), Lynchage de Samuel Paty sur les réseaux sociaux : comment réguler les algorithmes de la haine ?, The Conversation.

[19] The provision on algorithmic analysis has been criticised by digital rights groups, including La Quadrature du Net, for its breadth. Critics argue that the lack of a specified scope of application could pave the way for mass surveillance of internet connections and usage in France. In October 2020, the Court of Justice of the European Union had already cautioned against France’s practice of requiring user connection data to be kept for a year for counterterrorism purposes. The French Conseil d’État later re-asserted that this provision was justified for counterterrorism purposes. See: Jannic-Cherbonnel Fabien (2021), Projet de loi contre le terrorisme : cinq questions sur la surveillance par algorithme, une technique de renseignement critique, France Info; Le Monde (2020), La justice de l’UE s’oppose à la collecte massive des données de connexions Internet et téléphoniques par les Etats; and Charney Amelie (2021), Le Conseil d’État approuve la conservation des données de connexion... en posant quelques limites, 01Net.com.

[20] Ofcom is the UK communications regulator. Ofcom oversees the application of new regulations related to online platforms, both under the Interim Approach and the draft Online Safety Bill.

[21] The Justice and Development Party (AKP).

[22] Hacaloglu Hilmi and Colak Umut (2021), Media Groups Voice Concern about Turkey's Planned Social Media Law, VOA News.

[23] Hacaloglu Hilmi and Colak Umut (2021), Media Groups Voice Concern about Turkey's Planned Social Media Law, VOA News.

[24] Hacaloglu Hilmi and Colak Umut (2021), Media Groups Voice Concern about Turkey's Planned Social Media Law, VOA News.

[25] Al Jazeera (2021), Turkey’s Erdogan says social media a ‘threat to democracy’.

[26] Villasenor John (2021), Texas’ new social media law is blocked for now, but that’s not the end of the story, Brookings.

[27] Rathi Mukund (2021), Victory! Federal Court Blocks Texas’ Unconstitutional Social Media Law, Electronic Frontier Foundation.

[28] Rathi Mukund (2021), Victory! Federal Court Blocks Texas’ Unconstitutional Social Media Law, Electronic Frontier Foundation.

[29] Rathi Mukund and Greene David (2021), EFF to Federal Court: Block Unconstitutional Texas Social Media Law, Electronic Frontier Foundation.

[30] Alba Davey, Koeze Ella, and Silver Jacob (2021), What Happened When Trump Was Banned on Social Media, The New York Times.

[31] Morrison Sara (2021), Florida’s social media free speech law has been blocked for likely violating free speech laws, Vox.