The Indonesian government began regulating online content in 2008, when Law No.11 on Electronic Information and Transactions was passed, and, according to Freedom House’s 2021 Freedom on the Net report on Indonesia,[mfn]Freedom House (2021), Freedom on the Net: Indonesia.[/mfn] routinely requires online content to be removed. In 2021,[mfn]As of April 2021[/mfn] the Ministry of Communications and Informatics (Kominfo) reportedly had 20,453 pieces of online terrorism content removed.[mfn]Freedom House (2021)[/mfn] Until recently, however, platforms were exempt from legal liability for user-generated content; this changed with the passing of Ministerial Regulation 5 (MR5), which holds platforms legally liable for such content and makes safe harbour protection conditional on platforms’ compliance with legal requirements and cooperation with the Indonesian authorities. Indonesia’s restrictions on online content should also be understood in the context of the country’s counterterrorism provisions and the government’s use of internet throttling, shutdowns, and social media blocking at times of political unrest. This broader environment has led human rights organisations to raise concerns about the risk of infringing on fundamental rights, and on the right to freedom of expression in particular.
This entry covers Indonesia’s regulatory framework, the relevant national bodies, key takeaways for tech platforms, Law No.11 of 2008 and its 2016 amendment, and Ministerial Regulation 5.

Tech Against Terrorism’s analysis and commentary
The MR5 risks flouting international human rights standards and best practice
The MR5’s broad terms and its stringent requirement to remove content flagged by Kominfo have led experts and civil society groups to warn that the regulation risks putting Indonesia’s online regulatory framework in direct opposition to international human rights standards and threatens freedom of expression online.
Like other online regulations analysed in the Online Regulation Series, including those of the UK and Poland, the scope of content prohibited under the MR5 is broad and goes beyond what is normally considered illegal under domestic law, covering content that can lead to “public unrest and disturbance of public order” or that provides instruction on accessing prohibited content (for example, by means of a VPN). In so doing, the MR5 creates a differentiated legal regime for online speech and, according to the Global Network Initiative (GNI), “contradict[s] international best practice” by creating a double standard of “legality”.[mfn]Global Network Initiative (2021), GNI Expresses Concerns About and Calls on Indonesia to Reconsider the ‘MR5’ Regulation.[/mfn]
Concerns relating to international best practice have also been raised with regard to the MR5’s lack of a precise definition of what constitutes prohibited content. This imprecision gives the Ministry the power to decide what should be removed for provoking “public unrest and disturbance of public order”. As the GNI outlined in its analysis of the law, and as Tech Against Terrorism raised on multiple occasions throughout the first Online Regulation Series, the determination of limits to freedom of expression online should be undertaken by an independent judicial authority, not by an executive authority charged with enforcing the law, in line with international human rights standards. The MR5’s incompatibility with those standards is compounded by explicit statements from the Indonesian government that platforms should comply with Kominfo’s orders to remove prohibited content even where the content is otherwise considered legal under international human rights law.[mfn]Rodriguez (2021)[/mfn]
The broadness and vagueness of the MR5’s prohibited content category should also be understood in the context of the already broad definition of terrorism in Indonesia’s counterterrorism framework, Law No.15 of 2003, as amended in 2018. When the counterterrorism law was amended, human rights organisations, including Amnesty International[mfn]Global Network Initiative (2021)[/mfn] and Human Rights Watch,[mfn]Human Rights Watch (2018)[/mfn] expressed concerns that the broad scope of the law could be used to restrict freedom of expression and to target political dissent as terrorist activity. The GNI’s analysis of the MR5 underlined how the broad definitions of terrorism in Law No.15, combined with the stringent requirements of the MR5, “are usually a recipe for overbroad content removal and other unintended consequences”.[mfn]Global Network Initiative (2021)[/mfn]
The criminalisation of providing instruction on how to access prohibited content also undermines the right to seek information. As underlined in Article 19’s analysis of the law, circumvention tools can be used for legitimate reasons, notably to protect one’s right to privacy online. Article 19 also references the report by the UN Special Rapporteur on Freedom of Opinion and Expression on encryption and anonymity in the digital age, which states that both encryption and anonymity should be protected to ensure the privacy and security necessary for freedom of expression online, and that limitations on either should therefore be necessary, proportionate, and legitimate, in line with international human rights standards. According to Article 19, the MR5’s broad scope and significant sanctions are in breach of those standards.[mfn]Article 19 (2021)[/mfn]
Unclear legal liability scheme for ISPs’ employees
The MR5’s requirement for tech companies to establish a Point of Contact (PoC), responsible for content removal and data access, creates an unclear liability framework in which platforms’ employees may acquire individual liability for corporate actions. Article 19’s analysis of the MR5 details the human rights concerns associated with the appointment of a local employee responsible for government requests and stresses that individuals so appointed may “face heightened risk[s] of reprisal or judicial harassment” and may therefore engage in self-censorship or proactive removal to obviate government pressure.[mfn]Article 19 (2021)[/mfn]
In the first edition of the Online Regulation Series Handbook, Tech Against Terrorism warned against the introduction of legal liability for platforms’ employees, which risks criminalising those acting against the dissemination of terrorist content rather than those sharing such content. Concerns over legal liability for platforms’ employees are also shared by other digital rights organisations, including the Global Network Initiative.[mfn]Global Network Initiative (2021)[/mfn]
The MR5 compromises the security and privacy of end-to-end encryption
As raised in our landmark report, “Terrorist Use of E2EE: State of Play, Misconceptions, and Mitigation Strategies”, Tech Against Terrorism cautions against government regulations that require tech platforms to modify their end-to-end encryption (E2EE) systems and processes, or that could weaken encryption by mandating tech companies to introduce monitoring tools.
By applying indiscriminately to all types of online services, and to public and private communications alike, the MR5 threatens online security and privacy through its requirement that tech companies monitor encrypted communications in order to counter the dissemination of prohibited content.
Requirements for platforms to grant law enforcement access to electronic data and systems (both for monitoring and oversight purposes) threaten encryption, because it is impossible for tech companies offering E2EE to comply with such provisions: they cannot access, and therefore cannot hand over, the content of encrypted communications. Given that tech platforms risk liability for user-generated content if they do not cooperate with the Indonesian authorities, this poses a direct threat to E2EE services in Indonesia and, by extension, to the online security and privacy of Indonesians; tech companies will have to modify their systems to comply with the law, presumably by abandoning E2EE, or withdraw from Indonesia entirely.
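To illustrate why such access requirements cannot be met by an E2EE provider, the minimal sketch below (written in Python with the PyNaCl library; the keys, message, and relay step are purely illustrative, and real messengers use far more elaborate protocols) shows that decryption keys exist only on users’ devices, so a server relaying messages holds nothing it could hand over:

```python
from nacl.public import PrivateKey, Box  # pip install pynacl

# Each user generates a key pair on their own device; private keys never
# leave the device and are never known to the platform.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts directly to Bob's public key.
sender_box = Box(alice_key, bob_key.public_key)
ciphertext = sender_box.encrypt(b"hello bob")

# The platform's server only ever relays ciphertext. It holds no private
# key, so it cannot decrypt the message and cannot satisfy an order to
# produce message content.
relayed = ciphertext

# Only Bob, holding his own private key, can decrypt.
receiver_box = Box(bob_key, alice_key.public_key)
assert receiver_box.decrypt(relayed) == b"hello bob"
```

Under this design, the only ways to satisfy a data-access order are to change the design itself, for example by escrowing keys or scanning messages on the client before encryption, which is precisely the weakening of E2EE that Tech Against Terrorism warns against.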
Tech Against Terrorism raised similar concerns about E2EE with regard to other legislation covering both private and public channels of communication in the first edition of the Online Regulation Series Handbook, notably in the case of Singapore (pp. 62–64), which was the first country to pass online regulation applying to all types of online services.
Lack of consideration for tech sector diversity and increased reliance on automated tools
Indonesia’s online regulatory framework, and in particular MR5, applies to all online platforms, regardless of their specific offering and resources. As Tech Against Terrorism has previously noted in its analysis of online regulation, indiscriminately applying regulation with stringent practical requirements to smaller and larger platforms alike risks punishing smaller and newer platforms for lacking resources instead of providing them with the support needed to counter terrorist material and other illegal content.
The MR5 imposes an “obligation of results”: platforms must remove or block access to all prohibited content and ensure that their services cannot be used to facilitate its dissemination, with little practical guidance on how to achieve this. Such a mandate strongly incentivises ISPs to rely on automated content moderation tools. As Tech Against Terrorism has previously highlighted in the Online Regulation Series Handbook, most automated tools currently lack the capacity to comprehend content and require human supervision to avoid excessive takedowns. Increased reliance on automated moderation thus raises the risk of false positives, in which legal content is taken down in error, and raises questions about accountability for removal decisions. The use of automated solutions to detect and remove terrorist content is also not straightforward in practical terms: these solutions are no substitute for reasoned consensus on what constitutes a terrorist organisation, and their determinations must be informed by systematic proscriptions and designations by domestic governments and international organisations.
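To make the false-positive risk concrete, here is a deliberately naive keyword filter of the kind that blanket removal obligations encourage. The blocked terms and example posts are hypothetical, and production moderation systems are more sophisticated, but the underlying failure mode is the same:

```python
# A context-blind keyword filter. BLOCKED_TERMS and the example posts
# are illustrative only, chosen to show the failure mode.
BLOCKED_TERMS = {"attack", "bomb"}

def is_flagged(text: str) -> bool:
    """Flag any post containing a blocked term, with no notion of context."""
    tokens = text.lower().split()
    return any(term in tokens for term in BLOCKED_TERMS)

posts = [
    "join the attack tomorrow",                        # incitement: true positive
    "survivors recall the bomb attack in new report",  # journalism: false positive
]

for post in posts:
    print(f"flagged={is_flagged(post)}  {post!r}")

# Both posts are flagged. A filter that cannot read context cannot tell
# incitement from reporting or counter-speech, hence the need for human review.
```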
To learn more about the risks posed by a lack of consideration for smaller platforms and the increased reliance on automated content moderation tools, see Section 1 of the Handbook on the State of Online Regulation (pp. 13–29).
To learn more about automated tools to counter terrorist use of the internet, the existing challenges, and recommendations, see Tech Against Terrorism’s report, “Gap Analysis and Recommendations for deploying technical solutions to tackle the terrorist use of the internet”, drafted by Tech Against Terrorism as chair of the Global Internet Forum to Counter Terrorism’s Technical Approaches Working Group.