
THE ONLINE REGULATION SERIES | TANZANIA

Written by Adam Southey | Nov 25, 2021 3:59:01 PM

Tanzania began regulating online content in 2015 with the Cybercrimes Act, which delineates what constitutes prohibited online content. Tanzania’s regulatory framework has recently been consolidated by the 2020 Online Content Regulations, which replace the 2018 Regulations, expand the scope of prohibited online content, and reduce the deadline for platforms to remove prohibited content from twelve hours to two hours.

Tanzania’s regulatory framework

Relevant national bodies

Key takeaways for tech companies

2020 Online Content Regulations

  • The 2020 Regulations cover all online content accessible by the public in Tanzania,[mfn]The previous 2018 Regulations applied to content shared both publicly and privately. See: Taye Berhan (2020), Internet censorship in Tanzania: the price of free expression online keeps getting higher, AccessNow.[/mfn] but exempt content transmitted via private communications.[mfn]A&K Africa Legal Network (2020)[/mfn] The Regulations apply to all users, internet service providers, and online content service providers (OCSPs), including blogs, websites, public accounts and instant messaging tools.
  • It is unclear whether and how the 2020 Regulations apply to foreign platforms hosting content accessible from within Tanzania, or to “online content hosts”.[mfn]Ibid.[/mfn] However, content hosting platforms are required to have a “Code of Conduct” for hosting content and to ensure the removal of prohibited content when notified by the Tanzania Communications Regulatory Authority (TCRA) or by an affected party. In its assessment of the law, the law firm Anjarwalla & Khanna (A&K) concludes that this requirement on domestic content hosting platforms is equally applicable to foreign platforms, which could therefore be held liable for prohibited content if they fail to comply with removal notifications.[mfn]A&K Africa Legal Network (2020)[/mfn]
  • The 2020 Regulations prohibit various categories of online content and are broader in scope than the 2018 Regulations. Prohibited online content now includes hate speech and content that “advocates hate propaganda”, as well as content that “may threaten national security or public health and safety”, which includes “circulating information and statements with regards to possible terrorist attacks” and “making available instructions and guidance on bomb-making”. The 2020 Regulations require internet service providers and OCSPs to act on prohibited content immediately by notifying the user who shared it; if the user fails to remove the content within two hours, platforms must block that user’s access to the service. Platforms must also remove illegal content notified to them by the TCRA within two hours.
  • The 2020 Regulations also require OCSPs to ensure that content on their services is safe and lawful, including by using moderation tools to proactively filter content and by taking corrective measures when necessary.[mfn]Media Council of Tanzania (2020).[/mfn]
  • In addition to content removal, the 2020 Regulations instruct platforms to make their content policies or guidelines publicly available, and to account for the trends and cultural sensitivities of the general public.
  • Any user must be able to submit a complaint alerting a service provider to prohibited content. The OCSP must resolve the complaint within twelve hours of receiving it; if it fails to do so, the user can refer the complaint to the TCRA within 30 days.[mfn]A&K Africa Legal Network (2020)[/mfn]
  • Service providers’ legal liability is unclear under the 2020 Regulations. In its interpretation of the law, A&K argues that legal liability for failing to remove unlawful content will “likely only attach after a service provider fails to comply with an order from a competent authority, such as the TCRA, rather than a user.”
  • The 2020 Regulations require OCSPs to obtain a licence from the TCRA to provide “online content services”, with different licences available according to the main types of content enabled or provided by the platform.
  • The 2020 Regulations do not include penalty provisions specific to service providers that fail to comply with the legal requirements. However, the Regulations stipulate that any person contravening the provisions will have to pay a fine of at least approximately $2,172[mfn]Five million Tanzanian Shillings.[/mfn] and/or face a term of imprisonment of not less than twelve months.[mfn]The steep penalties, including the possibility of a prison sentence, have been criticised by digital rights advocates. See: Taye (2020)[/mfn]

Cybercrimes Act of 2015

  • The Cybercrimes Act prohibits certain online content, including racist and xenophobic material or insults, as well as content that “incites, denies, minimises, or justifies” acts of genocide or crimes against humanity. Legal liability, and thus the penalty for sharing prohibited content, attaches to the user creating or sharing the content rather than to the service provider.
  • The Act stipulates that hosting providers[mfn]Defined as platforms “provid[ing] an electronic data transmission service by storing information provided by a user of the service”.[/mfn] are not liable for user-generated content shared on their services. However, this safe harbour protection is conditional on the platform being unaware of prohibited content on the service, and on it removing or blocking access to content in compliance with an order received from a court or other competent authority.
  • The Act envisages that anyone should be able to report prohibited content to a platform.

Tech Against Terrorism’s analysis and commentary

Risks for digital rights

As with many of the online regulations analysed in the Online Regulation Series Handbook, the 2020 Regulations, and the 2018 Regulations they supersede, have been criticised by digital rights advocates and legal experts for the risk they pose of negatively impacting users’ digital rights.

Vague definitions of prohibited content categories, a short removal deadline, and compulsory automated moderation solutions - in particular filtering tools - are all key trends in online regulation that have been criticised for promoting an overabundance of caution, whereby platforms over-remove and block legal or non-violent online content in order to comply with stringent legal requirements. According to AccessNow’s analysis of the 2020 Regulations, this risk of over-removal in turn endangers freedom of access to information. In AccessNow’s assessment, the 2020 Regulations will lead online users to limit the information they share online.[mfn]Taye (2020)[/mfn]

With a removal deadline of only two hours, the 2020 Regulations are amongst the most severe online regulations, barely allowing tech companies time to detect and remove prohibited content. Viewed globally, the most stringent legal requirements to act on prohibited content at speed are usually limited to responding to requests from competent authorities and/or to certain categories of content, such as terrorist content.[mfn]The one-hour removal deadline included in the EU Regulation on addressing the dissemination of terrorist content online is limited to requests to remove terrorist content sent by competent authorities; similarly, Indonesia and Turkey require platforms to remove prohibited content within, respectively, two and four hours, when the request is from a competent authority.[/mfn] However, the 2020 Regulations turn the short removal deadline into a blanket policy for actioning all prohibited content as well as for proactive monitoring.[mfn]Besides responding to removal requests from competent authorities within two hours, the 2020 Regulations require tech companies to detect potentially prohibited content uploaded by users, assess whether the content is in fact prohibited under the Regulations, notify users that their content is prohibited so that they can remove it, or, if necessary, remove the content unilaterally.[/mfn] A two-hour removal requirement is nearly impossible for most tech companies, and for smaller and emerging platforms in particular, to comply with; it therefore risks platforms removing non-prohibited content in order to comply with the law.

The 2020 Regulations also prohibit the use or distribution of tools to access prohibited content, including VPNs. As raised in our analysis of Ministerial Regulation 5 in Indonesia, which includes a similar provision, VPNs are also used by ordinary users to protect themselves online. Prohibiting users from using VPNs for legal activities thus represents a significant risk to online security and privacy, in particular for political dissidents or marginalised communities whose safety depends on being able to maintain their anonymity online.

Online content and respect for “cultural sensitivities”

Besides requiring platforms to remove prohibited content, the 2020 Regulations also require them to account for the “trends and cultural sensitivities of the general public”[mfn]Aloys & Associate (2020), Online Content Regulations 2020.[/mfn] when moderating online content. This provision is reminiscent of similar legal provisions and legislative proposals in the Global South that address tech companies’ lack of understanding of local contexts and languages when moderating non-English content.[mfn]Mauritius justified its 2021 amendments to the Information and Communications Technology Act by criticising tech platforms’ perceived lack of compliance with local laws and insufficient moderation capacity in the local creole language. In Nigeria, the government’s decision to ban Twitter is said to have been underpinned by a “growing consensus within the government calling on Twitter to establish a local presence in order to grasp local context.” See: Idris Abubakar and Adegoke Yinka (2021), Inside Nigeria’s decision to ban Twitter, Rest of World.[/mfn] However, as they stand, the 2020 Regulations provide no information on what accounting for cultural sensitivities means in practice, nor on what practical steps are expected of platforms in this regard.