Since the 2000s, Ireland has emerged as the preferred destination for many leading tech companies, and for many members of the Global Internet Forum to Counter Terrorism (GIFCT) – including all its founding members: Meta, Twitter, and Microsoft[mfn]The fourth GIFCT founding member, YouTube, does not have its headquarters in Ireland; its parent company, Google, does.[/mfn] – to establish their EU headquarters. This concentration of tech headquarters in Dublin places Ireland in a unique position from the perspective of online regulation, in that all major tech companies will have to abide by Ireland’s forthcoming Online Safety and Media Regulation (OSMR) Bill. Dublin also oversees tech companies’ compliance with EU regulations[mfn]To learn more about the EU online regulatory framework, see our Online Regulation Series Handbook (pp. 97–106).[/mfn] as a result of the “home country supervision”[mfn]This principle stipulates that “only the country where the service provider has legal residence is entitled to impose corrective measures”. However, the principle is being challenged by certain EU Member States, in particular with regard to its application in the upcoming DSA. At the forefront of this challenge is France, which has argued that it should be replaced by a “country of destination” principle, whereby a platform could be subjected to the jurisdiction of each country in which it operates. See: Bertuzzi Luca (2021), Ireland draws a red line on country of origin principle in DSA, Euractiv.[/mfn] principle of the EU e-Commerce Directive – a principle set to be reinforced by the upcoming Digital Services Act.[mfn]On the Digital Services Act, see our entry dedicated to the EU online regulation framework in our Online Regulation Series Handbook (pp. 97–106).[/mfn] Ireland is already entrusted with supervising tech companies’ compliance with the EU’s General Data Protection Regulation (GDPR).[mfn]See: Ireland Data Protection Commission; and Lillington Karlin (2020), Enforcement proves the Achilles heel for GDPR, The Irish Times.[/mfn]
Ireland’s regulatory framework:
- Main regulatory bodies
- Key takeaways for tech companies:
  - Online Safety and Media Regulation Bill
  - Online Safety Codes
  - Media Commission
Tech Against Terrorism’s Analysis and Commentary
Similar yet different to other online regulations
The OSMR Bill bears similarities to other online regulations passed or proposed since 2017, both in its broad scope of application – covering online service providers across the tech ecosystem – and in its requirement that platforms act against online content that is considered harmful despite not otherwise carrying criminal liability under Irish law.
On the question of content that is harmful but otherwise legal, the OSMR is notable as one of the few regulatory proposals analysed in the Online Regulation Series to explicitly acknowledge that not all of the “harmful” content categories it lists are considered criminal under domestic law. Irish policymakers explain this distinction between harmful online content and criminal content, and the reason for prohibiting non-criminal content, in the explanatory note for Part 4 of the Bill. To further clarify what is considered criminal content, Global Partners Digital (GPD) recommended that the Irish government maintain a full list of the criminal offences covered by the Bill.[mfn]Global Partners Digital (2021), Written Submission on the General Scheme of the Online Safety and Media Regulation Bill.[/mfn] Even though Irish policymakers are explicit in stating that not all harmful content is unlawful, the Bill nonetheless follows a key trend in online regulation by creating a differentiated regime for what is considered acceptable speech online as opposed to offline. The Irish Council for Civil Liberties (ICCL) expressed its concerns about the inclusion of non-illegal speech, which it considers an infringement of fundamental rights and freedom of expression.[mfn]Kirk Niamh, Farris Elizabeth, Shankar Kalpana.[/mfn]
In differentiating between criminal content and harmful content, the Bill also specifies that private communication and file-hosting services will only have to comply with those provisions of the future Safety Codes that relate to criminal content. Despite this exception, concerns remain about the Bill’s applicability to encrypted platforms and about what the Safety Codes will expect of such platforms in acting against criminal content. As Tech Against Terrorism outlined in our landmark report assessing terrorist use of end-to-end encrypted (E2EE) services, any legal requirement that platforms offering E2EE systematically monitor their services for illegal content would oblige them to break encryption and the promise of privacy it entails, significantly infringing on online privacy and security in the process. The Safety Codes should therefore clarify what will be expected of private communication and file-hosting services, and how the Bill will safeguard the right to privacy online.[mfn]Irish Council for Civil Liberties (2021), ICCL submission to Pre-legislative scrutiny of the General Scheme of the Online Safety and Media Regulation Bill.[/mfn]
Throughout the Bill, Irish policymakers set out the factors that the Commission should consider when issuing the Safety Codes and related practical guidance material, and when designating the online services falling within the scope of the Bill. The provisions outlined in the Bill and the related explanatory notes state that harmful online content is to be tackled in a proportionate manner – taking into account platforms’ size and services, recognising the potentially negative impact of automated moderation on users’ rights, and safeguarding fundamental rights. In doing so, the Bill appears to have incorporated criticisms raised by tech sector and digital rights experts of other online regulations passed in recent years, which have been faulted for failing to account for platforms’ capacity and for risking breaches of fundamental rights – including by mandating or encouraging the use of automated moderation tools.[mfn]Tech Against Terrorism (2021), The Online Regulation Series Handbook.[/mfn] It remains to be seen how the Media Commission will weigh these different factors when implementing and overseeing the OSMR.
Lack of clarity and risks to fundamental rights
As it stands, the OSMR Bill delegates substantial power to the future Media Commission to decide which online services fall within the scope of the law and to expand the provisional schedule of what constitutes harmful online content. This creates uncertainty as to how the law will work in practice, both now and in the future. Tech Against Terrorism acknowledges that the Bill’s flexibility, with its provisions for the future expansion of its scope, is helpful in permitting adaptation to emerging online threats. However, without proper safeguards the OSMR risks significantly infringing on users’ fundamental rights, notably freedom of expression, as there is no fixed limit to its scope of application. The ICCL has raised similar concerns about the actual role of the Online Safety Commissioner, which it considers the Bill fails to clearly define.[mfn]Irish Council for Civil Liberties (2021).[/mfn]
The ICCL, Digital Rights Ireland, and the Irish Human Rights and Equality Commission expressed their concerns about the lack of clarity in the Bill during a hearing with the Oireachtas Media Committee: “It is wholly unclear who can expect to be regulated by the proposed Media Commission and when”. According to the three organisations, the lack of specific definitions and the unclear scope of the Bill, as well as the broad powers allocated to the Media Commission, could “restrict the voices of internet users”.[mfn]Extra.ie (2021), Social media Bill could ‘seriously damage’ users’ constitutional rights.[/mfn]
GPD expressed further concerns about the “systematic complaint scheme”, which it considers likely to lead to over-reporting of platforms “not doing enough” rather than to highlight cases where over-compliance leads to the removal of legal and harmless content. GPD notes that this “risks creating an imbalanced picture of what online service providers are doing when it comes to compliance, focusing solely on under-compliance rather than over-compliance”, and that it might encourage platforms to over-remove content to pre-empt complaints made via the scheme.
Harsh penalties and risks for tech sector diversity
The OSMR allows the Media Commission to impose harsh penalties on platforms for non-compliance. These range from hefty administrative fines to the blocking of access to a platform within Ireland, as well as individual liability for platforms’ employees.
Tech Against Terrorism has previously expressed concerns about the use of hefty financial sanctions for non-compliance with online regulation, which risk penalising and incapacitating smaller and newer platforms instead of offering them the support they need to counter terrorist content online. Fines also risk reducing competition in the tech sector if smaller platforms are financially compromised and made permanently uncompetitive while only larger tech companies can afford to pay.
Tech Against Terrorism acknowledges that legal liability for tech platforms’ senior management is limited to secondary liability, and that regulators carry the burden of proof in establishing that such an employee culpably failed to heed a compliance or warning notice issued by the Commission: “[the] offence is proved to have been committed with the consent or connivance of, or to have been attributable to any neglect/wilful neglect […]”. However, as explained in the Online Regulation Series Handbook, Tech Against Terrorism warns against such provisions, which risk criminalising individuals engaged in countering the dissemination of terrorist and violent extremist material rather than those responsible for disseminating such content.
Laws on illegal and harmful content enacted in Europe often influence online regulation globally, with the inherent risk that non-democratic countries will use laws passed in Europe as a justification, or model, for stringent legal restrictions on online speech. There is thus a risk of non-democratic countries replicating provisions on legal liability for tech platforms’ employees, potentially exposing platforms and their staff to crackdowns on political dissent and non-violent speech – in particular in countries where definitions of terrorism and violent extremism are broad.
Additional resources
PwC (2020), 7 things you need to know about the proposed Online Safety and Media Regulation Bill.
McCurry Cate (2021), Bill to regulate online harmful content ‘damages’ constitutional rights, Oireachtas committee told, The Irish News.