You can access the ORS Handbook here
France is, alongside New Zealand, an initiator of the Christchurch Call to Action to eliminate terrorist and violent extremist content online. Prior to the Christchurch Call, France had elevated tackling terrorist use of the internet to a key pillar of its counterterrorism policy, supporting the EU proposal on Preventing the Dissemination of Terrorist Content Online, including the requirement for tech platforms to remove flagged terrorist content within one hour.
France’s regulatory framework:
- Countering online hate law, May-June 2020, the so-called “cyber-hate” or “Avia” law establishes France’s new broad framework to counter hateful, discriminatory, terrorist, and child sexual abuse (CSA) content online – all of which are illegal under French law.
- The law would compel companies to remove terrorist and CSA content within one hour of being notified by French authorities, and within 24 hours for hateful and discriminatory content.
- Following a “censuring” by the French Constitutional Council, which deemed the law to pose disproportionate risks to freedom of expression, the removal requirement was lifted and the law is now reduced to its preventive component.
- Law on strengthening the provisions relating to the fight against terrorism, November 2014, strengthens France’s counterterrorism approach and introduces the penalisation of “terrorism apology” (apologie du terrorisme) and incitement, including for content shared online.
- Signatory and co-initiator of the Christchurch Call to Action.
Main oversight and regulatory bodies:
- Conseil Superieur de l’Audiovisuel (CSA), an independent body which oversees broadcast communications (TV and radio) in France:
- Under the new “cyber-hate” law, the CSA will coordinate an “Online Hate Observatory” to analyse the spread of hate online.
- Ministry of Interior, oversees – alongside judicial authorities – reports of terrorism apology and incitement, including for online content.
- The Ministry of the Interior manages Pharos, France’s online content reporting platform.
- Office central de lutte contre la criminalité liée aux technologies de l’information et de la communication (cybercrime unit), which takes part in the coordination of content reported via Pharos and liaises with Europol’s Internet Referral Unit.
- State Secretary for Digital Affairs, coordinates France’s digital policy and related discussions on the online regulatory framework (at both the national and international level).
- Digital Ambassador, coordinates international digital policy and transformation issues, including cyber security and online regulation.
Key takeaways for tech platforms
- Despite recent attempts, including the “cyber-hate” law, France does not currently regulate tech platforms.
- However, certain content is considered illegal under French law, including terrorist (incitement and apology) content.
- French authorities can require a website to be blocked or a piece of content to be removed if terrorist content is located.
- Authorities can require that a website or piece of content is removed from French search engine results.
- Individuals posting terrorist content risk seven years’ imprisonment and a 100,000 euro fine.
- Internet users can report illegal content to the French authorities via Pharos, a platform dedicated to user reporting of illegal content online.
Towards a more stringent framework?
To complement France’s 2014 legal framework on online terrorist content, a law on countering online hate was submitted to Parliament on 20 March 2019, only a few days after the Christchurch shooting. The “cyber-hate” law (also known as the Avia law) was passed on 13 May 2020.
Similar to the EU proposal for preventing the dissemination of terrorist content online – which would require tech platforms to remove terrorist content within one hour, see our blog post here – the Avia law had at its core a requirement for tech platforms to remove illegal content or face a substantial fine of up to 4% of the platform’s annual global turnover. Under this law, terrorist and child sexual abuse material would have had to be removed within one hour of notification by the French authorities, and any other harmful content, as defined by existing French law (including incitement to hatred), would have had to be removed within 24 hours of flagging by any user. However, the removal deadlines were censured by the French Constitutional Council, which stripped the law of all its requirements for tech companies, keeping only its preventive aspect and calling for increased transparency and accountability from the tech sector, though without specifying exactly what this entails. The final version of the law also maintains the establishment of an “Online Hate Observatory”, which will oversee the enforcement of the law and publish an annual report on it.
Censuring by the French Constitutional Council: risks for freedom of expression
In censuring the law, the Constitutional Council raised a number of concerns related to the risks of over-censoring online content, and to the potential impossibility for platforms, particularly smaller companies, “to satisfy” the removal requirements. Its ruling also underlines that the decision to adjudicate on illegal content, and thus on what constitutes a valid limit to freedom of expression – which platforms would have made by removing illegal content within a short removal period without judicial oversight – should not belong to tech platforms but remain a judicial decision inscribed in the rule of law.
Below, we summarise some of the most important concerns raised with regard to the Avia law, and the Constitutional Council’s arguments for censuring it.
- A lack of consideration for smaller tech platforms’ capacity: A one-hour delay for removal of terrorist content is unrealistic for micro and small platforms that lack the necessary human and technical resources to respond within such a short deadline. A one-hour time period, and even a 24-hour one for other hateful and discriminatory content, would require constant monitoring from tech platforms to ensure compliance, which would prove difficult, if not impossible, for most tech platforms. The French Constitutional Council particularly underlined that the law included requirements that were “impossible to satisfy” for tech platforms, thus breaking the principle of equality with regard to public regulations.
- Risks for freedom of expression: Due to the short deadlines and the broad scope of the law, tech platforms would not have had the time to properly adjudicate on a piece of content’s legality. This could promote overzealous removal of content, with platforms indiscriminately taking down all content notified (without assessing whether it is, in fact, illegal) and increasingly relying on automated moderation tools to ensure that they do not get fined. Whilst automated moderation has its benefits, many solutions lack nuance and require human oversight to avoid the excessive takedown of content. An over-reliance on such methods presents risks for freedom of expression, as it could lead to taking down lawful content. This was stressed by the French Constitutional Council, which deemed that the removal requirements were neither necessary, appropriate, nor proportionate.
- Leaving tech platforms to adjudicate on illegality: The law itself did not create a new set of harms, nor did it create a new range of prohibited content. Everything, from hateful and discriminatory to terrorist and child sexual abuse content, is already illegal under French law. However, the legal definitions of such content are broad, and limitations to freedom of expression have to be decided by an independent judiciary body, such as a judicial court. This is problematic, since the law places the responsibility to (rapidly) decide what constitutes hateful or discriminatory content on private tech companies. In effect, this could lead to a development where private tech companies decide what content is illegal according to their own interpretation of the law instead of through adequate legal channels. In this regard, the Constitutional Council’s decision was a strong reminder that adjudicating on the legality of online content, in particular terrorist content, is “subject to the sole discretion of the [French] administration.”
 “La lutte contre l’utilisation d’internet à des fins terroristes constitue l’un des axes majeurs de l’action de la France en matière de contre-terrorisme.” (“The fight against the use of the internet for terrorist purposes is one of the major axes of France’s counterterrorism action.”)
 In France it is common practice to nickname a law with the last name of the political figure who proposed it to parliament, in that case MP Laetitia Avia from La République en Marche.
 Marine Le Pen, the leader of the far-right Rassemblement National, a French MP, and a former presidential candidate and MEP, was tried for sharing Islamic State execution photos on Twitter in 2015.
 On the law’s legislative timeline, it should be noted that the proposal benefited from an accelerated procedure granted by the government on 2 May 2019. When passed, it became the first non-Covid-19-related law to be passed in the country since early March 2020, only two days after the lockdown was lifted. This has led to some commentary suggesting that the French government used the wave of misinformation linked to the pandemic as “the perfect impetus” to have it passed despite its critics.
 On that, it is interesting to note that increased reliance on automated moderation has led to different results for Facebook and YouTube, both of which had to reduce human moderation during the lockdowns following the Covid-19 crisis. Whilst this led to less content being taken down on Facebook – where moderators were not able to log content into the automated system – YouTube doubled its removals as it increased its reliance on automation.
 In France, freedom of expression, whilst protected by Article 11 of the Declaration of the Rights of Man and of the Citizen of 1789, is not absolute and can be limited. French law notably prohibits incitement to racial, ethnic or religious hatred, glorification of war crimes, discriminatory language on the grounds of sexual orientation or disability, incitement to the use of narcotics, and Holocaust denial.
Berne Xavier (2016), “Dans les coulisses de la plateforme de signalement Pharos”, NextInpact
Breeden Aurelien (2020), “French court strikes down most of online hate speech law”, The New York Times
Chandler Simon (2020), “France social media law is another coronavirus blow to freedom of speech”, Forbes
Hadavas Chloe (2020), “France’s New Online Hate Speech Law Is Fundamentally Flawed”, Slate
La Maison des Journalistes, “Les limites de la liberté d’Expression”
Lapowsky Issie (2020), “After sending content moderators home, YouTube doubled its video removals”, Protocol
Lausson Julien (2020a), “Très contestée, la « loi Avia » contre la cyberhaine devient réalité”, Numerama
Lausson Julien (2020b), “La loi Avia contre la haine sur Internet s’effondre quasi intégralement”, Numerama
France Diplomatie (2019), “Réguler les contenus diffusés sur l’internet et régulation des plateformes”, Ministère de l’Europe et des Affaires Etrangères
France Diplomatie (2020), “Gouvernance d’Internet, quels enjeux ?”, Ministère de l’Europe et des Affaires Etrangères
Pielemeier Jason and Sheehy Chris (2019), “Understanding the Human Rights Risks Associated with Internet Referral Units”, The Global Network Initiative Blog
Schulz Jacob (2020), “What’s Going on With France’s Online Hate Speech Law?”, Lawfare