
Press Release: Urgent Call for Platform Accountability Following Widespread Distribution of Charlie Kirk Assassination Video

Written by Tech Against Terrorism | Sep 12, 2025 8:04:35 PM

Tech Against Terrorism condemns content moderation failures as graphic footage spreads unchecked across social media.  

London, 12 September 2025 - Tech Against Terrorism calls for urgent action from social media platforms and regulators following the catastrophic failure of content moderation systems in response to the assassination of conservative activist Charlie Kirk on 10 September 2025.

Graphic videos depicting Kirk's murder at Utah Valley University have proliferated across major social media platforms, with footage accumulating millions of views within hours of the incident. The widespread circulation has highlighted critical gaps in content moderation infrastructure at a time of heightened political tensions and violence. This represents not merely a policy failure but a fundamental breakdown in platforms' duty of care to their users, particularly to children and vulnerable individuals. 

Our analysis reveals that major platforms have systematically failed to:

  • Remove graphic assassination footage in a timely manner
  • Apply existing age-restriction policies consistently
  • Provide adequate content warnings to users
  • Prevent algorithmic amplification of violent content 

While major social media platforms have policies requiring age restrictions and warning labels for violent or graphic content, particularly imagery involving violent death, enforcement has been inconsistent. Under these guidelines, videos of Kirk's assassination should be subject to immediate content warnings and age gating, yet many remain widely accessible without restriction.

The proliferation of graphic and violent content without adequate age verification poses severe risks to children and vulnerable users. Unexpected exposure to such disturbing material can be deeply traumatic, particularly for young users who lack the psychological frameworks to process extreme violence or graphic imagery. Research consistently demonstrates that exposure to violent media content is associated with acute stress symptoms and post-traumatic stress. The failure of platforms to implement adequate protective measures has created an environment where vulnerable users can encounter traumatising material without warning or appropriate safeguards.

The UK's Online Safety Act and the European Union's Digital Services Act provide regulatory frameworks requiring platforms to address harmful content systematically. However, this incident has demonstrated that current enforcement mechanisms remain insufficient to prevent the rapid dissemination of graphic violence during critical moments.

Adam Hadley, Executive Director of Tech Against Terrorism, stated:

“The widespread circulation of Charlie Kirk's assassination video across social media platforms represents a catastrophic failure of content moderation at a time when our societies can least afford it. These graphic videos violate the terms of service of many social media platforms, yet they continue to proliferate unchecked; the absence of meaningful content moderation has created a digital wild west.

This isn't merely about policy violations - it's about public safety. Research consistently demonstrates that exposure to graphic violence can inspire copycat attacks and further radicalisation. When platforms allow assassination footage to spread virally, they become accelerants for political violence at an already volatile moment in our democracies.

The reckless abandonment of content moderation responsibilities by certain platforms is particularly dangerous given the current geopolitical climate. Nation-state adversaries are undoubtedly monitoring this situation, ready to exploit these unmoderated spaces to amplify division and potentially incite further violence. We've seen this playbook before - hostile actors weaponising tragedy to destabilise democratic societies.

This demands immediate action on two fronts: platforms must enforce their own policies and remove this content immediately, and regulators must step in with urgency to ensure platforms cannot continue operating as ungoverned spaces where violence is commodified for engagement. The Online Safety Act here in the UK and the Digital Services Act in Europe provide frameworks - they must be enforced robustly and immediately. Our democracies depend on it.”

Tech Against Terrorism urges all social media platforms to: 

  • Immediately Strengthen Content Detection Systems: Deploy enhanced automated and human moderation capabilities specifically designed to identify and remove graphic violent content before it achieves viral distribution.
  • Enforce Existing Policies Consistently: Apply community guidelines and terms of service uniformly, ensuring that content violating graphic-violence policies is promptly removed or age-gated.
  • Invest in Human Moderation: Reverse recent trends towards reduced human oversight and restore adequate content moderation teams capable of handling sensitive and complex content decisions.
  • Implement Proactive Monitoring: Establish dedicated monitoring and crisis-response protocols for high-impact violent events, ensuring rapid response capabilities during critical periods.

Tech Against Terrorism stands ready to support platforms and governments in implementing robust measures to prevent the exploitation of digital spaces by those seeking to spread violence and terror.