
Violently Fixated Individuals: Platform Detection Failures and the Safeguarding Protocol Gap

Our Strategic Takeaways

 

1. Platforms systematically fail to detect and moderate content from violently fixated individuals, with algorithmic systems actively amplifying harm. The Molly Rose Foundation's November 2025 "Pervasive-by-design" research demonstrates that recommendation algorithms function as a distribution infrastructure for suicide, self-harm, and violent extremist content. 

2. When platforms do identify harmful content, removal without welfare referral creates a secondary operational void. At Tech Against Terrorism, we encounter this conundrum repeatedly through our Terrorist Content Analytics Platform: even when we successfully prompt content removal (achieving an 80-90% removal rate on average), there is no mechanism to ensure safeguarding intervention for individuals displaying crisis indicators. 

3. The Molly Rose Foundation's evidence of algorithmic amplification demands immediate regulatory intervention. Their research documents that platforms are not neutral hosts but active distributors. The disconnect between claimed detection capabilities and observed outcomes suggests fundamental failures in how platforms identify, prioritise, and action harmful content before it reaches vulnerable users. 

4. Comprehensive solutions require both aggressive content moderation and coordinated safeguarding protocols. The UK Online Safety Act establishes statutory duties for platforms to prevent children encountering harmful content, yet Ofcom's implementation has been criticised by the Molly Rose Foundation as “weakly drafted and unlikely to prove effective.” Effective intervention demands that platforms invest substantially in detection technology rather than scale back moderation, establish clear protocols for when discovered content should trigger welfare referrals to police and social services, and face meaningful accountability through measurable harm-reduction targets rather than process-compliance metrics. 

 

Violently Fixated Individuals (VFI): Platform Detection Failures and the Safeguarding Protocol Gap  

Adam Hadley 

The Molly Rose Foundation's November 2025 "Pervasive-by-design" research revealed a staggering failure in platform content moderation.

Platform algorithms are not merely failing to remove harmful content; they are functioning as sophisticated distribution systems delivering it at scale to the users most at risk.

This systemic moderation failure enables what we term Violently Fixated Individuals (VFI), users exhibiting violent ideation against others or themselves, to operate with near impunity.

The category encompasses members of predatory networks like 764 and the True Crime Community (TCC) who target vulnerable youth, individuals motivated by incel or accelerationist ideologies, and those inspired by violence without clear ideological alignment.

What unites these disparate profiles is that their content persists on platforms long enough to cause harm. This content spreads through algorithmic recommendation to vulnerable audiences, and, when occasionally removed, triggers no coordinated intervention for either content creators or those exposed to it.

The 764 Network: Exploitation Enabled by Platform Inaction

The scale and persistence of the 764 network demonstrates how platform moderation failures enable industrial-scale harm.

Founded in 2021, the network has generated more than 250 FBI investigations, with the Bureau classifying it as a Tier One terrorist threat, the highest danger category.

The Anti-Defamation League's 764 backgrounder documents that the network operates through hundreds of chat groups across Discord and Telegram in at least eight countries, with the FBI estimating thousands of child victims have been exploited. A Washington Post investigation reported that Discord removed 34,000 accounts associated with 764 in one year alone. Yet, the network persists through constant rebranding and platform migration that moderation systems consistently fail to disrupt.

Members follow detailed grooming manuals instructing them to target "emotionally weak/vulnerable" individuals aged 9-17 across gaming platforms, gore forums, and mental health support spaces.

The network's operational model revolves around systematic coercion: identifying vulnerable targets, obtaining compromising material, and then escalating demands for increasingly extreme content. This entire grooming process occurs on platforms that claim sophisticated content moderation capabilities. Yet, the content persists long enough for predators to identify victims, establish relationships, coerce material, and distribute it across the network.

In April 2025, the US Department of Justice arrested alleged 764 leaders in North Carolina, charging them with operating an international child exploitation enterprise involving at least eight minor victims as young as 13. In October 2025, an Arizona leader of 764 was charged with terrorism offences under 18 USC Section 2339A for the first time, with prosecutors alleging he conspired to provide material support to terrorists by coercing individuals outside the US to self-harm and attempt suicide. UK Counter Terrorism Policing secured a six-year sentence for a member in January 2025 after his Telegram messages revealed detailed plans to kill a homeless person for 764 status and encouragement for a woman to livestream her suicide "so he could capture it and claim it for 764.”

These prosecutions represent enforcement successes against individual perpetrators, yet the platforms that host their activities remain largely unaccountable for the moderation failures that enabled these crimes.

According to the FBI's September 2023 warning, primary targets are “minors between the ages of 8 and 17 years old, especially LGBTQ+ youth, racial minorities, and those who struggle with a variety of mental health issues, such as depression and suicidal ideation.” The FBI warns that many perpetrators are themselves minors who were initially victims. Platform moderation systems that fail to detect grooming content, coercion messaging, and CSAM distribution enable this perpetrator-victim cycle to continue uninterrupted. When a 15-year-old victim becomes a perpetrator to maintain network standing, platform failures contributed to both their victimisation and their subsequent offending.

The Limits of Content-Focussed Responses

Platform moderation failures extend beyond explicit exploitation networks to encompass radicalisation pathways that blend multiple extremist influences.

The True Crime Community (TCC), a loosely networked online subculture centred on the discussion, analysis, and often sensationalisation of real-world crimes, has increasingly become a vector for ideological and performative violence. The Institute for Strategic Dialogue's "Memetic violence: How the True Crime Community generates its own killers" report documented at least seven school shootings and nine disrupted plots linked to the TCC in 2024.

The December 2024 Abundant Life Christian School attack in Madison and the January 2025 Antioch High School shooting in Nashville exemplify this convergence. Both attackers consumed and contributed to online ecosystems glorifying previous mass killings, exchanging tactical knowledge, and providing mutual encouragement. Platform moderation systems failed to detect planning activity, identify cross-platform coordination patterns, or intervene before offline violence occurred.

UK cases demonstrate similar platform failures. The July 2024 Southport stabbing that killed three children led to the perpetrator receiving life imprisonment in January 2025, yet prosecutors determined no clear motive beyond "commission of mass murder as an end in itself." Despite possessing a terrorism manual, producing ricin, and being referred three times between 2019 and 2021 to Prevent, the UK's counter-radicalisation programme, the perpetrator was not accepted onto the programme because no terrorist ideology was identified.

These cases share common characteristics: violent ideation expressed online across multiple platforms, engagement with extremist content spanning different ideological movements, mental health crisis indicators, social isolation, and status-seeking through extreme acts. Most critically, concerning online activity was often accessible for extended periods, with platforms failing to detect patterns that retrospectively appear obvious.

The challenge is not merely that platforms occasionally miss content; it is that their moderation systems appear fundamentally unequipped to identify composite violent extremism combining personal grievances, nihilistic worldviews, and extremist aesthetics in ways that defy traditional threat categories.

When Removal Occurs, Safeguarding Does Not

On the rare occasions when platforms do identify and remove harmful content, a secondary operational void emerges: removal without welfare referral.

Current platform responses follow a consistent pattern: content discovery through automated detection or user reports, human review against community standards, removal or restriction decisions, and process termination.

What does not occur is risk assessment of the content creator's well-being, connection to mental health resources, evaluation of whether intervention might prevent escalation, or outcome tracking.

The jurisdictional complexity multiplies when platforms, users, and relevant authorities span different countries. For instance, if UK police discover concerning content posted by a US user on a platform hosted in Ireland under EU regulations, the question arises: which country's social services should be contacted? The user's home jurisdiction may lack equivalent safeguarding frameworks, the platform may have no relationship with relevant local services, and data protection laws create ambiguity about what information can be legally shared. Law enforcement can only act within national boundaries; social services are organised locally; mental health systems operate under national regulation; and platforms face varying legal obligations depending on jurisdiction. No international framework coordinates safeguarding responses despite well-established protocols for criminal matters.

The child sexual abuse material (CSAM) reporting infrastructure demonstrates that functional multi-stakeholder coordination is operationally achievable when legal frameworks authorise information sharing and sustained funding supports coordinating institutions.

No equivalent exists for VFI content: legality varies by jurisdiction, there are no mandatory reporting requirements, and safeguarding may require protecting potential perpetrators who are also vulnerable.

The Molly Rose Foundation's Regulatory Pressure Campaign

The death of 14-year-old Molly Russell in November 2017 and the 2022 inquest finding that exposure to harmful online content "contributed to her death in a more than minimal way" catalysed comprehensive advocacy for platform accountability.

The UK's Online Safety Act established legal obligations for online platforms to prevent children from accessing harmful or age-inappropriate content. The Act's children's safety duties came into force with Ofcom's Protection of Children Codes on 25 July 2025.

However, in written evidence to Parliament, the Molly Rose Foundation issued a pointed critique. “The Act’s design means that regulated platforms are granted a ‘safe harbour’ if they adopt the measures set out in Ofcom’s codes, but the codes themselves are so weak that some large platforms could counterintuitively scale back their existing largely ineffective and highly deficient safety measures.”

They propose:

  • A strengthened Online Safety Act with measurable harm reduction targets
  • A transparency and accountability regime, including annual targets with Ofcom accountability similar to Bank of England inflation targets
  • A windfall tax applying the "polluter pays" principle to social media profits from harmful content
  • Statutory codes for app stores mandating age assurance and parental controls
  • Major investment in education and mental health support.

Foundation polling shows public support for this agenda: 84% of parents and 80% of adults support strengthening the Online Safety Act, with over 4 in 5 parents feeling that both online platforms (84%) and politicians (82%) should be doing more.

What Effective Intervention Requires

Addressing VFI threats requires two parallel tracks: aggressive platform accountability and coordinated safeguarding protocols for when vulnerable individuals are identified.

On the first track, platforms must invest substantially in proactive detection technology, face meaningful accountability through measurable harm reduction targets, and accept liability when algorithmic amplification delivers harmful content to vulnerable users.

The UK Online Safety Act establishes statutory duties, yet implementation must be strengthened to prevent platforms from claiming compliance whilst making minimal investment.

Platforms evaluate content but not content creators, asking whether material violates policies but not whether the poster requires intervention. Moving beyond this limitation requires assessment criteria distinguishing criminal content requiring investigation, extremist content requiring removal, and mental health crisis content requiring welfare referral.

Most VFI content exists in an intermediate "concern zone": expressions of ideation without imminent plans, grievances without specific targets, isolation and extremist aesthetic adoption without clear criminal intent. Developing decision trees and referral pathways for this zone represents the most urgent operational need.
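To make the "concern zone" operational, a decision tree needs explicit categories and routing rules. The sketch below is a minimal illustration of what such a triage function could look like; the signal names (`imminent_threat`, `crisis_indicators`) and dispositions are hypothetical and do not describe any existing platform or Tech Against Terrorism system.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Disposition(Enum):
    """Possible outcomes of a triage assessment (illustrative categories only)."""
    LAW_ENFORCEMENT_REFERRAL = auto()   # criminal content requiring investigation
    REMOVE = auto()                     # extremist content requiring removal
    WELFARE_REFERRAL = auto()           # crisis content requiring safeguarding handover
    CONCERN_ZONE_MONITORING = auto()    # ideation without imminent plans: monitor and reassess


@dataclass
class ContentAssessment:
    """Hypothetical signals a reviewer or classifier might record for a piece of content."""
    imminent_threat: bool            # specific target, plan, or timeframe expressed
    violates_extremism_policy: bool
    crisis_indicators: bool          # e.g. suicidal ideation, self-harm disclosure
    violent_ideation: bool           # grievance or fixation without a concrete plan


def triage(assessment: ContentAssessment) -> list[Disposition]:
    """Map an assessment to one or more dispositions.

    Dispositions are not mutually exclusive: crisis content can warrant
    both removal and a welfare referral.
    """
    outcomes: list[Disposition] = []
    if assessment.imminent_threat:
        outcomes.append(Disposition.LAW_ENFORCEMENT_REFERRAL)
    if assessment.violates_extremism_policy:
        outcomes.append(Disposition.REMOVE)
    if assessment.crisis_indicators:
        outcomes.append(Disposition.WELFARE_REFERRAL)
    if not outcomes and assessment.violent_ideation:
        # The intermediate "concern zone": no policy breach, no imminent plan,
        # but enough signal to warrant monitoring and possible re-review.
        outcomes.append(Disposition.CONCERN_ZONE_MONITORING)
    return outcomes
```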

In parallel, there is a need for a hashed-content database to flag material warranting welfare assessment; knowledge-sharing frameworks emphasising intervention strategies; research evaluating the effectiveness of those interventions; and crisis protocols that integrate welfare checks and rapid crisis service deployment, moving beyond a reliance on content removal alone.
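As a sketch of the hashed-content idea under stated assumptions: content is normalised, hashed, and checked against a shared list of digests flagged for welfare assessment. Production systems would more likely use perceptual hashing for images and video rather than the exact SHA-256 matching shown here, and the flagged-hash set is an empty placeholder rather than any real database.

```python
import hashlib


def content_digest(text: str) -> str:
    """Normalise text and return a SHA-256 digest (exact matching only)."""
    normalised = " ".join(text.lower().split())
    return hashlib.sha256(normalised.encode("utf-8")).hexdigest()


# Placeholder: in practice this set would be populated from a shared,
# cross-platform database of digests flagged for welfare assessment.
FLAGGED_HASHES: set[str] = set()


def warrants_welfare_assessment(text: str) -> bool:
    """Check whether previously flagged material is being re-posted."""
    return content_digest(text) in FLAGGED_HASHES
```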

Safeguarding handover protocols require explicit workflow definition across the platform-to-police-to-social-services-to-mental-health continuum. The London Child Exploitation Operating Protocol's multi-agency approach demonstrates essential elements: strategic governance boards, multi-agency meetings ensuring correct procedures, and clearly defined roles across police, local authorities, children's services, and health providers.

Adapting this for online-discovered content would require standardised referral templates documenting identified risk; secure, encrypted transfer channels; training for receiving teams on digital evidence interpretation; established 24-hour response timeframes; and confidentiality maintained within duty-of-care frameworks that explicitly authorise protective information sharing.
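A standardised referral template could be as simple as a structured record that every receiving agency can parse. The fields below are an illustrative minimum assumed for the sake of the sketch; they are not drawn from the London Child Exploitation Operating Protocol or any existing agency form.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class SafeguardingReferral:
    """Illustrative fields for a platform-to-agency safeguarding handover."""
    referral_id: str
    submitted_at: datetime
    platform: str
    jurisdiction: str               # where the user is believed to be located
    risk_summary: str               # identified risk described in plain language
    risk_indicators: list[str]      # e.g. ["suicidal ideation", "grooming contact"]
    evidence_references: list[str]  # content IDs or preserved records, not raw content
    response_deadline_hours: int = 24  # matches the 24-hour response expectation above


def new_referral(platform: str, jurisdiction: str, risk_summary: str,
                 indicators: list[str], evidence: list[str]) -> SafeguardingReferral:
    """Create a referral record stamped with the current UTC time."""
    now = datetime.now(timezone.utc)
    return SafeguardingReferral(
        referral_id=f"{platform}-{int(now.timestamp())}",
        submitted_at=now,
        platform=platform,
        jurisdiction=jurisdiction,
        risk_summary=risk_summary,
        risk_indicators=indicators,
        evidence_references=evidence,
    )
```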

Critical to all interventions is balancing content removal with safeguarding response. Samaritans' industry guidelines frame the operational judgement: will the content cause imminent harm requiring urgent action, and would removing it prevent the user from receiving help?

Multiple response options exist beyond binary remove/allow decisions. These include monitoring content without immediate removal, reducing access through demotion, age-restricting, adding sensitivity screens or interstitial warnings, and filtering from recommendation algorithms. The tool R;pple exemplifies an intervention‑over‑removal approach by monitoring searches for suicide or self‑harm methods and, when triggered, presenting a supportive pop‑up that directs the user to 24/7 crisis helplines and mental‑health resources. While it does not guarantee blocking of all harmful content, it prioritises timely redirection at the moment of risk and aims to offer a “journey of hope” rather than simply waiting for platforms to remove content. 
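The sketch below enumerates those graduated options and one possible way of selecting among them. The selection logic is purely illustrative, and the Samaritans consideration is reduced to a single `user_seeking_help` flag for clarity.

```python
from enum import Enum, auto


class Response(Enum):
    """Graduated response options beyond a binary remove/allow decision."""
    MONITOR = auto()             # keep visible, watch for escalation
    DEMOTE = auto()              # reduce reach without removing
    AGE_RESTRICT = auto()
    SENSITIVITY_SCREEN = auto()  # interstitial warning before viewing
    EXCLUDE_FROM_RECS = auto()   # filter out of recommendation algorithms
    REMOVE = auto()


def select_responses(imminent_harm: bool, policy_violation: bool,
                     user_seeking_help: bool) -> set[Response]:
    """Illustrative mapping from a coarse assessment to graduated responses."""
    if imminent_harm:
        # Urgent action on the content; welfare referral handled separately.
        return {Response.REMOVE}
    if policy_violation and not user_seeking_help:
        return {Response.REMOVE}
    if user_seeking_help:
        # Removal could cut the user off from support; restrict reach instead.
        return {Response.EXCLUDE_FROM_RECS, Response.SENSITIVITY_SCREEN, Response.MONITOR}
    return {Response.DEMOTE, Response.AGE_RESTRICT}
```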

The Path Forward Requires Dual Investment

The VFI challenge demands recognition that two distinct failures occur simultaneously: 

  1. Platforms systematically fail to detect and moderate harmful content, with algorithmic systems actively amplifying it to vulnerable users
  2. On rare occasions when platforms do act, removal occurs without coordinated intervention for individuals displaying crisis indicators. 

The first failure is more fundamental: preventing content from persisting and spreading addresses the problem at the source.

Yet, solving detection failures alone proves insufficient when vulnerable individuals require intervention regardless of whether their content is visible.

The immediate priority is breaking platform complacency through regulatory accountability. Ofcom must strengthen implementation of the Online Safety Act, establish measurable harm-reduction targets with meaningful penalties for failures, require platforms to invest proportionally to their user base and risk rather than meeting minimum thresholds, and hold senior executives personally accountable for inadequacies in moderation systems.

Simultaneously, building safeguarding infrastructure requires legal clarity on data sharing for protective purposes; standardised risk assessment frameworks for online content discovery; training for moderators and researchers on mental health risk indicators; multi-stakeholder coordination bodies bringing together platforms, law enforcement, social services, and mental health providers; and outcome tracking to evaluate whether interventions prevent harm.

At Tech Against Terrorism, we confront these challenges daily through our work disrupting terrorist use of the internet. When our analysts discover concerning content, we achieve high removal rates through our Terrorist Content Analytics Platform, yet lack protocols for the concern zone between terrorism and mental health crisis. When we identify individuals being radicalised by predatory networks like 764, we can alert platforms to content, but cannot ensure vulnerable users receive intervention.

The gap is wide, the consequences are severe, and progress requires acknowledging that content governance cannot substitute for personal welfare, and then building the institutional capacity to deliver both dimensions simultaneously.

Platforms must invest in detection and moderation to prevent harm at scale, whilst coordinating with law enforcement and social services to intervene when vulnerable individuals are identified. Only comprehensive approaches addressing both systematic detection failures and safeguarding protocol gaps will protect those whom current systems leave persistently exposed.

Author Bio 


Adam Hadley is the Executive Director of Tech Against Terrorism, focused on disrupting terrorist use of the internet through open-source intelligence.
