Mentorship
The Offer
We will assess your company’s existing counterterrorism and content moderation policies and enforcement mechanisms, identifying strengths and areas for improvement.
Tech Against Terrorism's counterterrorism and tech policy experts will also assess your company’s transparency reports and recommend potential improvement areas.
Your company will also be invited to sign the Tech Against Terrorism Pledge to state your commitment to human rights. Once the Mentorship programme is complete and the requirements are met, companies will be awarded the Tech Against Terrorism Trustmark.
In-Depth Review
As a first step in the mentorship process, we conduct an in-depth review of your platform’s content standards, highlighting areas of strength and outlining where there is room for improvement.
Threat Intelligence
You will receive a bespoke threat intelligence assessment outlining current risks to your platform. We will also help your platform get onboarded to the Terrorist Content Analytics Platform.
Knowledge Sharing
Mentees have privileged access to our knowledge-sharing programmes, including the Knowledge Sharing Platform and e-learning webinars.
Trustmark
The Tech Against Terrorism Trustmark is an accreditation awarded to tech companies that share Tech Against Terrorism's commitment to disrupting terrorist activity online while respecting human rights.
The Trustmark is awarded on completion of the Mentorship programme to certify that tech platforms meet Tech Against Terrorism's standards.
Trustmark Standards
Tech Against Terrorism assesses Trustmark applicants on their compliance with our standards in three areas of counterterrorist activity. Full information on how prospective Trustmark recipients can meet these standards is available on application.
Understanding the threat
Identifying their platform’s systemic and specific vulnerabilities to terrorist and violent extremist (TVE) content and the actual risk of TVE exploitation; maintaining a content moderation capacity that is proportionate to risk, without entertaining conclusive presumptions about any particular category of user
Disrupting terrorist activity
Clearly advertising red lines and giving accessible conceptual definitions of prohibited conduct, with reference to credible designation lists where appropriate; ensuring that content moderation decisions are fair and appealable; balancing automated and human review
Strengthening the response
Publishing accessible transparency reports on content moderation activity; engaging with the wider Trust and Safety community; contributing to and supporting the development of technical solutions
“For the smallest platforms with limited resources, it is crucial to be time-efficient in everything they do. Creating a first transparency report is much easier with help from experienced specialists. TaT was able to answer all of our questions about what should be included in such a report, with platform-specific requirements in mind, and how to present it to users in a clear way. The platform can then focus on extracting the actual numbers and writing the transparency report, without having to do extensive research first.”
Find out more
Contact us for further information