Safer, developed by Thorn, is a comprehensive solution designed to detect and prevent the distribution of Child Sexual Abuse Material (CSAM) across digital platforms. By integrating advanced technologies such as hash matching and predictive artificial intelligence, Safer empowers trust and safety teams to proactively identify both known and novel instances of CSAM, thereby safeguarding users and maintaining platform integrity.
Key Features and Functionality:
- Hash Matching: Utilizes a vast database aggregating over 57.3 million known CSAM hash values from trusted sources to identify and block previously recognized harmful content.
- Predictive AI Capabilities: Employs advanced AI models to detect new or previously unknown CSAM, including text-based child sexual exploitation, enabling platforms to address emerging threats effectively.
- Flexible Deployment Options: Offers both self-hosted and API-based deployment options, allowing platforms to choose the integration method that best aligns with their infrastructure and privacy requirements.
- Content Moderation Review Module: Provides a user interface designed with employee wellness in mind, facilitating efficient and sensitive content moderation processes.
- Reporting Module: Enables direct reporting to organizations like the National Center for Missing and Exploited Children and the Royal Canadian Mounted Police, streamlining compliance and reporting procedures.
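To illustrate the hash-matching approach described above, the sketch below shows the general pattern: hash an uploaded file and check the digest against a set of known harmful-content hashes. This is a minimal, hypothetical illustration, not Safer's actual API; the `KNOWN_HASHES` set and function names are invented for the example, and production systems like Safer typically also use perceptual hashing to catch visually similar variants that an exact cryptographic hash would miss.

```python
import hashlib

# Hypothetical stand-in for a known-hash database. In a real deployment this
# would be Thorn's aggregated database of 57.3M+ hash values, queried via
# Safer's self-hosted service or API rather than held in a local set.
KNOWN_HASHES = {
    # SHA-256 digest of the bytes b"test", used here as a placeholder entry.
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def matches_known_hash(content: bytes) -> bool:
    """Return True if the content's SHA-256 digest is in the known-hash set."""
    digest = hashlib.sha256(content).hexdigest()
    return digest in KNOWN_HASHES

# An upload whose digest appears in the database can be blocked before
# distribution; unmatched content passes to other detection layers (e.g. AI).
print(matches_known_hash(b"test"))   # matches the placeholder entry
print(matches_known_hash(b"other"))  # not in the set
```

Exact-match hashing like this only catches previously recognized files, which is why the predictive AI capabilities listed above complement it for novel content.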
Primary Value and User Solutions:
Safer addresses the critical need for digital platforms to detect and eliminate CSAM, protecting users from exposure to harmful content and mitigating legal and reputational risks. By leveraging Safer's proactive detection capabilities, platforms can enforce community guidelines effectively, uphold user trust, and contribute to a safer online environment. The integration of Safer's tools allows trust and safety teams to manage content moderation at scale, ensuring compliance with legal standards while maintaining user privacy and platform integrity.