OpenAI has introduced a new safety feature called “Trusted Contact,” designed to strengthen protections for users who may express or exhibit signs of self-harm risk during conversations with ChatGPT.
The feature allows adult users to voluntarily designate a trusted individual, such as a close family member, friend, or caregiver, who can be contacted in rare situations where the system detects serious safety concerns. OpenAI stated that its aim is to bridge the gap between digital interaction and real-world support for users who may be in crisis.
According to the company, Trusted Contact is triggered only when automated monitoring systems flag potentially harmful content and the case is escalated for review by trained human safety staff. If the reviewers determine that there is a credible and immediate risk of self-harm, a notification may be sent to the designated contact.
OpenAI emphasised that the system is designed with strict privacy limits. The alert sent to the trusted contact does not include chat transcripts or detailed conversation history. Instead, it provides a brief message indicating that the user may be experiencing a serious emotional or mental health concern and encourages the contact to check in directly.
The company noted that participation is entirely optional. Users must actively enable the feature and retain full control over updating or removing their chosen contact at any time. The selected trusted contact must also accept the role before it becomes active, ensuring mutual consent.
OpenAI explained that the feature was developed in consultation with mental health professionals and safety researchers. It builds on existing safeguards in ChatGPT, which include detecting signs of distress, encouraging users to seek professional help, and limiting responses that could reinforce harmful behaviour.
The introduction of Trusted Contact reflects growing concern over the role of artificial intelligence systems in sensitive mental health interactions. As AI tools become more widely used for personal and emotional support, companies are under increasing pressure to ensure the responsible handling of crisis-related content.
OpenAI said the goal of the new safeguard is to encourage timely real-world intervention while preserving user autonomy and maintaining strong privacy protections. The company added that Trusted Contact is part of its broader effort to improve AI safety systems and ensure that technology supports human well-being in high-risk situations.
Senior Reporter/Editor
Ugochukwu is a freelance journalist and Editor at AIbase.ng, with a strong professional focus on investigative reporting. He holds a degree in Mass Communication and brings extensive experience in news gathering, reporting, and editorial writing. With over a decade of active engagement across diverse news outlets, he contributes in-depth analytical, practical, and expository articles exploring artificial intelligence and its real-world impact. His seasoned newsroom experience and well-established information networks provide AIbase.ng with credible, timely, and high-quality coverage of emerging AI developments.