OpenAI has rolled out an optional safety feature that lets adult ChatGPT users designate one trusted adult to be alerted about potential self-harm risks detected in conversations. The feature, called Trusted Contact, includes human review before any notification is sent.
The feature builds on existing parental controls and comes as concerns grow over users forming emotional attachments to AI chatbots. OpenAI said its systems detect more than one million messages per week worldwide with explicit indicators of potential suicidal planning or intent.

If the automated monitoring flags a serious safety concern, a small team of trained reviewers examines the situation and may notify the trusted contact by email, text, or app message within an hour. The notification gives only a general reason for concern, without sharing chat transcripts or other details.

Users add a trusted contact through the ChatGPT app settings, and the designated person must accept the invitation within one week. The global rollout to all adult users is expected to finish within a few weeks.