OpenAI introduces “Trusted Contact” safeguard for possible self-harm cases
OpenAI has launched a new safety feature called Trusted Contact, designed to provide additional support in sensitive conversations where there may be signs of self-harm risk.
—
The system allows adult users to designate a trusted person (such as a friend or family member) who could be contacted if automated systems and human reviewers detect serious concern.
—
When triggered, the feature encourages the user to reach out to their chosen contact, and may notify that person with a brief alert focused on safety, not chat details.
—
The contact must accept the invitation, and users can change or remove their designated contact at any time, making the feature fully opt-in and user-controlled.
—
This update is part of OpenAI’s broader effort to improve mental health safeguards, strengthen crisis detection, and encourage real-world support when needed.
—
In summary: Trusted Contact adds a new safety layer in ChatGPT, aimed at connecting users in crisis with real people they trust.
