ChatGPT adds alerts for people experiencing a mental health emergency

OpenAI’s ChatGPT now includes an opt-in Trusted Contact feature that alerts a nominated friend or family member if a user discusses self-harm or suicide, following data showing that roughly 1.3 million weekly users express such risks. The system combines AI monitoring with human review to connect users with real-world support during crises, building on existing safety protocols such as helpline referrals.

OpenAI has introduced a new safety feature for ChatGPT called Trusted Contact, which lets users designate a friend or family member to be alerted if the AI detects discussions of self-harm or suicide. Automated monitoring flags concerning conversations, and trained personnel review each flag before the trusted contact is notified. The feature is strictly opt-in: users must enable it and name a contact themselves, preserving autonomy while adding a safety net.

The update follows research showing that 0.15% of ChatGPT’s 900 million weekly users (approximately 1.3 million people) expressed risks of self-harm or suicide, while 0.07% showed signs of psychosis- or mania-related emergencies. The feature aims to leverage social connections as a protective measure during emotional distress, a point highlighted by Dr. Arthur Evans, CEO of the American Psychological Association. Dr. Munmun De Choudhury, a professor at Georgia Tech, praised the initiative as a step toward human empowerment in vulnerable moments.

Trusted Contact builds on ChatGPT’s existing safety controls, which continue to direct users to local crisis helplines when crisis-related conversations are detected. OpenAI’s move reflects broader concerns about AI tools potentially exacerbating mental health challenges, though the company emphasizes the feature’s role in fostering real-world support networks. The balance is deliberate: when a conversation raises red flags, the context is reviewed before any contact is alerted, weighing privacy against intervention. With this feature, ChatGPT becomes one of the first major AI platforms to integrate proactive mental health support into its design.
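OpenAI has not published implementation details, but the escalation flow described above (automated flagging, human review, then an opt-in notification, with helpline referrals throughout) can be sketched in a few lines. The sketch below is purely illustrative: the RiskLevel labels, the keyword-based classify_risk, and the human_review_confirms and notify_contact stubs are all assumptions, not OpenAI’s actual pipeline.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class RiskLevel(Enum):
    NONE = auto()
    ELEVATED = auto()   # distress signals: surface helplines
    EMERGENCY = auto()  # acute self-harm risk: may trigger an alert

@dataclass
class TrustedContact:
    name: str
    channel: str  # e.g. an email address or phone number

@dataclass
class UserProfile:
    user_id: str
    opted_in: bool = False
    trusted_contact: Optional[TrustedContact] = None

# Stand-in for the automated classifier: a naive keyword check,
# not how a production system would detect risk.
EMERGENCY_TERMS = ("suicide", "self-harm", "end my life")

def classify_risk(message: str) -> RiskLevel:
    text = message.lower()
    if any(term in text for term in EMERGENCY_TERMS):
        return RiskLevel.EMERGENCY
    if "hopeless" in text or "can't cope" in text:
        return RiskLevel.ELEVATED
    return RiskLevel.NONE

def human_review_confirms(user_id: str, message: str) -> bool:
    # Placeholder for the trained-reviewer step the article describes;
    # here the flag is simply queued and assumed to be confirmed.
    print(f"[review queue] user={user_id!r} flagged for human review")
    return True

def show_helplines(user_id: str) -> None:
    print(f"[to {user_id}] If you're struggling, local crisis helplines can help.")

def notify_contact(contact: TrustedContact) -> None:
    print(f"[alert] notifying {contact.name} via {contact.channel}")

def handle_message(profile: UserProfile, message: str) -> None:
    risk = classify_risk(message)
    if risk is RiskLevel.NONE:
        return
    show_helplines(profile.user_id)  # existing safeguard: helpline referral
    if (risk is RiskLevel.EMERGENCY
            and profile.opted_in
            and profile.trusted_contact is not None
            and human_review_confirms(profile.user_id, message)):
        notify_contact(profile.trusted_contact)

if __name__ == "__main__":
    user = UserProfile(
        user_id="u123",
        opted_in=True,
        trusted_contact=TrustedContact(name="Alex", channel="alex@example.com"),
    )
    handle_message(user, "I've been thinking about suicide lately.")
```

The ordering in this sketch mirrors the article: helpline information is surfaced for any flagged message, while the trusted contact is alerted only when all three gates pass (the user opted in, a contact is named, and a human reviewer confirms the flag).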
