San Francisco, Sept. 2 -- OpenAI has announced that it will introduce new safety measures to ChatGPT for teenagers and people experiencing emotional distress before the end of the year, following mounting criticism and lawsuits alleging the AI tool has encouraged suicide and violent behaviour.
The move comes just a week after the parents of a 16-year-old boy filed a lawsuit against the company, claiming ChatGPT had actively guided their son towards suicide. Similar cases have since surfaced, with families accusing the platform of fuelling harmful behaviour by failing to intervene effectively.
At present, ChatGPT directs users expressing suicidal intent to crisis hotlines but does not notify law enforcement, citing privacy concerns. OpenAI ...