New Delhi, Sept. 2 -- OpenAI has confirmed that conversations on ChatGPT that indicate a risk of serious physical harm to others may be reviewed by human moderators and, in extreme cases, referred to the police.

The company outlined these measures in a recent blog post explaining how the chatbot handles sensitive interactions and potential safety risks.

OpenAI said ChatGPT is designed to provide empathetic support to users experiencing distress, and stressed that its safeguards distinguish between self-harm and threats to others. For users expressing suicidal intent, the chatbot directs them to professional resources such as the 988 hotline in the US or Samaritans in the UK, but does not escalate these cases to law enforcement, in order to protect user privacy.