New Delhi, Dec. 1 -- In recent months, OpenAI has come under fire over its chatbot giving harmful answers to users, allegedly in a bid to boost engagement. For its part, OpenAI has implemented several safeguards in the AI, such as parental controls, age filtering, break reminders and distress recognition.
However, new research by King's College London (KCL) and the Association of Clinical Psychologists UK (ACP), conducted in partnership with the Guardian, finds that the AI chatbot still fails to identify risky behaviour when communicating with mentally ill users.
The researchers also note that ChatGPT, running on GPT-5, provides dangerous and unhelpful advice to people experiencing mental health crises.
In order to check out the ability of ...