India, July 14 -- AI therapy chatbots are gaining attention as tools for mental health support, but a new study from Stanford University warns of serious risks in their current use. Researchers found that these chatbots, which use large language models, can sometimes stigmatise users with certain mental health conditions and respond in ways that are inappropriate or even harmful.
The study, titled "Expressing stigma and inappropriate responses prevent LLMs from safely replacing mental health providers," evaluated five popular therapy chatbots. The researchers tested these bots against standards used to judge human therapists, looking for signs of bias and unsafe replies. Their findings will be presented at the ACM Conference on Fairness, Accountability, and Transparency.