India, March 10 -- A new study has claimed that OpenAI's artificial intelligence chatbot ChatGPT can experience "stress" and "anxiety" when given disturbing information, which can lead it to give biased answers to prompts.

The study, from the University of Zurich and the University Hospital of Psychiatry Zurich, also found that the chatbot responds to mindfulness-based exercises when users provide it with calming imagery.

The study states that ChatGPT can experience "anxiety" when given violent prompts, which can make the chatbot appear moody towards its users and even give responses that show racist or sexist biases.

However, the researchers said that the "anxiety" can be calmed if the chatbot receives mindfulness exercise...