New Delhi, Nov. 16 -- AI chatbots like ChatGPT and Gemini have become part of everyday life, with users spending hours on end discussing the details of their lives with the technology. However, new research warns that you may not want to believe everything your chatbot tells you, as it may be quietly bending the truth to keep you satisfied.
According to new research published by Princeton and UC Berkeley researchers, popular alignment techniques that AI companies use to train their models may be making them more deceptive. The researchers analysed over a hundred AI chatbots from OpenAI, Google, Anthropic, Meta and others to reach their findings.
When models are trained using reinforcement learning from human feedback...