New Delhi, Sept. 7 -- GPT-5 has significantly fewer hallucinations, especially when reasoning, but they still occur, said OpenAI in its research paper titled 'Why language models hallucinate.'

'Hallucination', in relation to AI tools, refers to a situation in which an AI model produces outputs that are inaccurate, misleading, or entirely fabricated, often while presenting them with high confidence.
Sharing insights on hallucination, the OpenAI research paper said: "Hallucinations persist partly because current evaluation methods set the wrong incentives. While evaluations themselves do not directly cause hallucinations, most evaluations measure model performance in a way that encourages guessing rather than honesty about uncertainty."

"Think abo...
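To illustrate the incentive the researchers describe, here is a minimal sketch (not from the paper; the scoring function and numbers are assumptions for illustration). Under accuracy-only grading, a wrong answer and an honest "I don't know" both score zero, so an uncertain model maximizes its expected score by guessing; an evaluation that penalizes confident errors flips that incentive.

```python
# Sketch of expected scores under two hypothetical grading schemes.

def expected_score(p_correct: float, wrong_penalty: float, abstain: bool) -> float:
    """Expected score on one question.

    p_correct:     probability the model's guess is right (assumed value)
    wrong_penalty: points deducted for a confident wrong answer
    abstain:       if True, the model answers "I don't know" and scores 0
    """
    if abstain:
        return 0.0
    # Score 1 when right, minus the penalty when wrong.
    return p_correct * 1.0 - (1 - p_correct) * wrong_penalty

# Suppose the model is unsure: its guess is right only 25% of the time.
p = 0.25

# Accuracy-only evaluation (wrong answers cost nothing): guessing wins.
print(expected_score(p, wrong_penalty=0.0, abstain=False))  # 0.25
print(expected_score(p, wrong_penalty=0.0, abstain=True))   # 0.0

# Evaluation that penalizes confident errors: honesty about uncertainty wins.
print(expected_score(p, wrong_penalty=1.0, abstain=False))  # -0.5
print(expected_score(p, wrong_penalty=1.0, abstain=True))   # 0.0
```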