New Delhi, Sept. 8 -- OpenAI has outlined the persistent issue of "hallucinations" in language models, acknowledging that even its most advanced systems occasionally produce confidently incorrect information. In a blog post published on 5 September, OpenAI defined hallucinations as plausible but false statements generated by AI that can appear even in response to straightforward questions.
The problem, OpenAI explains, is partly rooted in how models are trained and evaluated. Current benchmarks often reward guessing over acknowledging uncertainty, creating incentives for AI systems to provide an answer rather than admit they do not know. In one example, an earlier model produced three different incorrect responses when asked for the title of an author's doctoral dissertation.
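To see why accuracy-only scoring nudges a model toward guessing, consider a back-of-the-envelope comparison (an illustrative sketch with made-up numbers, not OpenAI's actual benchmark arithmetic): if a wrong answer costs nothing, guessing never scores worse than saying "I don't know", whereas a grading scheme that deducts points for confident errors makes abstaining the better choice whenever confidence is low.

# Illustrative sketch only; the scoring rules below are assumptions, not OpenAI's.
def expected_score_accuracy_only(p: float) -> tuple[float, float]:
    # Binary accuracy: 1 point if correct, 0 otherwise; wrong answers cost nothing.
    guess = p * 1.0 + (1.0 - p) * 0.0
    abstain = 0.0  # "I don't know" always scores zero
    return guess, abstain

def expected_score_with_penalty(p: float, penalty: float = 1.0) -> tuple[float, float]:
    # Alternative grading: confident wrong answers are penalised.
    guess = p * 1.0 + (1.0 - p) * -penalty
    abstain = 0.0
    return guess, abstain

if __name__ == "__main__":
    for p in (0.1, 0.3, 0.5, 0.9):
        acc_guess, acc_abstain = expected_score_accuracy_only(p)
        pen_guess, pen_abstain = expected_score_with_penalty(p)
        print(f"p={p:.1f}  accuracy-only: guess={acc_guess:+.2f} vs abstain={acc_abstain:+.2f}  "
              f"with penalty: guess={pen_guess:+.2f} vs abstain={pen_abstain:+.2f}")

Running the sketch shows that under accuracy-only grading the expected score for guessing is always at least as high as for abstaining, while under the penalty scheme guessing only pays off when the model is more likely right than wrong.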