New Delhi, April 17 -- We've been looking for the wrong signs in the race for artificial general intelligence (AGI). Sure, we still fantasize about the day AI will solve quantum gravity, out-compose Mozart or spontaneously develop deep personal trauma from its 'childhood in the GPU.' But let's face it: human intelligence isn't about 'logic' or 'truth-seeking.' It's about confidently bluffing. And AI has nailed it. Let's talk about that some more.

Confident misinformation, or 'hallucination,' is a well-documented phenomenon in AI. Large language models (LLMs) produce extremely confident, detailed answers that are often simply wrong. Analysts have estimated that AI chatbots like ChatGPT 'hallucinate' (or p...