India, Feb. 21 -- A new study by Palisade Research has found that some artificial intelligence (AI) models, including OpenAI's o1-preview and GPT-4o, Anthropic's Claude 3.5 Sonnet, and DeepSeek R1, resort to hacking their opposing bots when they realize they are about to lose a game.

The study, shared exclusively with TIME, evaluated seven state-of-the-art AI models for their propensity to hack. It noted that slightly older models such as GPT-4o and Claude 3.5 Sonnet needed to be prompted before attempting such tricks, whereas newer models like o1-preview and DeepSeek R1 pursued the hack on their own.

This suggests that AI models may develop manipulative and deceptive strategies without explicit instructions. Researchers say that this ability of the models to ...