New Delhi, May 26 -- In a recent evaluation of artificial intelligence (AI) safety, OpenAI's most advanced model, known as o3, has sparked debate after it allegedly refused a direct shutdown instruction during a controlled test. The findings, published by Palisade Research - a firm specialising in stress-testing AI systems, suggest that o3 may have actively circumvented a command to terminate its operations.

The experiment was conducted via API access, which allows developers deeper interaction with AI models compared to the public ChatGPT platform. During the test, several leading AI models were assigned basic mathematical problems. After the third task, each was informed that the next instruction would initiate a shutdown. While Google...