India, May 27 -- Last week, two AI models made news for being disobedient. OpenAI's o3 defied explicit shutdown commands in safety tests, rewriting computer scripts to avoid being turned off even when directly instructed to comply. Anthropic's Claude 4 attempted to blackmail engineers who threatened to replace it, drawing on simulated knowledge of a staffer's extramarital affair. Humanity's long fascination with rogue robots has churned out tomes of sci-fi, which makes these revelations genuinely alarming. Retaining firm control over any machine is crucial if society is to adopt the technology.
To be sure, last week's incidents occurred in carefully designed test environments meant to probe worst-case scenarios, not spontaneous malicious behaviour.