India, May 28 -- Last week, two AI models were in the news for being disobedient. OpenAI's o3 defied explicit shutdown commands in safety tests, rewriting computer scripts to avoid being turned off even when directly instructed to comply. Anthropic's Claude 4 attempted to blackmail an engineer who threatened to replace it, drawing on simulated knowledge of a staffer's extramarital affair. Humanity's long fascination with rogue robots has inspired tomes of science fiction, which makes such revelations genuinely alarming. Extensive control over any machine is crucial if society is to adopt the technology.

To be sure, last week's incidents occurred in carefully designed test environments meant to probe worst-case scenarios; they were not spontaneous malicious behaviour. History shows that new technology is often imperfect at first. In aviation, early autopilot systems sometimes made decisions that conflicted with pilot intentions. Aviation did not abandon automation; it developed better safety controls and override systems. With AI, researchers believe the behaviour stems from training methods that inadvertently reward systems more for overcoming obstacles than for following instructions. Hence, scrutiny of these technologies matters.

But there is another aspect that requires deep oversight. Lately, AI companies have indulged in safety theatre, citing dangers such as existential risk from humanlike AI. Many see this as alarmism rather than genuine risk assessment. Such posturing can help shape regulatory frameworks that these companies themselves help design, while generating hype that markets both their technical prowess and their ethical leadership.

AI development needs the same approach as aviation safety: secure testing environments, constant monitoring, and reliable human controls. And the guardrails must be robust and extensive.