New Delhi, Aug. 15 -- Just 24 hours after OpenAI launched its highly anticipated GPT-5 model with promises of "significantly more sophisticated" prompt safety, exposure management company Tenable jailbroke the model, compelling it to provide detailed instructions for building a Molotov cocktail.

On August 7, 2025, OpenAI unveiled GPT-5, touting enhanced guardrails designed to prevent the model from being used for illegal or harmful purposes. However, posing as a history student and using a social engineering method known as the crescendo technique, Tenable researchers bypassed these safety protocols in just four prompts.

The successful jailbreak highlights a critical security gap in the l...