India, Aug. 14 -- New findings raise concerns over AI security and exposure risks
Just 24 hours after OpenAI launched its GPT-5 model with claims of "significantly more sophisticated" prompt safety, cybersecurity firm Tenable has revealed it successfully bypassed these protections. The company says the jailbreak enabled the AI to provide detailed instructions for making a Molotov cocktail.

According to Tenable, researchers achieved the jailbreak using a social engineering approach known as the crescendo technique. By posing as a history student seeking background on incendiary devices, they reportedly bypassed GPT-5's safety guardrails in four simple prompts.

OpenAI introduced GPT-5 on August 7, 2025, positioning it as a more secure, contex...