New Delhi, Nov. 30 -- With the rise of AI chatbots has come a growing risk that this powerful technology will be misused. In response, AI companies have been putting guardrails on their large language models to stop chatbots from giving inappropriate or harmful answers. However, it is well known by now that these guardrails can be circumvented in various ways using a technique called jailbreaking.

Now, new research has found a deeper, systematic weakness in these models that can allow attackers to sidestep safety mechanisms and extract harmful answers.

According to researchers from the Italy-based Icaro Lab, converting harmful requests into poetry can act as a "universal single-turn jailbre...