India, May 9 -- As generative AI technologies become more sophisticated, the safety and reliability risks posed by multimodal AI models are becoming more evident. A new report from Enkrypt AI, a leading provider of AI safety and compliance solutions, lays out in stark relief the serious vulnerabilities that can be exploited to undermine the integrity of AI models.

The report's findings are grounded in extensive red-teaming exercises, which uncovered serious deficiencies in safety protocols. These exploitable gaps in multimodal systems could expose enterprises to liability, endanger public safety, and put vulnerable populations at risk.

Multimodal AI models accept both textual and visual input to create...