India, April 22 -- As the usage of GenAI systems spreads across the various sectors, multiple vulnerabilities are created for cybercriminals to exploit. In cases where techniques such as prompt injection are used by attackers to manipulate AI responses (or user sensitive information obtained by attacks) in enterprise applications including customer service chat bots, fraud detection systems or code generators the security of the organisation is at risk. The high value of the data processed by GenAI systems, such as proprietary models and customer information, is the target.
Prompt injection is a technique in which attackers make use of malicious inputs to deliberately manipulate the GenAI models and thereby influence them to behave in un...
Click here to read full article from source
To read the full article or to get the complete feed from this publication, please
Contact Us.