India, July 1 -- Cybersecurity researchers at Check Point have identified what appears to be the first known malware sample engineered specifically to manipulate AI-based detection tools using prompt injection.

Unlike traditional obfuscation or sandbox evasion tactics, this malware attempts to deceive the AI itself-by embedding instructions written in natural language directly into the code.

The sample, discovered in early June via VirusTotal, marks a pivotal moment in the cat-and-mouse game between attackers and defenders: AI is no longer just analysing malware. It's now being manipulated by it.

The code in question included known evasion tactics and a TOR client, suggesting it was more of a prototype than a fully weaponised threat. H...