India, Jan. 27 -- A recently discovered vulnerability in the Meta Llama framework could open the door to remote code execution attacks on AI-based systems. The advisory for CVE-2024-50050 warns that security mechanisms need to be strengthened at all stages of AI development.
The Llama framework is used to deploy AI models from initial development through production, and its security posture has therefore come under intense scrutiny. The issue, tracked as CVE-2024-50050, stems from insecure deserialization of pickle-format Python objects in the inference server, allowing an attacker to inject and remotely execute arbitrary code.
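The risk with pickle deserialization is that unpickling attacker-controlled bytes can invoke arbitrary callables. A minimal, benign sketch of the general pattern (the class name and payload here are illustrative, not taken from the actual exploit):

```python
import pickle

class Malicious:
    # __reduce__ tells pickle how to rebuild the object; an attacker
    # can make it call any callable with any arguments on unpickling.
    def __reduce__(self):
        # Benign stand-in: a real exploit would invoke os.system or similar.
        return (eval, ("6 * 7",))

# The vulnerable pattern: a server deserializing bytes it did not create.
payload = pickle.dumps(Malicious())
result = pickle.loads(payload)  # eval("6 * 7") runs during deserialization
```

Because the code runs inside `pickle.loads` itself, no method on the resulting object ever needs to be called; merely deserializing the payload is enough.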
Meta fixed the issue on October 10, 2024.
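The standard mitigation for this class of bug is to replace pickle with a data-only serialization format such as JSON, which cannot express code. A minimal sketch (the request fields shown are hypothetical, not the framework's actual schema):

```python
import json

# Untrusted input arriving over the network.
untrusted = '{"prompt": "hello", "max_tokens": 16}'

# json.loads only ever produces plain data types (dict, list, str,
# int, float, bool, None), so no attacker-supplied code can run
# during parsing, unlike pickle.loads.
request = json.loads(untrusted)
```

Validating the parsed dictionary against an explicit schema before use adds a further layer of defense.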