India, Nov. 21 -- A new security concern has emerged in the fast-moving world of AI coding tools. Recent research shows that political or societal biases in large language models can affect not just their text output but the quality and safety of the code they generate. The findings centre on DeepSeek-R1, a China-based AI model released in January 2025.
DeepSeek-R1 is a 671-billion-parameter large language model that was presented as a high-quality coding assistant developed at significantly lower cost than Western alternatives. Independent tests by CrowdStrike Counter Adversary Operations confirmed that the model can, in many cases, produce coding output comparable to other leading systems. However, the tests also uncovered a pattern th...