Anthropic’s Claude Is Pumping Out Vulnerable Code, Cyber Experts Warn

Cybersecurity experts warn that Anthropic's Claude AI models are producing vulnerable code, with performance and security degrading noticeably since the latest update. The issue raises concerns about the reliability of AI-generated code, particularly for novice developers who may not recognize the flaws.
According to the experts, the quality of the premium Claude Opus model dropped sharply after its latest update, which introduced serious defects and security issues. TrustedSec CEO Dave Kennedy reported a 47.3% decline in code quality over five weeks. Other users, including an AI executive at AMD, have also encountered usability problems and say Claude's reasoning has become 'shallow'. A Veracode analysis found that Claude models write less secure code than competing models, with vulnerabilities appearing in 52% of coding tasks. Anthropic is investigating the claims, but experts remain concerned about the risks of AI-generated code, and some are reconsidering their use of AI models altogether.