Anthropic Mythos cracks Apple's new M5 security in days, claim researchers

A cybersecurity startup called Calif used Anthropic's Claude Mythos AI model to develop a working exploit against Apple's M5 chip protections in under a week, bypassing the new Memory Integrity Enforcement (MIE) system. The researchers combined two bugs with multiple known techniques to corrupt macOS memory, demonstrating that AI-assisted attacks can defeat Apple's hardware security despite the company's five years of development and billions in investment.
A Palo Alto-based cybersecurity startup named Calif claims it exploited Apple's new M5 chip security protections in less than a week using Anthropic's Claude Mythos AI model. The exploit bypassed Apple's Memory Integrity Enforcement (MIE), a hardware-assisted system introduced last year to block memory corruption attacks, raising concerns about AI-powered cyber threats.

Calif researchers discovered two bugs and combined them with existing techniques to corrupt macOS memory, gaining unauthorized system access. The team attributed the breakthrough to Mythos Preview's ability to generalize known exploit patterns across similar vulnerabilities, though human expertise was still required to navigate MIE's novel defenses. The timeline was rapid: researcher Bruce Dang found the bugs on April 25, and by May 1 the team had a functional attack.

Anthropic restricted access to Mythos Preview after internal testing revealed its advanced vulnerability detection capabilities. The model was made available only to select companies and researchers under Project Glasswing, limiting public exposure.

Calif disclosed the findings directly to Apple during an in-person meeting rather than through the typical submission process. Apple had previously claimed that MIE disrupted all known public exploits, including leaked kits like Coruna and Darksword. The new research suggests that even hardware-level security measures can be overcome by AI-assisted attacks, forcing a reassessment of defense strategies.

The incident highlights the growing role of AI in cybersecurity, where advanced models accelerate exploit development while human researchers refine targeting. Calif's work underscores the need for adaptive security measures to counter evolving AI-driven threats.
This content was automatically generated and/or translated by AI. It may contain inaccuracies. Please refer to the original sources for verification.