How Google may have confirmed Anthropic’s Mythos fears, sending shock waves through banks and financial institutions

Google’s Threat Intelligence Group (GTIG) confirmed that hackers used a Large Language Model (LLM) to discover and exploit a previously unknown zero-day flaw capable of bypassing two-factor authentication (2FA), raising alarms about AI-powered cyber threats. The revelation aligns with Anthropic’s recent warnings about its Mythos model, which prompted the company to restrict access to a closed group of partners and triggered urgent White House discussions about potential risks to critical infrastructure.
Google’s Threat Intelligence Group (GTIG) disclosed that a criminal group used artificial intelligence to identify and weaponize a previously unknown zero-day vulnerability, allowing attackers to bypass two-factor authentication. The flaw, discovered with the help of an LLM, posed a significant risk to banks and financial institutions, though Google’s proactive measures disrupted the planned mass exploitation. The company said it had 'high confidence' in the threat but did not attribute the attack to a specific group or confirm whether its own Gemini model was involved.

The findings align with Anthropic’s recent warnings about its Mythos model, whose release the AI startup delayed over concerns that it could exploit decades-old vulnerabilities in critical infrastructure. Following Anthropic’s disclosure, the White House convened emergency meetings, and under Project Glasswing the model was restricted to a select group of partners, including JPMorgan Chase, Apple, and CrowdStrike.

Google’s report also highlights a growing trend of state-sponsored groups linked to China and North Korea using AI to accelerate malware development and cyberattacks. Unlike traditional hacking methods, AI-driven exploits let attackers operate at unprecedented speed, increasing the risk of data extortion and ransomware attacks before security patches can be deployed.

The incident underscores the urgent need for stronger cybersecurity measures against AI-powered threats. Experts, including John Hultquist of Google’s threat intelligence team, warn that the era of AI-driven vulnerability exploitation has already begun and demands immediate attention from governments, financial institutions, and tech companies. While Google disrupted this attack, the discovery is a stark reminder of an evolving threat landscape.
The use of AI in cyber warfare and criminal hacking signals a shift toward faster, more sophisticated attack methods, requiring continuous innovation in defense strategies.
This content was automatically generated and/or translated by AI. It may contain inaccuracies. Please refer to the original sources for verification.