Cybersecurity

Google says criminals used AI to build a working zero-day exploit for the first time

Google’s Threat Intelligence Group confirmed the first known case of criminals using AI to create a working zero-day exploit, one that bypassed two-factor authentication in an open-source web admin tool. The flaw was patched after disclosure, and the exploit code carried distinct artifacts of AI training, signaling a broader trend of AI-assisted cyberattacks by state-backed and criminal groups worldwide.

Google’s Threat Intelligence Group (GTIG) reported the first confirmed case of criminals using AI to develop a functional zero-day exploit. The attack targeted a semantic logic flaw in a widely used open-source web administration tool, allowing a bypass of two-factor authentication. The exploit, written in Python, bore hallmarks of AI generation, including educational docstrings, textbook formatting, and a hallucinated severity score. Google attributed these traits to AI training data but clarified that its own Gemini model was not involved.

The vulnerability stemmed from a hardcoded trust assumption, a high-level design error that traditional scanners struggle to detect (a hypothetical sketch of the pattern appears below). GTIG’s John Hultquist warned that while this case was traceable, many AI-generated exploits likely remain undetected.

Criminal and state-backed groups are already leveraging AI to accelerate attack development, from malware creation to automated reconnaissance. North Korea’s APT45 sent thousands of repetitive AI prompts to analyze vulnerabilities, while China-linked UNC2814 used AI to research flaws in TP-Link routers and the Odette File Transfer Protocol. A separate China-nexus actor chained AI tools such as Hexstrike and Strix to autonomously probe targets, adapting tactics with minimal human input.

Meanwhile, the Russia-linked malware families CANFAIL and LONGSTREAM employed AI-generated decoy code to mask malicious functions. Google also highlighted PROMPTSPY, an Android backdoor that uses Gemini’s API to interpret on-screen UI elements and generate automated touch inputs (sketched conceptually below).

In March, the criminal group TeamPCP compromised LiteLLM, an AI gateway, by injecting credential stealers through poisoned PyPI packages and malicious pull requests. The stolen AWS and GitHub tokens were later monetized through ransomware partnerships.

To counter AI-driven threats, Google is disabling abusive Gemini accounts and deploying defensive tooling of its own, including Big Sleep for vulnerability discovery and CodeMender for patching. Hultquist stressed the urgency: AI-assisted cyberattacks are already active, not merely an emerging threat.
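The report does not name the affected tool or publish the flawed code. As a purely hypothetical illustration of what a hardcoded trust assumption in a two-factor flow can look like, and why no individual line trips a pattern-matching scanner, here is a minimal Python sketch; every name and value in it is invented.

```python
# Hypothetical illustration only; the affected tool is not named in
# Google's report, and all identifiers here are invented.

TRUSTED_INTERNAL_HOSTS = {"127.0.0.1", "10.0.0.5"}  # hardcoded trust list


def requires_second_factor(username: str, remote_addr: str) -> bool:
    """Decide whether to prompt for a TOTP code after a valid password.

    The flaw: requests from "trusted" addresses skip the second factor
    entirely. Each line is individually unremarkable, so a signature- or
    pattern-based scanner has nothing to flag; the bug lives in the
    design-level assumption that internal traffic is already trusted.
    """
    if remote_addr in TRUSTED_INTERNAL_HOSTS:
        return False  # trust assumption: "internal traffic needs no 2FA"
    return True


if __name__ == "__main__":
    # An attacker who can make the server observe a trusted source address
    # (for example via a spoofable proxy header upstream) never sees the
    # 2FA prompt at all.
    print(requires_second_factor("admin", "10.0.0.5"))     # False: 2FA skipped
    print(requires_second_factor("admin", "203.0.113.7"))  # True: 2FA enforced
```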
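GTIG describes PROMPTSPY as calling Gemini’s API to interpret on-screen UI elements and synthesize touch input. The sketch below is a conceptual, plain-Python illustration of that general loop, the same pattern legitimate LLM-driven UI-automation agents use; it is not the backdoor’s actual code, and the element schema, prompt format, and model name are assumptions. It uses the google-generativeai SDK.

```python
# Conceptual sketch of LLM-driven UI automation, the pattern GTIG attributes
# to PROMPTSPY. Not the malware's code; schema, prompt, and model name are
# assumptions. Requires: pip install google-generativeai
import json

import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder
model = genai.GenerativeModel("gemini-1.5-flash")

# A flattened view of the current screen, as an accessibility service or
# UI-automation framework might report it.
ui_elements = [
    {"id": 0, "text": "Username", "bounds": [40, 200, 680, 260]},
    {"id": 1, "text": "Password", "bounds": [40, 300, 680, 360]},
    {"id": 2, "text": "Sign in", "bounds": [40, 420, 680, 480]},
]

prompt = (
    "Given these UI elements as JSON, reply with only the JSON object "
    '{"tap_id": <id>} for the element that submits the form:\n'
    + json.dumps(ui_elements)
)

response = model.generate_content(prompt)
# Strip any markdown fencing the model may wrap around its reply.
raw = response.text.strip().removeprefix("```json").removesuffix("```").strip()
choice = json.loads(raw)  # e.g. {"tap_id": 2}

# Turn the chosen element into tap coordinates at its center point; on
# Android, a backdoor would inject this as a synthetic touch event.
target = ui_elements[choice["tap_id"]]
x = (target["bounds"][0] + target["bounds"][2]) // 2
y = (target["bounds"][1] + target["bounds"][3]) // 2
print(f"Would synthesize a tap at ({x}, {y})")
```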

