Cybersecurity

Google Says Criminal Hackers Used A.I. to Find a Major Software Flaw


Google revealed that criminal hackers used artificial intelligence to discover and attempt to exploit a previously unknown zero-day vulnerability in a popular web-based system administration tool, marking the first confirmed case of AI-assisted discovery of a software flaw in the wild. The flaw, detected by Google Threat Intelligence Group, could have bypassed two-factor authentication but required valid credentials to succeed; Google notified the vendor, which issued a patch before the attack could cause damage.

Google’s Threat Intelligence Group has identified the first confirmed case of hackers using AI to uncover a previously unknown software vulnerability. The attack targeted a popular open-source, web-based system administration tool, where the flaw could have allowed an attacker to bypass two-factor authentication if paired with valid credentials. Google stated with high confidence that an AI model was used to discover and weaponize the zero-day vulnerability, which was patched before the attack could succeed. The company did not specify the exact timing, targets, or AI platform involved, but confirmed it was not its own Gemini chatbot.

The discovery underscores growing concerns about AI’s role in cybersecurity. Advanced models such as Anthropic’s Mythos, released last month, have demonstrated the ability to identify thousands of zero-day flaws across major operating systems and browsers; Mythos was restricted to select U.S. and British firms and agencies because of its potential risks. The incident also follows a 2025 report in which state-sponsored Chinese hackers used AI to infiltrate the systems of 30 companies and government agencies worldwide.

The latest attack, executed by “prominent cybercrime threat actors” via a Python script, highlights the escalating threat of AI-powered cybercrime. Google notified the affected software maker promptly, enabling a patch before further exploitation. Security experts warn this is only the beginning, with AI models rapidly changing the cybersecurity landscape. The case feeds into broader industry and government debates on regulating AI, particularly as tools become more capable of identifying and exploiting vulnerabilities autonomously. Google’s findings signal a shift from theoretical fears to real-world threats in digital security.

This content was automatically generated and/or translated by AI. It may contain inaccuracies. Please refer to the original sources for verification.
