
Google has revealed that its cybersecurity teams likely disrupted a major hacking operation that relied on artificial intelligence tools to identify and exploit previously unknown software vulnerabilities at scale.
According to a new report released by Google’s Threat Intelligence Group, also known as GTIG, cybercriminals are now leveraging increasingly sophisticated AI systems to accelerate the discovery of zero-day vulnerabilities — hidden software flaws that developers and security teams are unaware of until attackers strike.
The report highlights what Google describes as a potentially serious “mass vulnerability exploitation operation,” in which hackers allegedly used an advanced AI model to uncover security weaknesses that could be used to bypass two-factor authentication systems. Google said it has “high confidence” that the operation was actively being prepared before the company’s security teams intervened.
While Google did not publicly identify the hacking group involved, the company suggested that its early detection efforts may have prevented a broader wave of cyberattacks targeting businesses, institutions, and critical digital infrastructure.
“The criminal threat actor planned to use it in a mass exploitation event, but our proactive counter discovery may have prevented its use,” the company stated in its report.
Google also clarified that its own AI platform, Google Gemini, was not involved in the operation. Instead, investigators believe hackers relied on external AI systems such as OpenClaw, an emerging model reportedly being used within underground cybersecurity and exploit-development communities.
The findings mark another major warning sign for the rapidly evolving AI arms race unfolding across the cybersecurity industry. While technology companies continue investing billions of dollars into AI-powered defense systems, cybercriminal groups are simultaneously adopting similar technologies to automate attacks, identify vulnerabilities faster, and develop increasingly advanced malware.
Security researchers say AI-assisted hacking has moved far beyond theory. Modern large language models can now analyze source code, identify weak points in software architecture, generate exploit scripts, and even help attackers automate phishing campaigns or credential theft operations. The concern among cybersecurity experts is that these tools dramatically lower the technical barriers traditionally required to launch sophisticated attacks.
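To make that capability concrete from the defensive side, here is a minimal sketch of LLM-assisted code review: a model is prompted to flag weaknesses in a short snippet. It uses the OpenAI Python SDK purely for illustration; the model name, prompt, and deliberately vulnerable snippet are assumptions of this sketch, not details from Google’s report.

```python
# Minimal sketch of LLM-assisted source-code review, the defensive mirror
# of the attacker capability described above. The model name, prompt, and
# snippet are illustrative assumptions, not anything cited in the report.

from openai import OpenAI

SNIPPET = """
def login(db, username):
    return db.execute(f"SELECT * FROM users WHERE name = '{username}'")
"""

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any code-capable model would do
    messages=[{
        "role": "user",
        "content": "List security vulnerabilities in this code and a "
                   "one-line fix for each:\n" + SNIPPET,
    }],
)
print(resp.choices[0].message.content)  # should flag the SQL injection
```

The same prompting pattern, pointed at large codebases and paired with automated exploit generation, is precisely what worries researchers when it falls into attackers’ hands.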
GTIG’s report pointed to multiple real-world examples of hackers already using AI systems to support cyber operations. Threat groups linked to both China and North Korea were highlighted as showing “significant interest” in using artificial intelligence for vulnerability discovery and offensive cyber capabilities.
The rise of AI-enhanced cyber threats is also forcing major technology firms to rethink how powerful models are released to the public.
Earlier this year, Anthropic delayed the launch of its highly anticipated Mythos AI model after internal testing raised concerns that the system could help malicious actors identify and exploit decades-old software vulnerabilities at unprecedented speed. The decision reportedly triggered urgent discussions across the tech sector and even prompted meetings at the White House involving business leaders, AI researchers, and national security officials.
Anthropic has since limited access to the model, allowing only select organizations to test it under controlled conditions. Those participating in the testing program include major cybersecurity and technology firms such as Apple, CrowdStrike, Microsoft, and Palo Alto Networks.
Meanwhile, OpenAI recently introduced GPT-5.5-Cyber, a variant of its latest AI model tailored specifically for cybersecurity applications. The system is being rolled out in a limited preview to vetted security teams and researchers, an attempt to balance innovation with safety concerns.
Industry analysts say the situation highlights a growing paradox within artificial intelligence development. The same AI tools capable of defending networks, detecting malware, and strengthening digital infrastructure can also be repurposed by cybercriminals to launch faster and more scalable attacks.
Global cybercrime damages are already projected to run into the trillions of dollars annually over the coming years, with ransomware, data breaches, infrastructure attacks, and digital espionage all continuing to rise. AI is expected to accelerate both defensive and offensive capabilities dramatically, creating what many experts describe as a new era of automated cyber warfare.
The cybersecurity market itself has exploded in response. Companies worldwide are increasing investments in AI-powered defense systems, cloud security, endpoint monitoring, and automated threat detection platforms. Major enterprise security providers are now racing to integrate generative AI directly into their products as organizations seek faster ways to identify threats and patch vulnerabilities before attackers can exploit them.
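As a loose illustration of where generative AI slots into such products, the toy sketch below shows the deterministic half of a detection pipeline flagging a burst of failed logins; in the products described above, the flagged context is what an LLM layer would then summarize for analysts. The log format and threshold are invented for the demo.

```python
# Toy sketch of an automated threat-detection step of the kind security
# vendors are wrapping generative AI around: a deterministic detector
# flags suspicious bursts, and the flagged context is what an LLM layer
# would later summarize for analysts. Log format is invented for the demo.

from collections import Counter

FAILED_LOGIN_THRESHOLD = 5  # assumption: tune per environment

def detect_bruteforce(log_lines: list[str]) -> list[str]:
    """Return source IPs with at least FAILED_LOGIN_THRESHOLD failures."""
    failures = Counter(
        line.split()[0]  # assumed format: "<ip> FAILED_LOGIN <user>"
        for line in log_lines
        if "FAILED_LOGIN" in line
    )
    return [ip for ip, n in failures.items() if n >= FAILED_LOGIN_THRESHOLD]

logs = ["10.0.0.7 FAILED_LOGIN alice"] * 6 + ["10.0.0.9 FAILED_LOGIN bob"]
print(detect_bruteforce(logs))  # ['10.0.0.7']
```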
Still, Google’s latest report suggests that hackers may already be moving just as quickly.
For businesses, governments, and institutions managing sensitive digital infrastructure, the warning is becoming increasingly clear: artificial intelligence is no longer just transforming productivity and software development. It is also reshaping the future of cybercrime itself.