Flashspoter - Google has for the first time revealed concrete evidence that artificial intelligence is being used to design cyberattacks. The Google Threat Intelligence Group (GTIG) found a zero-day exploit developed with the help of AI and stopped it before it could be exploited at scale. The finding overturns earlier speculation that AI is still too primitive to be used in advanced cybercrime.
The vulnerability Google found is unusual in that it targets a lingering inconsistency in two-factor authentication (2FA) logic. Unlike typical technical flaws, such bugs stem from trust assumptions encoded directly by the platform's developers. AI-assisted attackers can identify these weak points automatically, without manual probing.
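To make the idea concrete, here is a minimal, hypothetical sketch of what a trust-assumption bug in 2FA logic can look like. Every name in it is illustrative; Google has not published details of the actual flaw.

```python
# A minimal, hypothetical sketch of a 2FA trust-assumption flaw. All names
# and logic are illustrative; Google has not published the actual bug.

import hmac

OTP_BACKEND = {"alice": "492817"}  # stand-in for a real OTP service

def check_otp(user: str, otp: str) -> bool:
    expected = OTP_BACKEND.get(user, "")
    return hmac.compare_digest(expected, otp)  # constant-time comparison

def verify_login(user: str, password_ok: bool, otp: str | None,
                 headers: dict) -> bool:
    if not password_ok:
        return False
    # Flawed trust assumption: the developer assumes this header can only be
    # set by an internal reverse proxy, so 2FA is skipped when it appears.
    # Anyone who reaches the app server directly can forge the header and
    # bypass 2FA entirely. Being a logic bug rather than a memory-safety
    # bug, it slips past conventional scanners.
    if headers.get("X-Internal-Auth") == "trusted":
        return True
    return otp is not None and check_otp(user, otp)

# The bypass in action: correct password, no OTP, forged header.
print(verify_login("alice", True, None, {"X-Internal-Auth": "trusted"}))  # True
```

Nothing here would trip a memory scanner or a fuzzer; the code is "correct" in every mechanical sense, which is exactly why such bugs depend on reasoning about intent.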
Evidence of AI involvement comes from analysis of the Python scripts used in the exploit. Google researchers found telltale indicators such as implausible CVSS scores, a classic form of AI hallucination, as well as highly structured comments resembling the formatting of large language model training data. Such traits are rarely produced by human hackers, who tend to document code far more sloppily.
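As a simple illustration of one such indicator, the check below flags CVSS scores that the real scoring rubric could never produce; the sample values are invented for this sketch.

```python
# One forensic indicator from the report, made concrete: CVSS v3.1 scores
# run from 0.0 to 10.0 in steps of 0.1, so values like 9.87 or 11.2 suggest
# the number was hallucinated rather than computed. Samples are invented.

def cvss_plausible(score: float) -> bool:
    in_range = 0.0 <= score <= 10.0
    one_decimal = round(score, 1) == score  # rubric yields one decimal place
    return in_range and one_decimal

for s in (7.5, 9.87, 11.2):
    print(s, "plausible" if cvss_plausible(s) else "implausible")
```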
The target of the attack was an open-source system administration tool whose name Google has withheld. The platform is widely used by mid-sized technology companies to manage their infrastructure. Google notified the developer, who patched the flaw immediately, before any victims were reported.
What makes this case special is the planned scale of the attack, which GTIG describes as a "mass exploitation event". The perpetrators did not intend to hit a single victim; they planned to use the flaw to break into thousands of systems at once. Had it succeeded, it could have been one of the largest AI-facilitated cyberattacks in history.
Google insists that its own AI model, Gemini, was not involved in developing the exploit, while the research team has high confidence that AI models from other developers were used. The statement matters because it dispels any impression that Google itself was unknowingly arming cybercriminals.
Google does not explicitly name the perpetrators, but the reports hint at actors linked to several countries that have previously shown strong interest in using AI for offensive cyber operations. Google has chosen not to disclose their identities publicly.
Google's discovery is changing how we think about AI threats. Public discussion has mostly centered on AI replacing human jobs or spreading disinformation. Now we must accept that AI is already an offensive tool in the professional cyberattack cycle.
What is most worrying is not the sophistication of the exploit itself but the speed with which AI finds flaws that escape human scrutiny. A security researcher might spend weeks hunting for a logic bug like this; with AI, the same work could be done in a few hours.
The AI model used in this case is likely a variant specially trained to read source code and look for patterns of logical inconsistency, unlike general-purpose generative models that merely produce text. In other words, we are entering an era in which cybersecurity-specific AI, for both defense and attack, is becoming an easily accessible commodity.
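Real models rely on learned representations of code, but the underlying idea can be approximated mechanically. The toy scanner below, built on Python's ast module, flags authentication-style functions with a success path that never consults a second factor; it is an analogy, not a reconstruction of the actual tooling.

```python
# A toy analogue of automated logic-flaw hunting, purely illustrative.

import ast

SOURCE = '''
def verify_login(user, password_ok, otp, headers):
    if not password_ok:
        return False
    if headers.get("X-Internal-Auth") == "trusted":
        return True
    return check_otp(user, otp)
'''

def find_suspicious_auth(source: str) -> list[str]:
    findings = []
    for func in ast.walk(ast.parse(source)):
        if not isinstance(func, ast.FunctionDef):
            continue
        if "login" not in func.name and "auth" not in func.name:
            continue
        for node in ast.walk(func):
            # A bare `return True` inside an auth function is a success path
            # that does not depend on any credential check on that branch.
            if (isinstance(node, ast.Return)
                    and isinstance(node.value, ast.Constant)
                    and node.value.value is True):
                findings.append(f"{func.name}, line {node.lineno}: "
                                "unconditional success; possible 2FA bypass")
    return findings

for finding in find_suspicious_auth(SOURCE):
    print(finding)
```

A hand-written rule like this catches only the one pattern it encodes; the significance of an AI-based hunter is that it generalizes across patterns nobody thought to write down.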
John Hultquist, chief analyst at GTIG, called the finding the "tip of the iceberg" in an interview with foreign media. Whether or not the remark was meant for direct quotation, it reflects internal concern among security researchers, who suspect that many similar exploits may already be occurring undetected.
On the optimistic side, Google's report also emphasizes that AI is a double-edged sword. The company uses AI to proactively detect threats, which is exactly how this case was caught. Without AI-based analysis tools, the zero-day flaw might only have come to light after thousands of systems had been breached.
Google is not the only player in this arena. Anthropic and several other AI companies are reportedly using their own models, such as Mythos Preview under Project Glasswing, to hunt for high-severity security gaps. The strategy is known as "offensive defense": AI is leveraged to find and fix vulnerabilities before the bad actors do.
A striking point in the report is the recognition that AI not only helps attackers find flaws but also helps them hide their traces. The Python scripts in this exploit show none of the typical signs of amateur code; their structure and comments are as tidy as official documentation, which makes manual detection much harder.
The findings are a wake-up call for cybersecurity professionals: signature-based antivirus, the traditional detection method, is no longer enough. Going forward, defenses must be adaptive and AI-powered, capable of recognizing anomalous patterns in code. Google has already begun integrating such capabilities into its cloud services in response to similar threats.
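As a rough sketch of what "recognizing anomalous patterns in code" can mean in practice, the snippet below scores a script's stylistic features, such as comment density and identifier length, against a baseline of typical human-written code. Real defenses use learned models; the feature set and baseline numbers here are invented for illustration.

```python
# A rough sketch of stylistic anomaly scoring; features and baseline
# numbers are invented for illustration, not taken from any real system.

import ast
import io
import statistics
import tokenize

def style_features(source: str) -> dict[str, float]:
    tokens = tokenize.generate_tokens(io.StringIO(source).readline)
    comments = sum(1 for tok in tokens if tok.type == tokenize.COMMENT)
    lines = max(source.count("\n"), 1)
    names = [n.id for n in ast.walk(ast.parse(source)) if isinstance(n, ast.Name)]
    avg_name = statistics.mean(len(n) for n in names) if names else 0.0
    return {"comment_ratio": comments / lines, "avg_name_len": avg_name}

# Invented baseline: (mean, stdev) of each feature over ordinary human code.
BASELINE = {"comment_ratio": (0.10, 0.08), "avg_name_len": (7.0, 3.0)}

def anomaly_score(source: str) -> float:
    feats = style_features(source)
    # Sum of absolute z-scores: a large value means the script's style sits
    # far from the human baseline, e.g. unusually dense, uniform comments.
    return sum(abs(feats[k] - mu) / sd for k, (mu, sd) in BASELINE.items())

suspect = "# step 1: initialize\n# step 2: connect\nx = 1\n"
print(round(anomaly_score(suspect), 2))  # well above a typical human script
```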
This first case of a successfully stopped AI-assisted zero-day exploit is a landmark moment in cybersecurity history, not because the attack was the most sophisticated, but because it is the first documented evidence that AI has left the laboratory and entered the hands of professional criminals. Going forward, the arms race between offensive and defensive AI will determine who rules the digital space.
Sources:
The Verge – “Google says it stopped an AI-powered zero-day exploit for the first time”
Engadget – “Google announces its first-ever discovery of a zero-day exploit made with AI”
