The New Battlefield: How State Hackers Are Weaponizing Gemini AI

Published February 19th, 2026 by Bayonseo

In the hands of state-sponsored hackers, artificial intelligence has evolved from a defensive tool into a weapon. Advanced persistent threat (APT) groups from North Korea, China, Iran, and other countries are aggressively incorporating generative AI models like Gemini into every stage of their cyber operations, from initial reconnaissance to automated exploit development, according to a recent report from Google's Threat Intelligence Group (GTIG).

The findings show that adversaries are not merely experimenting with AI but are actively using it to improve the speed, quality, and scale of their attacks, marking a sharp escalation in the arms race between attackers and defenders.


Reconnaissance and Target Profiling at Machine Speed

One of the most concerning use cases involves using AI to supercharge the earliest stages of an attack. Google observed the North Korea-linked group UNC2970 (an operation publicly associated with the Lazarus Group) using Gemini to conduct open-source intelligence (OSINT) gathering and profile high-value targets. The hackers used the AI to synthesize information about major cybersecurity and defense companies, mapping specific technical job roles and even salary data. This lets them craft highly convincing, tailored phishing personas, such as fake corporate recruiters, to identify soft targets for initial compromise.

This activity blurs the line between malicious reconnaissance and ordinary professional research. Tasks that once took a human analyst days or weeks can now be completed in minutes, giving attackers a serious head start in campaign planning.


Beyond Research: AI-Powered Code and Exploits

The misuse goes well beyond research. Several state-sponsored groups have used Gemini to aggressively sharpen their offensive capabilities:

  • APT41 (China) used the AI to debug and troubleshoot exploit code, drawing on explanations gleaned from the documentation of open-source tools.
  • UNC795 (China) used Gemini to research and build custom web shells and vulnerability scanners for PHP servers.
  • APT42 (Iran) studied proof-of-concept exploits for known vulnerabilities such as CVE-2025-8088, built a SIM card management system in Rust, and wrote a Python-based Google Maps scraper.

In arguably the most inventive twist, Google discovered a malware family dubbed HONESTCUE that abuses Gemini's API in an entirely new way. Rather than merely using AI to write its code ahead of time, the malware calls the API at runtime to generate new functionality on the fly: Gemini returns C# source code, which the malware then compiles and executes in memory. This fileless, self-updating approach leaves no evidence on disk, making the malware extremely difficult to detect with conventional techniques.
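
To make the mechanics concrete, here is a minimal, deliberately benign sketch of the pattern GTIG describes: source code arrives as data and runs entirely in memory, never touching disk. Python's built-in compile() and exec() stand in for the C# toolchain HONESTCUE reportedly uses, and the hard-coded string stands in for an API response; nothing here is taken from the actual malware.

    # A string of source code, standing in for text returned by an LLM API.
    generated_source = "def greet(name):\n    return f'Hello, {name}!'\n"

    # Compile the received text to bytecode entirely in memory...
    code_object = compile(generated_source, "<in-memory>", "exec")

    # ...then execute it. No file is ever written, so file-based antivirus
    # scanning never gets a chance to inspect the payload.
    namespace = {}
    exec(code_object, namespace)

    print(namespace["greet"]("analyst"))  # Hello, analyst!

Because the generated logic only ever exists in memory, detection has to shift from scanning files to watching behavior, for example flagging processes that call LLM APIs when they have no business doing so.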


Model Extraction: When AI Itself Is the Target

The threat is not one-sided: attackers are also targeting the AI models themselves. Google thwarted a massive model extraction attack in which hackers systematically queried Gemini with more than 100,000 prompts in non-English languages, attempting to replicate its underlying reasoning capabilities. The goal is to train a substitute model that imitates the target's behavior, effectively stealing the intellectual property embedded in the AI's responses.

As one security researcher put it, "Behavior is the model." Because every query-response pair can serve as a training example for a replica, defenders are being forced to rethink what "protecting the model" actually means.
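
Detecting extraction attempts is largely a behavioral problem: no single query is malicious, but the aggregate pattern is. As a rough sketch (the per-key sliding window, the threshold, and the function names are assumptions made for illustration, not anything Google has described), a defender might flag clients whose query volume far exceeds interactive use:

    from collections import defaultdict, deque
    import time

    # Hypothetical sliding-window detector: flag API keys whose query volume
    # looks like systematic harvesting rather than interactive use. The
    # window size and threshold are illustrative assumptions.
    WINDOW_SECONDS = 3600
    MAX_QUERIES_PER_WINDOW = 500

    recent_queries = defaultdict(deque)  # api_key -> recent query timestamps

    def record_query(api_key, now=None):
        """Record one query; return True if the key now looks suspicious."""
        now = time.time() if now is None else now
        window = recent_queries[api_key]
        window.append(now)
        while window and now - window[0] > WINDOW_SECONDS:
            window.popleft()  # drop timestamps older than the window
        return len(window) > MAX_QUERIES_PER_WINDOW

    # A client replaying thousands of prompts trips the detector quickly.
    for i in range(600):
        suspicious = record_query("key-123", now=1000.0 + i)
    print(suspicious)  # True: 600 queries in one hour is not interactive use

A real deployment would layer in more signals, such as prompt diversity or language distribution, but the principle is the same: watch the behavior, not the individual request.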


Defending in the AI-Powered Era

These developments show how AI is tipping the scales in favor of attackers who adopt it quickly. They are using it to develop faster than ever, automate time-consuming tasks, and overcome language barriers. For defenders, the message is clear: fighting AI with AI is no longer optional. Defensive strategies must evolve to incorporate behavioral analysis, AI-powered monitoring, and a thorough understanding of how these capabilities can be abused.
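
As one illustration of that principle, the same generative models can be pointed back at the problem. The sketch below uses the google-generativeai Python SDK to triage a suspicious email for phishing traits; the prompt wording, the model name, and the one-word verdict format are assumptions chosen for the example, not a product recipe:

    import os
    import google.generativeai as genai

    # Sketch of "AI to fight AI": ask a model to triage a suspicious email.
    genai.configure(api_key=os.environ["GEMINI_API_KEY"])
    model = genai.GenerativeModel("gemini-1.5-flash")

    def triage_email(subject, body):
        prompt = (
            "You are a security analyst. Assess whether the email below is "
            "likely phishing (tailored lure, recruiter impersonation, "
            "urgency). Answer PHISHING or BENIGN, then a one-line reason.\n\n"
            f"Subject: {subject}\n\n{body}"
        )
        return model.generate_content(prompt).text

    print(triage_email(
        "Exciting opportunity at a leading defense contractor",
        "I came across your profile and think you would be perfect for a "
        "senior role. Please open the attached job description to apply.",
    ))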

At Bayon Technologies Group, we help businesses navigate this new and complex threat landscape. By deploying AI-powered defensive solutions and providing security awareness training that covers AI-generated phishing and social engineering, we make sure your organization is ready for the next wave of cyberthreats. Join us in building a strong defense for the AI era.
