Google’s Threat Intelligence Group has found that state-sponsored hackers from North Korea, Iran, China, and Russia are using artificial intelligence to enhance their cyberattack operations. The findings mark a significant shift in the cybersecurity landscape: adversaries are moving beyond simple productivity gains toward sophisticated AI-enabled threats.
Revolutionary Self-Modifying Malware Discovered
Researchers have identified two new malware strains, PromptFlux and PromptSteal, that use large language models to change their behavior mid-attack. PromptFlux queries Google’s Gemini AI for code obfuscation and evasion techniques, essentially rewriting itself to avoid detection. One variant even instructs the AI to regenerate the malware’s entire source code every hour, making it far harder for signature-based security tools to catch.
Meanwhile, Russian military hackers have used PromptSteal in cyberattacks against Ukrainian targets. This marks the first time security experts have seen malware actively using AI during execution to adapt and evade detection systems.
How State-Sponsored Hackers Are Using AI
The report shows government-backed attackers using AI tools across multiple phases of the attack cycle, including coding tasks, payload development, target reconnaissance, vulnerability research, and post-compromise activities such as defense evasion.
Iranian hackers emerged as the heaviest users of Gemini AI, with one group accounting for more than 30% of total usage by Iranian threat actors. They leveraged the technology for crafting phishing campaigns and conducting reconnaissance on defense experts and organizations.
Chinese state-sponsored hackers used the AI assistant for reconnaissance, troubleshooting code, and researching post-exploitation activities such as lateral movement and privilege escalation. North Korean hackers employed it to research potential infrastructure, develop payloads, and even draft cover letters for placing clandestine IT workers in Western tech companies.
Underground AI Tools Market Flourishes
Threat actors are also turning to underground digital markets that sell AI tools for phishing, malware creation, and vulnerability research. This corner of the cybercrime economy has matured significantly, with numerous multifunctional offerings designed to support every phase of an attack.
What This Means for Cybersecurity
Despite these concerning developments, Google emphasized that threat actors are, for now, primarily gaining productivity rather than developing breakthrough capabilities. Security experts warn, however, that as AI technology advances, these experimental techniques could evolve into more dangerous and effective attack methods. Google has committed to continuously hardening its AI models against misuse and is actively deploying defenses to counter these threats.