Threat actors, including those with ties to North Korea, are using AI-enabled malware that rewrites itself in real time to target cryptocurrency users, according to a warning issued this week by Google.
“Threat actors associated with the Democratic People’s Republic of Korea (DPRK) continue to misuse generative AI tools to support operations across the stages of the attack lifecycle, aligned with their efforts to target cryptocurrency and provide financial support to the regime,” Google Threat Intelligence Group wrote in a recent report.
AI-powered malware poses new risks to crypto users
Google has tracked at least five distinct malware families that can "dynamically generate malicious scripts" and "obfuscate their own code to evade detection," using large language models such as Gemini and Qwen2.5-Coder during execution.
AI-enabled malware marks a new frontier in cyberattacks and a major escalation from previous approaches, in which malicious functions were typically hardcoded directly into the malware itself.
These new malware strains can essentially rewrite and adapt their code on the fly, making them significantly harder to detect and mitigate with traditional security tools.
Google specifically highlighted two malware families, PROMPTFLUX and PROMPTSTEAL, which integrate large language models directly into their operations to regenerate code, evade antivirus software, and execute system-level commands in real time.
PROMPTFLUX is an experimental dropper that uses Gemini’s API to continually rewrite its VBScript code, allowing it to refresh its obfuscation tactics and slip past security tools.
PROMPTSTEAL, by contrast, is a data miner that leverages the Qwen model hosted on Hugging Face to generate Windows commands on demand for collecting files and system information.
PROMPTSTEAL has been directly associated with Russia’s APT28 group and has already been deployed in live operations.
Crypto users are also at risk as the North Korea–linked group UNC1069, also known as Masan, has been using Gemini “to research cryptocurrency concepts, and perform research and reconnaissance related to the location of users’ cryptocurrency wallet application data.”
According to Google, the group went further by crafting multilingual phishing messages and attempting to develop code that impersonated software updates in order to steal credentials and extract digital assets.
Threat actors, including DPRK-linked attackers, have also used AI-powered tools to generate deepfake images and videos impersonating individuals in the cryptocurrency industry as part of social engineering campaigns aimed at distributing malware and gaining access to target systems.
Google said it had already disabled the accounts tied to these activities, but risks still remain as attackers can use AI to generate bespoke exfiltration scripts, phishing lures, and system commands that could target crypto platforms and their users with far greater precision than before.
Past malware campaigns targeting crypto users
Since the inception of the crypto industry, attackers have used various creative attack vectors to exploit vulnerabilities in platforms, users, and infrastructure.
Last month, in a separate report, Google identified another malware strain dubbed EtherHiding that North Korea–linked attackers were pushing through blockchain smart contracts on Ethereum and BNB Smart Chain to covertly deliver malicious payloads.
Earlier this year, Kaspersky flagged another large-scale malware operation that abused the SourceForge software platform to distribute crypto-targeting malware disguised as fake Microsoft Office add-ons and managed to infiltrate over 4,600 devices, mostly in Russia.