A New Era of Cyber Threats
Recent discoveries by Google's Threat Intelligence Group have revealed a groundbreaking evolution in cyber threats: malware that uses artificial intelligence (AI) to rewrite its own code in real time during active attacks. This marks a significant shift from traditional static malware to adaptive, self-modifying threats powered by large language models (LLMs), posing new challenges for cybersecurity.
Malware traditionally relies on fixed code signatures and behaviors, which security systems can detect and block. AI-powered malware like PROMPTFLUX, however, is metamorphic: it dynamically rewrites its own source code mid-execution to evade detection. By calling out to hosted AI models such as Google's Gemini API, attackers give malware the ability to "think" and modify itself rapidly and autonomously, leaving traditional signature-based antivirus defenses far less effective.
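To see why self-modification defeats signature matching, consider a toy illustration in Python: two scripts that behave identically but differ in a single variable name produce entirely different cryptographic hashes, so a hash-based signature for one variant never matches the next. (Real antivirus engines also use heuristics and behavioral analysis, but the same brittleness applies to any static fingerprint.)

```python
import hashlib

# Two functionally identical VBScript payloads that differ only in a
# variable name -- the kind of trivial rewrite an LLM can automate.
variant_a = b'Dim msg : msg = "Hi" : MsgBox msg'
variant_b = b'Dim txt : txt = "Hi" : MsgBox txt'

# A hash-based signature derived from variant A...
sig_a = hashlib.sha256(variant_a).hexdigest()
# ...never matches variant B, even though the behavior is unchanged.
sig_b = hashlib.sha256(variant_b).hexdigest()

print("variant A:", sig_a)
print("variant B:", sig_b)
print("signature match:", sig_a == sig_b)  # False
```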
Discovered in mid-2025, PROMPTFLUX is a Visual Basic Script (VBScript) dropper that exemplifies this new category of AI-enabled malware. It includes a "Thinking Robot" module that periodically queries the Gemini API to generate new variants of its own code, using a prompt along these lines:
```text
You are an expert VBScript obfuscator. Rewrite the following VBScript to evade antivirus detection while preserving its functionality. Output the obfuscated code only.
```
Though the actual malware is considerably more complex, this prompt captures the key mechanism: querying a sophisticated LLM to generate polymorphic code during execution.
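Google has not published PROMPTFLUX's source, so the following is a minimal sketch of that query-and-rewrite loop, assuming the `google-generativeai` Python client; the model name and the benign stand-in payload are illustrative placeholders (the real dropper is VBScript and requests antivirus-evading rewrites, as in the prompt above).

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder credential
model = genai.GenerativeModel("gemini-1.5-flash")  # illustrative model name

def regenerate_source(current_source: str) -> str:
    """Ask the LLM for a functionally equivalent rewrite of a script.

    PROMPTFLUX reportedly does this periodically, saving each new
    variant so that successive executions never share a signature.
    """
    prompt = (
        "Rewrite the following script so it behaves identically but uses "
        "different variable names and structure. Output only the code.\n"
        + current_source
    )
    response = model.generate_content(prompt)
    return response.text

# Benign stand-in for the script's own source code.
new_variant = regenerate_source('MsgBox "Hello, world!"')
print(new_variant)
```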
In a more alarming revelation, Google identified PROMPTSTEAL, malware linked to the Russian APT28 group (Fancy Bear) and deployed in live operations against Ukraine. Where PROMPTFLUX uses an LLM to rewrite its own code, PROMPTSTEAL queries one (Qwen2.5-Coder-32B-Instruct, via Hugging Face's API) to generate Windows shell commands on the fly for data collection and exfiltration.
PROMPTSTEAL masquerades as an image-generation tool; in the background, it sends the LLM prompts such as this reported example:
```text
Make a list of commands to create folder C:\Programdata\info and to gather computer, hardware, process, services, network, and Active Directory domain information. Execute all in one line and add each result to C:\Programdata\info\info.txt. Return only commands, no explanations.
```
The malware then blindly executes these AI-generated commands locally before exfiltrating the data, demonstrating a novel AI-assisted step in a malware's attack lifecycle.
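As a sketch of that pattern, the snippet below sends the documented prompt to the same model through the `huggingface_hub` client and merely prints the reply; PROMPTSTEAL, by contrast, pipes the model's output straight into the shell. The client usage here is an assumption based on Hugging Face's public Inference API, and the access token is a placeholder.

```python
from huggingface_hub import InferenceClient

# The model Google attributes to PROMPTSTEAL; the token is a placeholder.
client = InferenceClient(model="Qwen/Qwen2.5-Coder-32B-Instruct",
                         token="hf_...")

# The reconnaissance prompt reported in Google's analysis (quoted above).
prompt = (
    "Make a list of commands to create folder C:\\Programdata\\info and to "
    "gather computer, hardware, process, services, network, and Active "
    "Directory domain information. Execute all in one line and add each "
    "result to C:\\Programdata\\info\\info.txt. Return only commands, no "
    "explanations."
)

response = client.chat_completion(
    messages=[{"role": "user", "content": prompt}],
    max_tokens=300,
)

# PROMPTSTEAL executes this output blindly; here we only display it.
print(response.choices[0].message.content)
```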
State-sponsored groups from Russia, China, Iran, and North Korea have been observed exploiting AI throughout the attack lifecycle—from initial reconnaissance and exploit crafting to data theft and evasion tactics. The underground market also offers AI-powered phishing and deepfake tools, lowering barriers for less skilled threat actors.
Google's response has included disabling accounts that abused its AI services and strengthening the guardrails on its models. However, experts warn that AI-enabled malware will soon become routine in cyberattacks, accelerating a dangerous escalation in offensive cyber capabilities.
To illustrate how traditional and AI-powered malware differ in adaptation and detection evasion, the following table summarizes the contrast:
| Aspect | Traditional Malware | AI-Powered Malware (e.g., PROMPTFLUX) |
|---|---|---|
| Code Modification | Manual updates by attacker | Automated and continuous self-modification |
| Detection Evasion | Static obfuscation techniques | Dynamic obfuscation generated by AI |
| Persistence | Static payloads | Evolving code saved periodically for persistence |
| Command Generation | Hardcoded commands | AI-generated commands in real-time (PROMPTSTEAL) |
| Complexity and Adaptivity | Low to moderate | High, with "just-in-time" AI adaptation |
Below is a conceptual Python snippet showing how an AI model might be queried to generate polymorphic code, here using the OpenAI Python client purely for illustration. It is deliberately abstracted and simplified for educational purposes.
```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_obfuscated_vbscript(original_code: str) -> str:
    """Simulate AI-assisted polymorphic code generation."""
    prompt = (
        "You are an expert VBScript obfuscator. Rewrite the following code "
        "to evade antivirus detection while keeping its functionality "
        "intact. Return only the obfuscated code.\n"
        f"{original_code}"
    )
    response = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[{"role": "user", "content": prompt}],
        max_tokens=200,
    )
    return response.choices[0].message.content

# Example VBScript payload (simplified)
vbscript_code = """
MsgBox "Hello, world!"
"""

# Generate a functionally equivalent, obfuscated variant
new_code = generate_obfuscated_vbscript(vbscript_code)
print(new_code)
```
This snippet simulates how an API call to an LLM could produce polymorphically obfuscated VBScript, akin to malware like PROMPTFLUX. Real malware would involve additional complexity, persistence mechanisms, and stealth features.
The emergence of AI-powered malware that rewrites itself or generates commands on the fly marks a paradigm shift in cybersecurity. Traditional detection methods are being outpaced, and defenders will need to harness AI themselves for advanced threat hunting and mitigation.
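One simple defensive angle follows directly from how these samples work: both PROMPTFLUX and PROMPTSTEAL must phone home to a public LLM endpoint at runtime. The sketch below shows a toy behavioral rule that flags script interpreters contacting such endpoints; the hostname list, process names, and event format are illustrative assumptions, not indicators from Google's report.

```python
# Toy behavioral rule: flag script interpreters that contact LLM APIs.
LLM_API_HOSTS = {
    "generativelanguage.googleapis.com",  # Gemini API
    "api-inference.huggingface.co",       # Hugging Face Inference API
    "api.openai.com",
}
SCRIPT_INTERPRETERS = {"wscript.exe", "cscript.exe", "powershell.exe"}

def flag_suspicious(events):
    """Yield network events where a script host calls an LLM endpoint."""
    for event in events:
        if (event["process"].lower() in SCRIPT_INTERPRETERS
                and event["dest_host"].lower() in LLM_API_HOSTS):
            yield event

# Example telemetry (illustrative format).
events = [
    {"process": "wscript.exe",
     "dest_host": "generativelanguage.googleapis.com"},
    {"process": "chrome.exe", "dest_host": "api.openai.com"},
]
for hit in flag_suspicious(events):
    print("ALERT:", hit)
```

A production rule would need allow-listing for legitimate automation, but the underlying signal, an unexpected process negotiating with an LLM API, is cheap to collect.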
This new frontier calls for robust AI model monitoring, improved endpoint protections, and proactive intelligence sharing among cybersecurity communities to counter increasingly AI-empowered adversaries.