Legacy paradigms and AI-related cybercrime

Fran Gomez (@ffranz), Head of Threat Assessment Research
January 2026.

Legacy paradigms are simply not enough when it comes to AI-related cybercrimes.

Google’s 2025 AI Threat Tracker highlights growing danger from adaptive malware like PROMPTFLUX, which queries LLMs at runtime to evade detection. This lets state actors and cybercriminals scale polymorphic attacks across more target surfaces. Traditional defenses cannot keep up with these evolving threats, leaving organizations exposed to reconnaissance, phishing, and data exfiltration through shadow IT, forgotten cloud assets, and supply chain vulnerabilities, with breaches potentially costing millions.

  • AI is transforming cybercrime into a new era of adaptive, real-time threats.
  • Comprehensive, AI-native defenses are essential to keep pace.
  • Strategic AI integration can turn defense into a force multiplier.



A paradigm shift

Google’s 2025 AI Threat Tracker report shines a light on a significant shift in cybersecurity: adversaries are now harnessing AI not just for support tasks but to craft adaptive malware that changes during an attack. This move from static threats to smart, real-time attacks means organizations need to rethink their defense strategies.

This new publication from the Google Threat Intelligence Group, released in November 2025, suggests that something fundamental is about to change. Threat actors aren’t just experimenting with AI anymore; they’re weaponizing it in wild new ways.

We’re talking malware families like PROMPTFLUX, PROMPTSTEAL, FRUITSHELL, and PROMPTLOCK that query Large Language Models like Gemini or Qwen2.5-Coder during runtime to whip up malicious scripts, obfuscate code on the fly, and even generate functions just-in-time.

No more static payloads that AV signatures can sniff out—these attacks dynamically rewrite themselves mid-execution, evading detection by mimicking legitimate behavior and exploiting prompt injections to bypass AI safety guardrails.

State-sponsored groups from North Korea, Iran, Russia, and China are leading this shift, embedding AI across the attack lifecycle—from AI-driven reconnaissance and hyper-personalized phishing lures to custom command-and-control scripts and stealthy data exfiltration that blends into regular network activity. Meanwhile, underground cybercrime markets are booming with AI-powered kits for vulnerability research and social engineering, lowering the skills needed for sophisticated attacks and enabling scale like never before.

This isn’t just an upgrade — it’s a fundamental paradigm shift, with 2025 marking the start of autonomous, adaptive threats that can pivot and evolve in real time. Traditional defenses relying on static indicators or simple behavioral rules struggle to keep up with AI that can mimic enterprise workflows and secretly manipulate models to execute malicious commands.

Let’s imagine an APT deploying malware that’s inert until it calls out to an LLM API to custom-generate exploits tailored to the victim’s environment. The “just-in-time” AI approach means these threats are polymorphic and self-healing, making detection and response exponentially harder. Google’s report highlights this as a new operational phase of AI abuse, turning cyber threats into dynamic, goal-driven entities far beyond yesterday’s malware.
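On the defensive side, one practical implication of this "call out to an LLM API" pattern is that the malware still has to reach a model endpoint over the network. A minimal detection sketch, assuming an illustrative proxy-log format and a hand-picked domain list (neither taken from Google's report), could flag hosts that contact LLM APIs when they have no business doing so:

```python
# Hedged sketch: flag outbound requests to LLM API endpoints in proxy logs.
# The domain list, allow-list, and log format are illustrative assumptions.

LLM_API_DOMAINS = {
    "generativelanguage.googleapis.com",  # Gemini API
    "api.openai.com",
    "dashscope.aliyuncs.com",             # a Qwen-family hosted API
}

ALLOWED_HOSTS = {"ml-research-01"}  # hosts expected to call LLM APIs

def flag_llm_callouts(log_lines):
    """Return (host, domain) pairs where an unexpected host contacted an LLM API."""
    alerts = []
    for line in log_lines:
        # assumed log format: "<timestamp> <source-host> <destination-domain>"
        parts = line.split()
        if len(parts) < 3:
            continue
        host, dest = parts[1], parts[2]
        if dest in LLM_API_DOMAINS and host not in ALLOWED_HOSTS:
            alerts.append((host, dest))
    return alerts

logs = [
    "2025-11-05T10:01Z ml-research-01 api.openai.com",
    "2025-11-05T10:02Z hr-laptop-17 generativelanguage.googleapis.com",
    "2025-11-05T10:03Z hr-laptop-17 intranet.example.com",
]
print(flag_llm_callouts(logs))  # the hr-laptop-17 callout is the anomaly
```

A real deployment would work from DNS or proxy telemetry and a maintained feed of model-API endpoints, but the design point stands: just-in-time AI malware trades payload stealth for a network dependency defenders can watch.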


EASM platforms

So, what does this mean for mitigation? Legacy tools designed for static malware analysis and signature detection are increasingly ineffective in today's fast-changing landscape. Organizations must pivot to an AI-native defense strategy that matches the attackers’ speed and adaptability. This includes continuous External Attack Surface Management (EASM) to discover and monitor all internet-facing assets: shadow IT, forgotten cloud instances, exposed APIs, and third-party endpoints that AI-driven malware could probe for runtime exploits. Combine this with cybersecurity ratings and an attacker-perspective scoring of your external risk posture.

EASM platforms offer detailed footprint mapping (crawling domains, IPs, certificates, and subdomains to simulate AI-driven reconnaissance) and incorporate ratings data for risk-based prioritization, emphasizing exposures with the highest exploitability linked to business criticality. Additionally, supply chain monitoring through EASM enhances visibility into vendor surfaces that could transmit AI threats inward, with ratings dashboards tracking score improvements after fixes to justify budgets and demonstrate resilience to boards, insurers, and partners.

Organizations must adopt an AI-aware, proactive security posture and conduct supply chain audits, since AI software and APIs are now primary access points. And because attackers increasingly rely on black-market LLMs, collaboration to share threat indicators is crucial to countering AI-driven threats.
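In its simplest form, acting on shared indicators means matching locally observed artifacts against a community feed. A bare-bones sketch, with a feed format and example values that are purely illustrative assumptions:

```python
# Hedged sketch: match locally observed artifacts (domains, file hashes)
# against a shared indicator feed. Feed format and values are illustrative.

shared_indicators = {
    "d41d8cd98f00b204e9800998ecf8427e",  # example file hash from a partner feed
    "malicious-c2.example.net",           # example C2 domain from a partner feed
}

observed = [
    "update-service.example.com",
    "malicious-c2.example.net",
    "5f4dcc3b5aa765d61d8327deb882cf99",
]

hits = [artifact for artifact in observed if artifact in shared_indicators]
print(hits)  # any match warrants triage
```

Production sharing would use a structured standard such as STIX/TAXII rather than flat sets, but the value proposition is the same: one organization's detection becomes every partner's early warning.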

For CISOs and CIOs, this is a call to action to elevate security architectures beyond legacy paradigms.



For more info on EASM by LEET, contact us.






All you need is LEET!

Get our newsletter using this link