
AI in Ransomware Attacks: How Big is the Risk?

Recent reports claim to showcase AI-powered ransomware. But how big is the risk to businesses?

Beyond ransomware-as-a-service, AI is making ransomware more accessible to cybercriminals with limited technical abilities. Anthropic reported that an adversary used the Claude chatbot for reconnaissance, code generation and credential theft against 17 organisations, including healthcare providers, government agencies and a defence contractor.

Meanwhile, cybersecurity firm ESET demonstrated a proof of concept showing what researchers call the first AI-powered ransomware.

Ransomware is bad enough on its own, with the malware now present in 44% of all breaches, according to Verizon’s 2025 Data Breach Investigations Report. So, how will AI technology affect the ransomware market? And are researchers’ predictions an indicator of what’s to come?

AI-Powered Ransomware

The technology is supercharging attacker capabilities, but not all cybersecurity experts are convinced of the risk from AI-powered ransomware. Rik Ferguson, VP of security intelligence at Forescout and a cybersecurity industry veteran, says the “loud AI-powered ransomware” headlines are “mostly hyperbole”.

Recent reports highlighting instances of AI-powered ransomware involved malware samples embedded with AI prompts capable of executing script commands for file discovery and encryption. However, they were not examples of real-world attacks, Robert McArdle, director of forward threat research at Trend Micro, points out.

Instead, they originated from academic research and were uploaded to public repositories such as VirusTotal, where they were identified, he says.

And in fact, rather than offering an easy means to attack, the idea of AI-integrated ransomware actually presents several drawbacks for criminals using it, says McArdle. “Such malware is typically easier to detect and often relies on connections to cloud-based AI services, introducing additional risks. Threat actors must then evade detection from traditional security vendors, as well as from the security teams of large language model providers.”

The real criminal operational improvements today are in the back-end, says Ferguson. “This is where AI sharpens target selection, personalises phishing at scale and dynamically tunes campaigns to raise conversion rates without raising flags.”

Taking this into account, Anthropic’s latest misuse report is the better compass of how adversaries are using AI in ransomware attacks, says Ferguson. “One adversary used Claude to automate reconnaissance, assist credential theft, triage targets and draft ransom notes, leaning on leak-based extortion rather than encryption. That’s the operational reality to plan for.”

The Real Risk of AI

As the area develops, experts agree that AI will play a role in a number of areas of the ransomware market. One is in the post-exfiltration phase of the ransomware kill chain, specifically in the monetisation of stolen data, says McArdle.

He describes a development in late August on the RAMP4U forum, where the Dragonforce ransomware group announced a new data analysis service. “This leverages AI to process stolen data and produce tailored outputs designed to increase pressure on victims.”

As large language models become more capable and accessible, attackers will move beyond using AI as a tool and start embedding it directly into their operations, Dan Jones, senior security advisor at Tanium tells SC Media UK.

This can range from adaptive reconnaissance to malware that learns from its environment, Jones says. He thinks ransomware strains will evolve to “negotiate, pivot and persist without the hacker needing to intervene”.

Looking ahead, AI-powered ransomware is likely to become more “autonomous, adaptive and stealthy”, agrees Steve Sandford, partner, digital forensics and incident response at Cyxcel. “Future variants may use reinforcement learning to optimise attack paths or integrate deepfake technology to impersonate executives and manipulate victims.”

Magic Malware

The addition of AI to ransomware operations means firms need to be on their guard. But the threat is not that large – at least yet.

Over the next year, expect AI to keep supercharging operations, not to create magic malware, says Ferguson. “Think dynamic victim profiling, adaptive phishing, faster privilege escalation, automated leak-site curation and tailored pressure campaigns.”

On the code side, he suggests more local models could be used to dodge provider guardrails. “But the measurable improvement for criminals will remain in scale and speed rather than novel exploits or attack chains.”

While generative AI is driving industry discourse, the most transformative impact is likely to come from agentic AI, says McArdle. “This does not rely on a single, all-powerful system, but on a network of specialised agents, each designed to perform a specific task. These agents are orchestrated by a central AI coordinator or digital assistant, which manages workflows, retains memory of past actions, and continuously learns to optimise performance.”

He thinks the adoption of agentic AI has the potential to move beyond the model of “cybercrime-as-a-service” to what he labels “cybercrime-as-a-servant”. This would enable criminals to “delegate complex operations to AI systems with minimal oversight”, he says.

For now, there are some simple steps firms can take to mitigate the threat. The right response isn’t to “panic” or “chase the next shiny security tool”, says Adam Seamons, head of information security at GRC International Group. “It’s about doing the basics properly, using strong identity controls, tested backups, behaviour-based detection, and people who trust their instincts when something seems wrong. In short, assume the attackers are using automation, and make sure you’re keeping up.”

And while the emergence of AI-powered ransomware might introduce new dimensions to cyber threats, it remains, at its core, a form of ransomware, McArdle points out. Therefore, he says, “established best practices for defending against attacks continue to be applicable”.

Kate O'Flaherty, cybersecurity and privacy journalist