Non-coders are building ransomware with AI

People who can't code are now selling working ransomware.


AI is being used to carry out sophisticated cyber-attacks. What's most alarming isn't that skilled hackers are becoming more efficient, but that people with minimal technical knowledge are suddenly capable of creating professional-grade malware.

When I first saw reports of cybercriminals misusing AI, I dismissed them as shallow news reporting. Then I read Anthropic's full report, and it gave me goosebumps. Here are three examples that struck me.

Advanced malware development

You know what ransomware is. You might even know that most cybercriminals simply purchase off-the-shelf ransomware kits. But did you know AI can be used to create those kits?

This is the story of a UK-based cybercriminal who used Claude to develop, market, and distribute ransomware. Not just any ransomware, either: it incorporated anti-EDR (endpoint detection and response) evasion techniques, modern ChaCha20 encryption, advanced Windows exploitation, and command-and-control (C&C) infrastructure.

They're selling it for US$400 to US$1,200, depending on whether the buyer wants a DLL-only version, native binaries for data encryption, or a complete kit with C&C tools.

The most disturbing aspect? This hacker apparently can't even implement complex technical components or troubleshoot without using AI. They're not a skilled programmer who's using AI to work faster. They're someone with minimal technical skills who's using AI to become dangerous.

Hacking at scale

Another cybercriminal used Claude Code to run data extortion at scale. By leveraging its code execution environment, the attacker potentially compromised as many as 17 organisations.

The operation included automated reconnaissance, vulnerability exploitation, lateral movement, and data exfiltration. What struck me was how this hacker used advanced "context engineering" strategies with prompts formatted in markdown for persistent context.

Context engineering is a technique one step beyond the prompt engineering I wrote about last week.

No-code malware development

One Russian-speaking developer even turned to AI to create no-code malware, using the Claude chat interface, API calls, and Claude Code. This hacker appears to be technically skilled and leveraged that know-how to build advanced evasion capabilities into the resulting malware.

The techniques observed included creating a Telegram bot for ransomware command and control, building tools that exfiltrate data via screenshots, disguising malware as legitimate applications, and using dynamic API calls to evade antimalware tools.

The barrier has collapsed

Various cybersecurity experts I've spoken to over the last 12 months have repeatedly warned that AI lowers the barrier to entry for hackers. Well, Anthropic's report shows this is no longer a theoretical threat.

The examples aren't about master hackers becoming more efficient. They're about people with limited technical skills who are suddenly capable of creating sophisticated attack tools. When someone who can't debug their own code can ship functional ransomware, we've entered a new era of cyber threats.

The report makes uncomfortable reading for anyone in cybersecurity. Traditional defences assume attackers need certain skills and resources. What happens when those assumptions no longer hold?

You can read the full report here (PDF).