Podcast Episode

AI-Powered Ransomware PromptLock Signals New Era of Autonomous Cyber Threats

January 15, 2026


The cybersecurity industry has entered what researchers call the post-malware era, marked by the emergence of AI-native ransomware capable of rewriting its own code in real time. Security experts warn that threats like PromptLock have rendered traditional detection methods obsolete, forcing major vendors to abandon human-assisted copilot models in favour of fully autonomous security agents.

What Is PromptLock?

PromptLock represents the first widely documented ransomware that leverages large language models to generate entirely new malicious scripts for every execution. First identified by ESET researchers in August 2025, the malware runs a locally hosted AI language model through the Ollama API, enabling it to assess target systems, identify installed security software, and craft bespoke payloads designed to evade the defences it detects.
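For context, Ollama exposes a simple HTTP endpoint on the local machine that any programme running there can call. A minimal sketch of what such a generation request looks like; the model name and prompt below are illustrative stand-ins, not taken from PromptLock itself:

```python
import json

# Ollama's default local endpoint for text generation
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str) -> bytes:
    """Build the JSON body that Ollama's /api/generate endpoint expects."""
    return json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # return one complete response rather than a token stream
    }).encode()

# Example: asking a locally hosted model to emit a benign Lua snippet
body = build_generate_request(
    "gpt-oss:20b",
    "Write a Lua function that returns the current date as a string.",
)
```

Because the model is hosted on the target machine itself, these requests never leave the host, which is one reason monitoring of outbound network traffic does not catch this pattern.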

The malware was later revealed to be a proof of concept created by researchers at New York University, who refer to the project as Ransomware 3.0 in their academic paper. Although the discovered samples were never observed in real-world attacks, the techniques demonstrate a fundamental shift in how malicious software can operate.

How PromptLock Operates

Unlike traditional polymorphic malware, which uses pre-defined algorithms to change its appearance, PromptLock demonstrates what researchers term situational awareness. The malware acts more like a human penetration tester than a static program. It scouts target systems to determine the operating system, installed endpoint detection and response agents, and valuable data before prompting its internal language model to write customised attack code.

Because the code is generated on demand and never reused, there is no fixed signature for traditional security software to match. The malware runs OpenAI's open-weight gpt-oss-20b model locally via the Ollama API to generate malicious Lua scripts on the fly. These scripts enumerate the local filesystem, inspect target files, exfiltrate selected data, and perform encryption using the 128-bit SPECK algorithm. The ransomware itself is written in Go, and the Lua scripts it generates run across Windows, Linux, and macOS.
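SPECK is a lightweight add-rotate-xor (ARX) cipher, attractive in this context because it is fast and takes only a few lines to implement. A minimal sketch of the 128-bit-block variant, assuming the Speck128/128 parameters (64-bit words, 128-bit key, 32 rounds), since the article does not specify which key size PromptLock uses:

```python
MASK64 = (1 << 64) - 1  # Speck128 operates on 64-bit words

def ror(x, r):  # rotate a 64-bit word right by r bits
    return ((x >> r) | (x << (64 - r))) & MASK64

def rol(x, r):  # rotate a 64-bit word left by r bits
    return ((x << r) | (x >> (64 - r))) & MASK64

def key_schedule(k0, k1, rounds=32):
    """Expand a 128-bit key (two 64-bit words) into one subkey per round."""
    keys, l = [k0], k1
    for i in range(rounds - 1):
        l = ((keys[i] + ror(l, 8)) & MASK64) ^ i
        keys.append(rol(keys[i], 3) ^ l)
    return keys

def encrypt_block(x, y, keys):
    """Encrypt one 128-bit block held as two 64-bit words (x, y)."""
    for k in keys:
        x = ((ror(x, 8) + y) & MASK64) ^ k
        y = rol(y, 3) ^ x
    return x, y

def decrypt_block(x, y, keys):
    """Invert encrypt_block by undoing each round in reverse order."""
    for k in reversed(keys):
        y = ror(y ^ x, 3)
        x = rol(((x ^ k) - y) & MASK64, 8)
    return x, y

# Round-trip check: decrypting the ciphertext recovers the plaintext words
keys = key_schedule(0x0F0E0D0C0B0A0908, 0x0706050403020100)
ct = encrypt_block(0x6C61766975716520, 0x7469206564616D20, keys)
assert decrypt_block(*ct, keys) == (0x6C61766975716520, 0x7469206564616D20)
```

The implication for defenders is that the encryption routine itself is tiny and generic; the distinctive part of an attack like this lies in the LLM-generated orchestration code, which changes on every run.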

Industry Response

The emergence of AI-native malware has accelerated deployments of autonomous security agents by major vendors. Microsoft evolved its Security Copilot into a fully autonomous agent platform, with customers deploying specialised agents that can identify and neutralise malicious emails reportedly 6.5 times faster than human analysts. These agents run embedded in Defender XDR, Sentinel, Entra, and Purview.

CrowdStrike unveiled its Agentic Security Workforce at Fal.Con 2025 in September, introducing mission-ready agents powered by Charlotte AI that can autonomously isolate compromised hosts and patch vulnerabilities without waiting for human approval. Michael Sentonas, president of CrowdStrike, stated that the vision is for every security analyst to command an agentic security workforce that eliminates time-consuming and repetitive tasks better suited for machines.

SentinelOne launched Purple AI Athena at RSA Conference 2025, featuring in-line agentic auto-investigations that can conduct end-to-end impact analysis of threats and suggest remediation steps before human analysts receive initial alerts. CEO Tomer Weingarten described it as the industry's first true end-to-end agentic AI cybersecurity platform.

The Capabilities Gap

Security experts warn of a growing divide between organisations that can afford enterprise-grade agentic AI and those that cannot. PromptLock-style threats can be deployed by low-skill attackers using malware-as-a-service platforms, whilst advanced defences remain costly. This asymmetry has drawn attention from global regulators and the Cybersecurity and Infrastructure Security Agency, as smaller businesses may find themselves increasingly defenceless against autonomous threats.

VIPRE Security Group, which operates under OpenText, released a report on 14 January 2026 warning that AI-native tools are dramatically lowering entry barriers for inexperienced cybercriminals, increasing danger for small and medium enterprises that lack resources for advanced defences.

Related Developments

PromptLock is not an isolated case. Google's Threat Intelligence Group documented PROMPTFLUX, experimental malware that uses Google's Gemini language model to rewrite its entire source code every hour to evade detection. The malware queries Gemini's API to request specific VBScript obfuscation and evasion techniques for just-in-time self-modification.

In June 2025, Russia-linked APT28 used PROMPTSTEAL against Ukrainian targets, the first time Google observed malware querying a language model during live operations. The malware queries an LLM via the Hugging Face API to generate commands for execution.

Expert Perspectives

Anton Cherepanov, senior malware researcher at ESET, stated when the malware was first discovered that the emergence of tools like PromptLock highlights a significant shift in the cyber threat landscape. A well-configured AI model is now enough to create complex, self-adapting malware. If properly implemented, such threats could severely complicate detection and make the work of cybersecurity defenders considerably more challenging.

According to Gartner estimates, 40 percent of all enterprise applications will integrate with task-specific AI agents by the end of 2026, up from less than 5 percent in 2025. This massive wave of AI agent deployment provides the force multiplier security teams have desperately needed, but also introduces new risks.

Looking Forward

The cybersecurity community anticipates that 2026 will be a pivotal year for autonomous AI agents in cybersecurity, requiring organisations to balance innovation with robust security governance and controls. The consensus is that the hybrid human-agent security operations centre will become the foundation for modern cybersecurity, where autonomous agents tackle scale problems whilst humans provide alignment, oversight, and guardrails.

The emergence of AI-native threats represents a fundamental shift in the cyber threat landscape, moving from static malware with recognisable signatures to adaptive, situationally aware threats that can generate unique attack code for each operation. Traditional security approaches designed for predictable software systems cannot adequately protect against these dynamic threats, necessitating an evolution toward autonomous defensive systems capable of operating at machine speed.

