A new report from CrowdStrike, the "2025 State of Ransomware Survey," reveals a significant confidence crisis among security leaders. A commanding 76% of organizations surveyed believe they are losing the race against adversaries who are leveraging Artificial Intelligence (AI) to power their attacks. The survey indicates that threat actors are using AI to accelerate the entire ransomware lifecycle, collapsing the response window for defenders. A strong majority (85%) of respondents agree that legacy security tools are becoming obsolete against these advanced threats, and 48% now consider AI-automated attacks to be their top ransomware concern. The report also highlights the ineffectiveness of paying ransoms, as 83% of organizations that paid were attacked again.
The survey's findings point to a paradigm shift in the ransomware landscape, driven by the weaponization of AI. Adversaries are no longer just using AI for isolated tasks; according to the report, they are integrating it across the entire attack chain, from reconnaissance through phishing to malware obfuscation.
The consequence is that nearly half of organizations fear they cannot detect or respond quickly enough to an attack. This fear is substantiated by the finding that fewer than 25% of victims recover within 24 hours, and a similar number suffer major business disruption or data loss.
The weaponization of AI impacts several phases of the MITRE ATT&CK framework:
- Reconnaissance: automated harvesting of open technical databases to profile targets (T1596 - Search Open Technical Databases).
- Initial Access: AI-generated lures that make phishing (T1566 - Phishing) more effective.
- Defense Evasion: polymorphic, obfuscated malware (T1027 - Obfuscated Files or Information) that changes with each infection, making signature-based detection useless.

The primary impact highlighted by the report is the erosion of defensive capabilities. Legacy, signature-based tools are proving ineffective, and security teams are struggling to keep up with the speed of automated attacks. This leads to longer dwell times, more successful ransomware deployments, and greater business disruption. The survey's finding that paying a ransom is not a solution (with 83% of payers being re-targeted and 93% having their data stolen anyway) reinforces the need for a focus on prevention and resilience rather than reaction. The rise of AI-powered attacks will force organizations to invest in next-generation, AI-driven security platforms to fight fire with fire.
Detecting AI-powered attacks requires a shift from static IOCs to behavioral indicators:
| Type | Value | Description |
|---|---|---|
| network_traffic_pattern | Unusually fast lateral movement | Monitor for an attacker moving between systems at a speed that is too fast for a human operator. |
| command_line_pattern | Rapid, sequential execution of discovery commands | An automated script may run `ipconfig`, `netstat`, `whoami`, etc., across multiple hosts in seconds. |
| log_source | Authentication logs | Monitor for login attempts that test multiple credentials across many systems in a pattern that suggests automated logic rather than human brute-forcing. |
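The command_line_pattern indicator above lends itself to a simple burst heuristic. The sketch below is a minimal, hypothetical example (the event fields, five-second window, and four-command threshold are illustrative assumptions, not values from the report): it flags any host where several distinct discovery commands execute faster than a human operator could type them.

```python
# Hypothetical sketch: flag hosts where discovery commands (ipconfig, netstat,
# whoami, ...) execute in rapid succession -- faster than a human could type.
# Event shape, window, and thresholds are illustrative assumptions.
from collections import defaultdict
from datetime import timedelta

DISCOVERY_CMDS = {"ipconfig", "netstat", "whoami", "systeminfo", "net", "nltest"}
BURST_WINDOW = timedelta(seconds=5)   # assumed: too fast for manual typing
BURST_COUNT = 4                       # assumed: distinct commands per window

def find_automated_discovery(events):
    """events: iterable of (host, command, timestamp), time-ordered per host."""
    per_host = defaultdict(list)
    for host, cmd, ts in events:
        # Keep only discovery-command executions, grouped by host.
        if cmd.split()[0].lower() in DISCOVERY_CMDS:
            per_host[host].append((ts, cmd))
    alerts = []
    for host, hits in per_host.items():
        for i in range(len(hits)):
            # Distinct discovery commands seen within the burst window.
            window = [c for t, c in hits[i:] if t - hits[i][0] <= BURST_WINDOW]
            if len(set(window)) >= BURST_COUNT:
                alerts.append(host)
                break
    return alerts
```

In practice this logic would run inside an EDR or SIEM correlation rule rather than a standalone script, but the threshold idea is the same: speed itself is the indicator.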
- D3-UBA: User Behavior Analysis and process monitoring to establish a baseline of normal activity and detect subtle anomalies that indicate a breach.
- D3-FR: File Restoration.

Ransomware breakout time shrinks to 18 minutes due to AI/automation, challenging human-speed response.
Use AI/ML-based EDR/XDR platforms to detect and block malicious behaviors in real-time, regardless of the specific malware signature.
Continuously train users on how to spot sophisticated, AI-generated phishing emails and deepfakes.
Maintain immutable backups and a tested recovery plan to ensure resilience against successful ransomware attacks.
Implement phishing-resistant MFA to protect identities, which are a primary target of AI-enhanced social engineering.
To counter AI-powered attacks that mimic legitimate activity, organizations must fight fire with fire. Implement a security solution that leverages machine learning to perform User Behavior Analysis. This involves baselining the normal activity of every user and entity (e.g., typical login times, resources accessed, data transfer volumes). The system can then detect subtle anomalies that indicate a compromise, such as a user account suddenly accessing a new server for the first time or performing actions at an unusual speed. This behavioral approach is crucial for detecting AI-driven attacks that bypass traditional signature-based tools.
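The baselining idea can be made concrete with a small sketch. This is an illustrative toy, not a product implementation: the field names, the five-event minimum, and the 3-sigma cutoff are all assumptions chosen for the example.

```python
# Minimal User Behavior Analysis sketch (assumed field names): baseline each
# user's history, then flag (a) first-time access to a server and (b) transfer
# volumes far above that user's own norm. Thresholds are illustrative.
from collections import defaultdict
import statistics

class UserBaseline:
    def __init__(self):
        self.servers = defaultdict(set)    # user -> servers seen before
        self.volumes = defaultdict(list)   # user -> past transfer sizes (MB)

    def learn(self, user, server, mb):
        """Record a known-good event into the user's baseline."""
        self.servers[user].add(server)
        self.volumes[user].append(mb)

    def score(self, user, server, mb):
        """Return list of anomaly reasons for a new event (empty = normal)."""
        reasons = []
        if server not in self.servers[user]:
            reasons.append("first-time-server-access")
        past = self.volumes[user]
        if len(past) >= 5:                 # assumed minimum history
            mean, stdev = statistics.mean(past), statistics.pstdev(past)
            if stdev and (mb - mean) / stdev > 3:   # assumed 3-sigma cutoff
                reasons.append("abnormal-transfer-volume")
        return reasons
```

A real UEBA platform models many more features (login times, peer groups, process trees), but the core design choice is the same: each user is compared against their own history, not a global signature.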
As AI makes phishing more effective, identity becomes the primary battleground. Deploy an Identity Threat Detection and Response (ITDR) solution to provide real-time visibility into authentication traffic and identity systems like Active Directory. This allows for the detection of attacks like credential spraying, Kerberoasting, and lateral movement via compromised accounts at machine speed. An ITDR tool can automatically respond to a threat, for example, by forcing re-authentication or isolating an account that is exhibiting behavior consistent with an AI-driven attack, thus containing the threat before it can escalate.
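Credential spraying, one of the identity attacks named above, has a distinctive shape: a few passwords tried against many accounts. The hedged sketch below (field names, the ten-minute window, and the 20-account threshold are assumptions for illustration) flags any source that touches an unusually large number of distinct usernames in a short window.

```python
# Hypothetical ITDR-style detector: credential spraying tries a few passwords
# against MANY accounts, so flag sources that fail logins against many
# distinct usernames in a short window. Fields/thresholds are illustrative.
from collections import defaultdict, deque
from datetime import timedelta

SPRAY_WINDOW = timedelta(minutes=10)   # assumed detection window
SPRAY_USERS = 20                       # assumed distinct-account threshold

def detect_spray(auth_failures):
    """auth_failures: time-ordered (timestamp, source_ip, username) tuples."""
    recent = defaultdict(deque)        # source_ip -> recent (ts, user) events
    flagged = set()
    for ts, src, user in auth_failures:
        q = recent[src]
        q.append((ts, user))
        # Drop events that have aged out of the sliding window.
        while q and ts - q[0][0] > SPRAY_WINDOW:
            q.popleft()
        if len({u for _, u in q}) >= SPRAY_USERS:
            flagged.add(src)
    return flagged
```

An ITDR product would tie this detection to an automated response (forced re-authentication, account isolation); the sketch covers only the detection half.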
Deploy deception technology to create a hostile environment for automated, AI-driven attackers. Seed the network with decoy systems, credentials, and data that are indistinguishable from real assets. An AI-powered attacker, focused on speed and automation, is likely to interact with these decoys during its reconnaissance and lateral movement phases. Any interaction with a decoy asset is a high-fidelity indicator of a breach. This can derail the automated attack chain and provide security teams with early warning and valuable threat intelligence about the attacker's TTPs, all while the real assets remain safe.
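Because any touch of a decoy is by definition malicious, the detection logic for deception is trivially simple, which is part of its appeal. A minimal sketch, with entirely made-up decoy names standing in for whatever assets a deception platform would plant:

```python
# Deception sketch: seed decoy credentials/hosts, and treat ANY use of them
# as a high-fidelity breach indicator. All decoy names below are illustrative.
DECOY_ACCOUNTS = {"svc-backup-admin", "sql-report-ro"}   # assumed planted creds
DECOY_HOSTS = {"fin-db-02.corp.local"}                   # assumed decoy server

def check_event(event):
    """event: dict with 'username' and 'target_host'. Returns an alert or None."""
    if event.get("username") in DECOY_ACCOUNTS:
        return {"severity": "critical", "reason": "decoy-credential-used", **event}
    if event.get("target_host") in DECOY_HOSTS:
        return {"severity": "critical", "reason": "decoy-host-touched", **event}
    return None
```

There is no baselining or tuning here: a single match is enough to alert, which is exactly the property that makes decoys effective against fast, automated attackers.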

Cybersecurity professional with over 10 years of specialized experience in security operations, threat intelligence, incident response, and security automation. Expertise spans SOAR/XSOAR orchestration, threat intelligence platforms, SIEM/UEBA analytics, and building cyber fusion centers. Background includes technical enablement, solution architecture for enterprise and government clients, and implementing security automation workflows across IR, TIP, and SOC use cases.