ISACA's landmark "2026 ISACA Tech Trends and Priorities" report, published on October 20, 2025, indicates a significant shift in the perceived threat landscape. For the first time, a survey of 3,000 IT and cybersecurity professionals has identified AI-driven social engineering as the single greatest cyber threat anticipated for 2026: 63% of respondents cited it, placing it ahead of perennial concerns like ransomware (54%) and supply chain attacks (35%). The report also uncovered a concerning 'preparedness gap': despite the recognized risk, a mere 13% of professionals feel their organizations are 'very prepared' to manage generative AI threats. This data points to an urgent need for organizations to evolve their security awareness programs, technical controls, and incident response plans to counter the next generation of highly personalized and convincing social engineering attacks.
AI-driven social engineering represents a quantum leap in the sophistication of phishing and other manipulation-based attacks (mapped ATT&CK technique: T1566 - Phishing). Instead of generic, poorly worded emails, threat actors can now leverage generative AI to craft fluent, highly personalized lures tailored to each target's role and relationships, and to impersonate trusted figures with deepfake audio and video.

The ISACA report provides key data points on this emerging challenge:

- 63% of the 3,000 surveyed professionals rank AI-driven social engineering as the top anticipated cyber threat for 2026.
- Ransomware (54%) and supply chain attacks (35%) now trail it.
- Only 13% feel their organizations are 'very prepared' to manage generative AI threats.
The widespread adoption of AI by threat actors will likely lead to:

- Phishing lures with flawless grammar and convincing, personalized context, eroding the traditional 'spot the typo' detection cues.
- Deepfake voice and video impersonation of executives in fraud schemes such as urgent wire-transfer requests.
- A higher volume of targeted attacks, as generative AI lowers the cost of personalization at scale.
Defending against AI-powered threats requires a shift in strategy:

- Evolve security awareness training to focus on verifying unusual requests and identifying the hallmarks of AI-driven attacks, rather than just spotting typos. (Mapped mitigation: M1017 - User Training.)
- Establish and enforce strict business processes for sensitive transactions that require out-of-band verification, making social engineering less effective. (Mapped mitigation: M1054 - Software Configuration.)
- Utilize modern email security solutions that employ AI/ML to detect anomalies in communication patterns, sender reputation, and intent. (Mapped D3FEND technique: D3-UBA: User Behavior Analysis.)

In related news, the UAE has issued a public warning on AI deepfakes, highlighting legal risks and the need for public awareness.
To counter AI-driven social engineering, organizations must shift from content analysis to behavior analysis. Implementing a User Behavior Analysis (UBA) solution is key. Such a system would baseline normal communication patterns and behaviors for each employee. For example, it would learn that the CEO never emails the finance department to request an urgent wire transfer to a new, unknown vendor. When an AI-generated deepfake email or message makes such a request, the UBA system would flag it as a high-risk anomaly based on the deviation from established behavior, even if the language is perfect. This provides a critical detection layer that is resilient to the increasing sophistication of AI lures.
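The baselining idea can be sketched in a few lines. This is a toy illustration only, not any vendor's UBA implementation: the class, field names, and the frequency-based scoring rule are all hypothetical, and a production system would model far richer features (time of day, device, writing style, peer group).

```python
from collections import defaultdict

class CommunicationBaseline:
    """Toy user-behavior baseline: learns which (sender, request_type,
    target_dept) combinations are routine, then scores deviations."""

    def __init__(self):
        self.seen = defaultdict(int)   # (sender, request_type, dept) -> count
        self.total = defaultdict(int)  # sender -> total observed messages

    def observe(self, sender, request_type, dept):
        self.seen[(sender, request_type, dept)] += 1
        self.total[sender] += 1

    def risk_score(self, sender, request_type, dept):
        """0.0 = routine for this sender, 1.0 = never seen before."""
        if self.total[sender] == 0:
            return 1.0  # unknown sender: maximum anomaly
        freq = self.seen[(sender, request_type, dept)] / self.total[sender]
        return 1.0 - min(freq * 10, 1.0)  # rare combinations score high

baseline = CommunicationBaseline()
for _ in range(50):
    baseline.observe("ceo@example.com", "status_update", "engineering")

# Routine request: scores 0.0.
print(baseline.risk_score("ceo@example.com", "status_update", "engineering"))
# The scenario from the text: the "CEO" suddenly requesting a wire
# transfer from finance, never seen in the baseline, scores 1.0 -- even
# if the message's language is perfect.
print(baseline.risk_score("ceo@example.com", "wire_transfer", "finance"))
```

The point the sketch makes is that the score depends only on the deviation from observed behavior, not on the content of the lure, which is exactly what makes this layer resilient to fluent AI-generated text.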
Application Configuration Hardening, in this context, refers to hardening business processes against manipulation. Organizations must establish and enforce rigid, non-negotiable procedures for sensitive actions. For example, any request for a wire transfer over a certain threshold must require dual approval and out-of-band verification (e.g., a phone call to a known, trusted number). This process-level hardening creates a human firewall that an AI-driven social engineering attack cannot bypass, regardless of how convincing the initial email or message is. The process itself becomes the security control, rendering the social engineering attempt ineffective.
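Such process-level controls can even be encoded so that software refuses to execute a sensitive action until the procedure is satisfied. The sketch below is a minimal, hypothetical policy check (the threshold, field names, and approval rules are illustrative assumptions, not a real payments API):

```python
from dataclasses import dataclass, field

DUAL_APPROVAL_THRESHOLD = 10_000  # hypothetical policy threshold (USD)

@dataclass
class WireTransferRequest:
    amount: float
    vendor_known: bool              # vendor already on the approved list?
    approvers: set = field(default_factory=set)
    oob_verified: bool = False      # confirmed via call to a known number?

def approve(req: WireTransferRequest) -> tuple[bool, str]:
    """Enforce the process controls described above. Note that the email
    (or deepfake) that prompted the request never enters the decision;
    only the completed verification steps do."""
    if req.amount >= DUAL_APPROVAL_THRESHOLD and len(req.approvers) < 2:
        return False, "dual approval required above threshold"
    if not req.vendor_known and not req.oob_verified:
        return False, "new vendor requires out-of-band verification"
    return True, "approved"

# A perfectly convincing AI-generated 'urgent CEO request' still fails:
urgent = WireTransferRequest(amount=250_000, vendor_known=False,
                             approvers={"cfo"})
print(approve(urgent))  # (False, 'dual approval required above threshold')
```

Because the check gates on completed process steps rather than on how persuasive the request looks, the process itself is the control, as described above.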

Cybersecurity professional with over 10 years of specialized experience in security operations, threat intelligence, incident response, and security automation. Expertise spans SOAR/XSOAR orchestration, threat intelligence platforms, SIEM/UEBA analytics, and building cyber fusion centers. Background includes technical enablement, solution architecture for enterprise and government clients, and implementing security automation workflows across IR, TIP, and SOC use cases.