A research report from AXIS Capital has exposed a critical disconnect between executive leadership and security leadership on the topic of Artificial Intelligence (AI). The survey of 500 leaders in the U.S. and U.K. found that CEOs are significantly more optimistic about AI's ability to strengthen cybersecurity, while CISOs are more cautious, focusing on the novel risks AI introduces. This perception gap poses a strategic risk to organizations, potentially leading to inadequate resource allocation for securing AI systems and a false sense of security at the board level.
The survey, published January 20, 2026, provides several key data points illustrating the divide:
This issue affects organizations in every industry that are adopting or planning to adopt AI technologies. The divide is particularly acute in sectors that are early adopters of AI for core business functions. The findings are most relevant for boards of directors, executive leadership teams, and security departments in the U.S. and the U.K.
While the survey is not a formal regulation, it highlights emerging risk areas that will likely become subject to future compliance and governance frameworks. Key risks and concerns voiced by CISOs include:
The primary impact of this CEO-CISO disconnect is strategic misalignment. If CEOs push for rapid AI adoption without fully appreciating the risks articulated by CISOs, organizations may deploy insecure AI systems, underfund necessary security controls, and create new, unmonitored attack surfaces. The plan to reduce security headcount based on anticipated AI productivity gains is particularly alarming, as human expertise is more critical than ever for managing the complexity and novelty of AI-related threats. The result could be a net weakening of security posture despite increased technology spending.
While there are no direct penalties for a perception gap, the consequences will manifest as an increase in successful cyberattacks. Regulatory bodies are beginning to focus on AI security, and a failure to demonstrate due diligence in securing AI systems could lead to significant fines under existing data protection laws (e.g., the GDPR) if an AI-related breach occurs.
To bridge this gap, organizations should take the following steps:

Cybersecurity professional with over 10 years of specialized experience in security operations, threat intelligence, incident response, and security automation. Expertise spans SOAR/XSOAR orchestration, threat intelligence platforms, SIEM/UEBA analytics, and building cyber fusion centers. Background includes technical enablement, solution architecture for enterprise and government clients, and implementing security automation workflows across IR, TIP, and SOC use cases.