AI Risk Disclosures Skyrocket Among S&P 500, Cybersecurity a Top Concern

Report: Over 70% of S&P 500 Companies Now Cite AI as a Material Risk, Up from 12% in 2023

October 7, 2025
Policy and Compliance · Cloud Security


Executive Summary

A report published on October 7, 2025, by The Conference Board indicates a monumental shift in how major corporations perceive the risks associated with Artificial Intelligence. The study found that over 70% of companies in the S&P 500 index now include AI-related risks in their public disclosures, a nearly six-fold increase from 12% in 2023. This rapid rise underscores the speed at which AI has been integrated into core business processes and the growing awareness in boardrooms of its potential downsides. The most frequently mentioned concern is reputational risk (38%), while cybersecurity risk is the second-most cited (20%), reflecting concerns about an expanded attack surface and new vulnerabilities. The data suggests that while enterprises are eagerly adopting AI, the governance structures required to manage its risks are lagging behind.


Regulatory Details

While no single regulation mandates these disclosures, the trend is driven by the obligation of publicly traded companies to inform investors of material risks that could affect financial performance. The surge in AI-related disclosures reflects a consensus among corporate legal and risk departments that AI now meets this materiality threshold. The key risk categories being disclosed are:

  1. Reputational Risk (38%): Fear of brand damage resulting from AI failures, such as biased algorithms, privacy breaches, or poor customer interactions with AI-driven tools.
  2. Cybersecurity Risk (20%): Concerns that AI introduces new security vulnerabilities. This includes the risk of data poisoning, model theft, and the use of insecure third-party AI applications that could expose corporate data.
  3. Legal and Regulatory Risk: Uncertainty surrounding the evolving global regulatory landscape for AI. New laws could impose significant compliance costs or restrict the use of certain AI technologies.

Affected Organizations

This trend affects virtually all publicly traded companies, especially those in the S&P 500, which tend to lead in technology adoption. The pressure to innovate with AI is now balanced against the legal and financial duty to disclose the associated risks. The industries most affected include:

  • Technology
  • Finance
  • Healthcare
  • Retail

Compliance Requirements

There are no explicit compliance requirements detailed in the report, but the trend itself implies a new de facto standard for corporate governance. Boards of directors and C-suite executives are now expected to:

  • Oversee AI Strategy and Risk: Formally integrate AI into the board's oversight responsibilities. A separate PwC survey notes that only 35% of boards have done so.
  • Develop AI Governance Frameworks: Establish clear policies for the ethical development, procurement, and deployment of AI systems.
  • Manage Third-Party AI Risk: Implement vendor risk management processes specifically for AI providers, scrutinizing their security practices, data handling policies, and model integrity.
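The third-party vetting expectation above can be sketched as a simple pass/fail checklist. The `AIVendorReview` class and the specific control names are illustrative assumptions, not drawn from the report:

```python
from dataclasses import dataclass, field

@dataclass
class AIVendorReview:
    """Hypothetical checklist for reviewing an AI vendor; control names
    are examples of the security, data-handling, and model-integrity
    checks the text describes."""
    vendor: str
    controls: dict = field(default_factory=lambda: {
        "security_attestation": False,        # e.g. SOC 2 / ISO 27001
        "no_training_on_customer_data": False,  # data handling policy
        "model_provenance_documented": False,   # model integrity
        "incident_response_sla": False,
    })

    def passed(self) -> bool:
        # A vendor passes only when every control is satisfied.
        return all(self.controls.values())

review = AIVendorReview("ExampleAI Corp")
review.controls["security_attestation"] = True
print(review.passed())  # → False: three controls remain unmet
```

In practice each control would carry evidence and an owner rather than a boolean, but the gate-before-procurement shape is the point.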

Impact Assessment

The growing acknowledgment of AI risk has several business and operational impacts:

  • Increased Scrutiny: Boards, investors, and regulators will now place greater scrutiny on companies' AI initiatives, demanding more transparency and accountability.
  • Need for New Expertise: Companies need to hire or develop talent with expertise in AI security, ethics, and governance, a skill set that is currently in short supply.
  • Slower Adoption Cycles: The need for more rigorous vetting and governance may slow down the previously breakneck pace of AI adoption in some enterprises, as they take a more cautious approach.

Compliance Guidance

For organizations navigating this new landscape, the focus should be on building a robust AI governance program.

  1. Establish an AI Governance Committee: Create a cross-functional team including legal, compliance, IT, security, and business leaders to oversee all AI projects.
  2. Create an AI Use Policy: Define acceptable and unacceptable uses of AI within the organization. This should explicitly address the use of public generative AI tools with corporate data.
  3. Implement an AI Risk Assessment Framework: Develop a process to assess the risks of any new AI project or third-party tool before it is deployed. This should include security, privacy, ethical, and reputational risk factors.
  4. Focus on Secure AI Development: For companies building their own AI, adopt a Secure AI Development Lifecycle (SAIDL) to build security in from the start, addressing risks like data poisoning and model evasion.
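As a minimal sketch of the risk-assessment framework in step 3, a pre-deployment gate over the four factors named there might look like the following; the 1-to-5 scale and the threshold value are assumptions for illustration:

```python
# The four risk factors the guidance names for any new AI project or tool.
RISK_CATEGORIES = ("security", "privacy", "ethical", "reputational")

def assess(scores: dict, threshold: int = 3) -> str:
    """Each category is scored 1 (low) to 5 (high). Any score above the
    threshold sends the project back for mitigation before deployment."""
    missing = [c for c in RISK_CATEGORIES if c not in scores]
    if missing:
        raise ValueError(f"unscored categories: {missing}")
    flagged = [c for c in RISK_CATEGORIES if scores[c] > threshold]
    return "needs mitigation: " + ", ".join(flagged) if flagged else "approved"

print(assess({"security": 4, "privacy": 2, "ethical": 1, "reputational": 2}))
# → needs mitigation: security
```

A real framework would attach mitigation owners and review dates to each flagged category; the sketch only shows the gating logic.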

Timeline of Events

  1. October 7, 2025: The Conference Board releases its report on AI risk disclosures among S&P 500 companies.
  2. October 7, 2025: This article was published.

Sources & References

Public disclosures of AI risk surge among S&P 500 companies, Cybersecurity Dive (cybersecuritydive.com), October 7, 2025

Article Author

Jason Gomes

• Cybersecurity Practitioner

Cybersecurity professional with over 10 years of specialized experience in security operations, threat intelligence, incident response, and security automation. Expertise spans SOAR/XSOAR orchestration, threat intelligence platforms, SIEM/UEBA analytics, and building cyber fusion centers. Background includes technical enablement, solution architecture for enterprise and government clients, and implementing security automation workflows across IR, TIP, and SOC use cases.

Threat Intelligence & Analysis · Security Orchestration (SOAR/XSOAR) · Incident Response & Digital Forensics · Security Operations Center (SOC) · SIEM & Security Analytics · Cyber Fusion & Threat Sharing · Security Automation & Integration · Managed Detection & Response (MDR)

Tags

AI, Artificial Intelligence, Risk Management, Corporate Governance, Compliance
