G7 Cyber Expert Group Publishes Statement on Managing Artificial Intelligence Risks in the Financial Sector

INFORMATIONAL
October 6, 2025
4m read
Policy and Compliance, Regulatory, Cloud Security

Related Entities

Organizations

G7 Cyber Expert Group (CEG), U.S. Department of the Treasury


Executive Summary

The G7 Cyber Expert Group (CEG), which advises G7 Finance Ministers and Central Bank Governors, has published a statement addressing the significant cybersecurity challenges and opportunities presented by the rapid adoption of Artificial Intelligence (AI) in the financial sector. The statement, released on October 6, 2025, by the U.S. Department of the Treasury, emphasizes that while AI can enhance cyber defenses, it also creates new attack vectors and amplifies existing risks. The CEG calls for a proactive and collaborative approach from financial institutions, regulators, and central banks to establish strong governance and risk management frameworks to ensure financial stability in the age of AI.


Regulatory Details

The G7 CEG statement is not a formal regulation but serves as high-level guidance and a set of key considerations for the global financial ecosystem. It aims to foster international consensus on managing AI-related cyber risks. The core principles outlined in the statement revolve around:

  • Dual-Use Nature of AI: Acknowledging that AI is both a powerful tool for defense (e.g., enhanced threat detection, automated response) and a weapon for offense (e.g., AI-powered malware, advanced social engineering).
  • Novel AI-Specific Risks: Highlighting new vulnerabilities unique to AI systems, such as:
    • Data Poisoning: Maliciously manipulating the training data of an AI model to corrupt its outputs.
    • Model Evasion: Attackers crafting inputs that are misclassified by AI security models, allowing them to bypass defenses (a toy evasion sketch follows this list).
    • Confidentiality Attacks: Extracting sensitive information from an AI model's training data.
  • Need for Governance: Calling on financial firms to implement robust governance and risk management frameworks that specifically address the entire lifecycle of AI systems, from development to deployment and decommissioning.
  • Key Principles for AI Systems: Emphasizing the need for security, resilience, fairness, and transparency in the design and operation of AI models used in finance.
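
To make the model-evasion risk concrete, here is a minimal, self-contained sketch. The classifier, synthetic data, and step size are hypothetical stand-ins for illustration, not anything prescribed by the G7 statement: it trains a toy fraud detector, then greedily perturbs a flagged input until the model waves it through.

```python
# Minimal model-evasion sketch against a toy fraud classifier.
# All data, model choice, and step sizes are hypothetical.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Train a stand-in "fraud detector" on synthetic data.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
model = LogisticRegression().fit(X, y)

# Pick a sample the model currently flags as fraud (class 1).
fraud_idx = np.where(model.predict(X) == 1)[0][0]
x_adv = X[fraud_idx].copy()

# Greedy evasion: nudge the input against the model's weight vector
# (the gradient of a logistic model's decision function) until the
# prediction flips to "legitimate" (class 0).
step = 0.1
direction = model.coef_[0] / np.linalg.norm(model.coef_[0])
for _ in range(100):
    if model.predict(x_adv.reshape(1, -1))[0] == 0:
        print("Evasion succeeded; total perturbation:",
              np.round(x_adv - X[fraud_idx], 2))
        break
    x_adv -= step * direction
else:
    print("Model resisted this simple evasion attempt.")
```

The same loop structure, run across many flagged samples, gives a crude "evasion rate" metric that validation teams can track over time.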

Affected Organizations

The statement is directed at a wide range of stakeholders within the G7 nations (Canada, France, Germany, Italy, Japan, UK, US) and the broader global financial system:

  • International Banks and Financial Institutions
  • Financial Technology (FinTech) companies
  • Central Banks and Monetary Authorities
  • Financial Regulators and Supervisory Bodies
  • Third-party service providers offering AI solutions to the financial sector

Compliance Requirements

While not legally binding, the statement signals the direction of future regulation and supervisory expectations. Financial institutions will be expected to demonstrate that they are proactively managing AI-related risks. This includes:

  • AI Governance Framework: Establishing clear policies, roles, and responsibilities for the use of AI.
  • Model Risk Management: Extending existing model risk management frameworks to cover AI/ML models, including validation, testing for bias, and security hardening (a sketch of such a validation gate follows this list).
  • Secure AI Development Lifecycle: Integrating security practices into the development and deployment of AI systems (e.g., MLOps).
  • Third-Party Risk Management: Scrutinizing the security of AI products and services procured from third-party vendors.
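
As an illustration of what extending model risk management to AI/ML could look like in practice, the sketch below shows a hypothetical pre-deployment validation gate of the kind a secure AI development lifecycle might run in CI. The metric names and thresholds are assumptions for illustration, not regulatory values.

```python
# Hypothetical pre-deployment validation gate for an AI/ML model.
# Thresholds and metric names are illustrative, not prescribed.
from dataclasses import dataclass

@dataclass
class ValidationReport:
    accuracy: float          # held-out predictive performance
    disparate_impact: float  # fairness: outcome ratio between groups
    evasion_rate: float      # share of adversarial inputs that fooled the model

def deployment_gate(report: ValidationReport) -> bool:
    """Return True only if the model clears all risk thresholds."""
    checks = {
        "accuracy >= 0.90": report.accuracy >= 0.90,
        "disparate_impact >= 0.80": report.disparate_impact >= 0.80,  # 80% rule
        "evasion_rate <= 0.05": report.evasion_rate <= 0.05,
    }
    for name, passed in checks.items():
        print(f"{'PASS' if passed else 'FAIL'}: {name}")
    return all(checks.values())

# Example: a model that is accurate but easy to evade should be blocked.
print(deployment_gate(ValidationReport(accuracy=0.94,
                                       disparate_impact=0.85,
                                       evasion_rate=0.12)))
```

The point of the gate is that security and fairness checks block promotion just as a failed accuracy test would, rather than being advisory.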

Impact Assessment

The proliferation of AI in finance presents both significant opportunities and systemic risks:

  • Positive Impact: AI can dramatically improve fraud detection, anti-money laundering (AML) efforts, and cybersecurity threat intelligence, making the financial system more secure.
  • Negative Impact / Risks:
    • Automated Cyberattacks: Adversaries can use AI to automate vulnerability discovery and exploitation at a scale and speed that human defenders cannot match.
    • Hyper-Realistic Phishing: AI-generated content (deepfakes, personalized text) can make social engineering attacks far more convincing and difficult to detect.
    • Systemic Risk: A vulnerability in a widely used AI model or platform could have cascading effects across the entire financial system, similar to a software supply chain vulnerability.
    • 'Black Box' Problem: The lack of transparency in some complex AI models can make it difficult to understand, audit, and secure their decision-making processes.

Compliance Guidance

Financial institutions should take the following steps in response to the G7 statement:

  1. Create an AI Risk Inventory: Identify and document all instances of AI use within the organization, from chatbots to complex trading algorithms (a sketch of one inventory record follows this list).
  2. Establish a Cross-Functional AI Governance Body: Bring together experts from risk, compliance, IT, security, and business units to oversee the firm's AI strategy.
  3. Invest in AI Security Expertise: Hire or train staff with skills in adversarial machine learning and AI security to red-team internal models and assess vendor products.
  4. Promote a Culture of Transparency: Require that all AI projects maintain clear documentation on data sources, model architecture, and performance metrics to ensure auditability and accountability.
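
As a starting point for step 1, the sketch below shows one possible shape for an inventory record. The field names and risk tiers are illustrative assumptions, not a mandated schema.

```python
# Hypothetical AI risk inventory record; fields and tiers are assumptions.
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str                # e.g. "customer-service-chatbot"
    owner: str               # accountable business unit
    purpose: str             # what decisions the system influences
    data_sources: list[str]  # training/inference data lineage
    third_party: bool        # procured from a vendor?
    risk_tier: str           # e.g. "low" / "medium" / "high"

inventory: list[AISystemRecord] = [
    AISystemRecord("fraud-scoring-model", "Payments Risk",
                   "blocks suspicious card transactions",
                   ["card_txn_history", "device_fingerprints"],
                   third_party=False, risk_tier="high"),
    AISystemRecord("support-chatbot", "Customer Operations",
                   "answers account questions",
                   ["public_faq", "chat_transcripts"],
                   third_party=True, risk_tier="medium"),
]

# A governance body (step 2) can then slice the inventory by risk tier:
high_risk = [s.name for s in inventory if s.risk_tier == "high"]
print("High-risk AI systems requiring enhanced review:", high_risk)
```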

Timeline of Events

1. October 6, 2025: The G7 Cyber Expert Group publishes its statement on AI risks in the financial sector.
2. October 6, 2025: This article was published.

MITRE ATT&CK Mitigations

  • Train employees to be skeptical of hyper-realistic phishing attempts generated by AI.
  • Implement robust model risk management and governance frameworks for all AI systems.
  • Test AI models in a secure, sandboxed environment to identify vulnerabilities before deployment.

D3FEND Defensive Countermeasures

In line with the G7's recommendations, financial institutions should adopt adversarial machine learning testing as a standard part of their AI model validation process. This involves performing dynamic analysis by setting up a 'red team' for AI, where security experts actively try to break the models before they are deployed. This includes testing for model evasion (crafting inputs to fool a fraud detection system), data poisoning (injecting bad data to corrupt learning), and model inversion (extracting sensitive training data). By simulating how a real adversary would attack an AI system, firms can identify and remediate vulnerabilities in a controlled environment, ensuring the model is resilient against the new threats highlighted by the G7.
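
A minimal example of such a red-team exercise, here for the data-poisoning case, is to train the same model twice, once on clean data and once after an attacker flips the labels on a slice of the training set, then compare held-out accuracy. Everything below (dataset, model, poisoning rate) is a hypothetical stand-in.

```python
# Minimal data-poisoning red-team sketch: clean vs. label-flipped training.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

clean = LogisticRegression(max_iter=500).fit(X_tr, y_tr)

# Poison: flip the labels of 10% of the training rows.
rng = np.random.default_rng(1)
y_poisoned = y_tr.copy()
idx = rng.choice(len(y_tr), size=len(y_tr) // 10, replace=False)
y_poisoned[idx] = 1 - y_poisoned[idx]
poisoned = LogisticRegression(max_iter=500).fit(X_tr, y_poisoned)

print(f"clean accuracy:    {clean.score(X_te, y_te):.3f}")
print(f"poisoned accuracy: {poisoned.score(X_te, y_te):.3f}")
```

The accuracy gap between the two runs gives a first-order measure of how sensitive the model is to corrupted training data.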

To address the governance and risk management gaps identified by the G7, financial firms should implement rigorous application configuration hardening for their entire MLOps pipeline. This means establishing a formal AI/ML governance framework that defines security requirements at each stage of the model lifecycle. Key hardening steps include: securing data ingestion pipelines to prevent data poisoning, implementing strict access controls for model training environments, ensuring all API endpoints serving model inferences require strong authentication, and logging all prediction requests for anomaly detection. This creates an auditable and defensible posture, moving beyond just model accuracy to ensure the entire system is secure and resilient by design.
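
The last two hardening steps can be sketched in a few lines. The example below is a simplified stand-in (the key handling, client IDs, and model call are assumptions; a production deployment would pull secrets from a vault and sit behind mTLS), showing authenticated inference with audit logging of every prediction request.

```python
# Hypothetical hardened inference wrapper: API-key auth + audit logging.
import hmac
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("inference-audit")

EXPECTED_KEY = "replace-with-secret-from-vault"  # assumption: injected at deploy time

def predict(features: list[float]) -> int:
    # Stand-in for the real model call.
    return int(sum(features) > 0)

def secured_predict(api_key: str, client_id: str, features: list[float]) -> int:
    # Constant-time comparison avoids leaking key bytes via timing.
    if not hmac.compare_digest(api_key, EXPECTED_KEY):
        log.warning("DENY client=%s reason=bad_api_key", client_id)
        raise PermissionError("invalid API key")
    start = time.perf_counter()
    result = predict(features)
    # Log every prediction request so anomaly detection can run downstream.
    log.info("ALLOW client=%s n_features=%d result=%d latency_ms=%.1f",
             client_id, len(features), result,
             (time.perf_counter() - start) * 1000)
    return result

print(secured_predict("replace-with-secret-from-vault", "svc-fraud-ui", [0.2, -0.1, 0.5]))
```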

Sources & References

  • Financial Regulation Weekly Bulletin - 9 October 2025. Slaughter and May (slaughterandmay.com), October 6, 2025.
  • Statement on Artificial Intelligence and Cyber Security from the G7 Cyber Expert Group. U.S. Department of the Treasury (home.treasury.gov), October 6, 2025.

Article Author

Jason Gomes

• Cybersecurity Practitioner

Cybersecurity professional with over 10 years of specialized experience in security operations, threat intelligence, incident response, and security automation. Expertise spans SOAR/XSOAR orchestration, threat intelligence platforms, SIEM/UEBA analytics, and building cyber fusion centers. Background includes technical enablement, solution architecture for enterprise and government clients, and implementing security automation workflows across IR, TIP, and SOC use cases.

Threat Intelligence & Analysis, Security Orchestration (SOAR/XSOAR), Incident Response & Digital Forensics, Security Operations Center (SOC), SIEM & Security Analytics, Cyber Fusion & Threat Sharing, Security Automation & Integration, Managed Detection & Response (MDR)

Tags

AI, Artificial Intelligence, G7, Finance, Cyber Risk, Policy
