The G7 Cyber Expert Group (CEG), which advises G7 Finance Ministers and Central Bank Governors, has published a statement addressing the significant cybersecurity challenges and opportunities presented by the rapid adoption of Artificial Intelligence (AI) in the financial sector. The statement, released on October 6, 2025, by the U.S. Department of the Treasury, emphasizes that while AI can enhance cyber defenses, it also creates new attack vectors and amplifies existing risks. The CEG calls for a proactive and collaborative approach from financial institutions, regulators, and central banks to establish strong governance and risk management frameworks to ensure financial stability in the age of AI.
The G7 CEG statement is not a formal regulation but serves as high-level guidance and a set of key considerations for the global financial ecosystem. It aims to foster international consensus on managing AI-related cyber risks, with core principles centered on strong governance, proactive risk management, and cross-border collaboration.
The statement is directed at a wide range of stakeholders within the G7 nations (Canada, France, Germany, Italy, Japan, UK, US) and the broader global financial system, including financial institutions, regulators, and central banks.
While not legally binding, the statement signals the direction of future regulation and supervisory expectations. Financial institutions will be expected to demonstrate that they are proactively managing AI-related risks, including through the governance, testing, and monitoring measures described below.
The proliferation of AI in finance presents both significant opportunities, such as stronger fraud detection and cyber defenses, and systemic risks, as the same capabilities hand adversaries new attack vectors and amplify existing threats.
Financial institutions should take the following steps in response to the G7 statement:
Train employees to be skeptical of hyper-realistic phishing attempts generated by AI.
Implement robust model risk management and governance frameworks for all AI systems.
Mapped D3FEND Techniques:
Test AI models in a secure, sandboxed environment to identify vulnerabilities before deployment.
Mapped D3FEND Techniques:
In line with the G7's recommendations, financial institutions must adopt adversarial machine learning testing as a standard part of their AI model validation process. This involves performing dynamic analysis by setting up a 'red team' for AI, where security experts actively try to break the models before they are deployed. This includes testing for model evasion (crafting inputs to fool a fraud detection system), data poisoning (injecting bad data to corrupt learning), and model inversion (extracting sensitive training data). By simulating how a real adversary would attack an AI system, firms can identify and remediate vulnerabilities in a controlled environment, ensuring the model is resilient against the new threats highlighted by the G7.
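As a concrete illustration, the sketch below shows what a minimal evasion test for a fraud-scoring model might look like. It assumes scikit-learn and a simple linear classifier trained on synthetic data; the feature set, perturbation budget (epsilon), and attack step are illustrative assumptions, not anything specified in the G7 statement.

```python
# Minimal sketch of an adversarial-evasion check for a fraud-scoring model.
# Assumes scikit-learn and NumPy; feature set and epsilon budget are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for transaction features (amount, velocity, geo-risk, ...).
X, y = make_classification(n_samples=5000, n_features=10, weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Evasion test: nudge each correctly flagged fraud sample against the model's
# weight vector (an FGSM-style step for a linear model) within a small budget.
epsilon = 0.5
w = model.coef_[0]
fraud = X_test[(y_test == 1) & (model.predict(X_test) == 1)]

if len(fraud) == 0:
    print("No correctly flagged fraud samples to perturb.")
else:
    adversarial = fraud - epsilon * np.sign(w)
    evasion_rate = np.mean(model.predict(adversarial) == 0)
    print(f"Flagged frauds that evade detection after perturbation: {evasion_rate:.1%}")
```

In a real red-team exercise the same idea would be applied against the production model with gradient-based or black-box attack tooling and a domain-realistic perturbation budget, and the resulting evasion rate would feed directly into the model validation report.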
To address the governance and risk management gaps identified by the G7, financial firms must implement rigorous application configuration hardening for their entire MLOps pipeline. This means establishing a formal AI/ML governance framework that defines security requirements at each stage of the model lifecycle. Key hardening steps include: securing data ingestion pipelines to prevent data poisoning, implementing strict access controls for model training environments, ensuring all API endpoints serving model inferences require strong authentication, and logging all prediction requests for anomaly detection. This creates an auditable and defensible posture, moving beyond just model accuracy to ensure the entire system is secure and resilient by design.
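The sketch below illustrates two of those hardening steps for an inference endpoint: strong authentication on every call and logging of every prediction request for downstream anomaly detection. It assumes FastAPI; the header name, key handling, and log fields are illustrative placeholders rather than prescribed controls.

```python
# Minimal sketch of a hardened model-inference endpoint, assuming FastAPI.
# API key handling, log fields, and the model call are illustrative placeholders;
# in practice keys come from a secrets manager and logs feed the SIEM.
import logging
import secrets

from fastapi import Depends, FastAPI, HTTPException, Security
from fastapi.security import APIKeyHeader
from pydantic import BaseModel

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("inference-audit")

app = FastAPI()
api_key_header = APIKeyHeader(name="X-API-Key")
EXPECTED_KEY = "replace-with-secrets-manager-lookup"  # placeholder, not a real secret

class ScoreRequest(BaseModel):
    transaction_id: str
    features: list[float]

def require_api_key(api_key: str = Security(api_key_header)) -> str:
    # Strong authentication on every inference call; constant-time comparison.
    if not secrets.compare_digest(api_key, EXPECTED_KEY):
        raise HTTPException(status_code=401, detail="invalid API key")
    return api_key

@app.post("/score")
def score(req: ScoreRequest, _key: str = Depends(require_api_key)):
    # Log every prediction request so anomaly detection can spot probing,
    # scraping, or model-inversion attempts against the endpoint.
    logger.info("prediction_request transaction_id=%s n_features=%d",
                req.transaction_id, len(req.features))
    fraud_score = 0.0  # placeholder for the real model call
    return {"transaction_id": req.transaction_id, "fraud_score": fraud_score}
```

Requiring authentication and an audit log at the serving layer keeps the control independent of the model itself, so the same posture applies even as models are retrained or swapped out.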
