The G7 Cyber Expert Group (CEG), an advisory body to G7 Finance Ministers and Central Bank Governors, released a statement on October 6, 2025, addressing the profound impact of Artificial Intelligence (AI) on cybersecurity within the global financial system. The document frames AI as a double-edged sword, offering powerful new defensive capabilities while simultaneously arming threat actors with tools to increase the speed and scale of their attacks. The statement is not a new regulation but a call to action, urging financial institutions and authorities to proactively manage emerging AI-related risks. Key concerns highlighted include accelerated exploitation, vendor concentration risk, and internal capability gaps.
Rather than introducing binding regulations, the statement establishes a framework of shared understanding and encourages voluntary adoption of best practices. It aims to foster international cooperation and a common approach to managing AI's cybersecurity implications. The guidance is directed at financial institutions, financial authorities, and the broader ecosystem of AI developers and service providers.
The guidance applies broadly to the entire financial sector within G7 nations (Canada, France, Germany, Italy, Japan, the UK, and the US) and has implications for the global financial system.
While not mandatory, the CEG's statement outlines seven key considerations that financial institutions are strongly encouraged to adopt.
There is no formal implementation timeline or deadline, as the statement serves as strategic guidance. However, the G7 CEG urges financial institutions and authorities to begin acting now. National regulators within the G7 are expected to start incorporating these principles into their supervisory frameworks and future regulatory updates over the next 12 to 24 months.
Adopting these principles will require significant investment in technology, talent, and process re-engineering.
As the statement is non-binding, there are no direct penalties for non-compliance. However, financial authorities within G7 nations are likely to use these principles as a benchmark during cybersecurity examinations and audits. Firms that fail to demonstrate adequate management of AI-related risks could face supervisory actions, including findings, recommendations, and potentially increased capital requirements or fines under existing cybersecurity and operational resilience regulations.

Cybersecurity professional with over 10 years of specialized experience in security operations, threat intelligence, incident response, and security automation. Expertise spans SOAR/XSOAR orchestration, threat intelligence platforms, SIEM/UEBA analytics, and building cyber fusion centers. Background includes technical enablement, solution architecture for enterprise and government clients, and implementing security automation workflows across IR, TIP, and SOC use cases.