From principles to practice: bringing AI governance to life in finance


The EU AI Act is taking shape. As its provisions gradually enter into force, with most obligations for high-risk AI systems applying from August 2, 2026, financial institutions are starting to ask a crucial question:

What does AI compliance mean in a sector already governed by DORA, GDPR, and strict model risk management rules?

That question was at the heart of the recent ACPR “Réunion de place” held on September 17. One message stood out clearly:

“AI governance in finance doesn’t start from zero. It builds on an existing, mature regulatory framework.”

What’s changing now is the level of proof expected: institutions must be able to demonstrate traceability, transparency, and auditability, even for general-purpose AI systems.

1. Convergence matters

European authorities are working to map the overlaps between the AI Act and financial regulations. The aim is to make supervision more coherent, not more complex.

The standardization work led by CEN-CENELEC should bring much-needed clarity on what compliance will look like in practice. For financial institutions, this means aligning risk, compliance, and data teams under a shared governance framework, one that connects regulatory and technical perspectives.

2. Auditability moves to the center

The ACPR underlined a growing priority: developing specific audit methodologies for AI. To support this, it has created the Directorate for Innovation, Data and Technological Risks (DIDRIT) and the Technological Risk Surveillance Service (SRT), both dedicated to AI supervision and digital resilience.

Auditability is becoming a real operational challenge, one that requires new tools, processes, and expertise to understand and explain how AI systems behave.

3. Fairness and explainability take on real meaning

The first ACPR workshops made it clear that fairness and bias management remain difficult in practice. Defining appropriate metrics, identifying sensitive variables, and ensuring transparency are still works in progress.

Supervisors now expect more than good intentions: they expect measurable and defensible approaches to fairness and explainability.

Turning compliance into confidence

At Yields, we see these developments as an opportunity rather than an obstacle. The principles of sound risk management that have guided us since the beginning are now becoming the foundation of responsible AI governance.

Based in Belgium and active across Europe, we’re already helping financial institutions in France and beyond bridge the gap between regulatory ambition and operational reality, giving them the means to demonstrate reliability and trust in their AI systems.

In the end, this evolution isn’t just about compliance. It’s about making AI accountable, transparent, and aligned with the principles that already define the strength of the financial sector.



Author


Sébastien Viguié is the co-founder of Yields, the first FinTech platform leveraging AI for enterprise-scale model testing and validation. A strong advocate of model risk governance and strategy, he focuses on helping financial institutions embed trust, transparency, and compliance into their AI and model lifecycle. Previously CISO at Yields, Sébastien gained hands-on experience reconciling cybersecurity principles with model risk management and AI governance, a perspective he now extends to emerging regulatory frameworks such as ISO, NIST, and the EU AI Act.

Before founding Yields, he worked as a front-office quantitative analyst at BNP Paribas, where he developed a deep understanding of model development and validation in fast-paced trading environments, expertise that continues to inform his pragmatic approach to responsible AI and risk management today.