AI Governance
The EU AI Act is moving from text to practice. Its provisions are entering into force in stages, with most obligations for high-risk AI systems applying from August 2, 2026, and financial institutions are starting to ask a crucial question:
What does AI compliance mean in a sector already governed by DORA, GDPR, and strict model risk management rules?
That question was at the heart of the ACPR’s recent “Réunion de place” (industry meeting) held on September 17. One message stood out clearly:
“AI governance in finance doesn’t start from zero. It builds on an existing, mature regulatory framework.”
What’s changing now is the level of proof expected: institutions must be able to demonstrate traceability, transparency, and auditability, even for general-purpose AI systems.
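To make that level of proof concrete, here is a minimal sketch, in Python, of one way an institution might capture a tamper-evident trace of model decisions. Everything in it (the AuditLog class, its record and verify methods, the field names) is hypothetical and purely illustrative, not a prescribed implementation: each output is logged together with its inputs and model version, and entries are hash-chained so an auditor can later verify that the record has not been altered.

```python
# Illustrative sketch only: a hash-chained, append-only audit log for
# AI model decisions. All names are hypothetical, not a standard API.
import hashlib
import json
import time
from dataclasses import dataclass, field


@dataclass
class AuditLog:
    """Append-only, tamper-evident record of model decisions."""
    entries: list = field(default_factory=list)
    _last_hash: str = "0" * 64  # genesis hash for the first entry

    def record(self, model_id: str, model_version: str,
               inputs: dict, output) -> dict:
        entry = {
            "timestamp": time.time(),
            "model_id": model_id,
            "model_version": model_version,  # e.g. a git SHA or registry tag
            "inputs": inputs,
            "output": output,
            "prev_hash": self._last_hash,
        }
        # Chain each entry to its predecessor so later edits are detectable.
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self._last_hash = entry["hash"]
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain; returns False if any entry was altered."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True


# Example usage: log one (fictional) credit decision, then verify the trail.
log = AuditLog()
log.record("credit-scoring", "v2.3.1",
           inputs={"applicant_id": "A-102", "ltv": 0.72}, output="approve")
assert log.verify()
```

The design choice worth noting is the hash chain: it turns a plain log into evidence, because any retroactive edit breaks the chain and is immediately detectable when the record is verified.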
Auditability moves to the center
The ACPR underlined a growing priority: developing specific audit methodologies for AI. To support this, it has created the Directorate for Innovation, Data and Technological Risks (DIDRIT) and the Technological Risk Surveillance Service (SRT), both dedicated to AI supervision and digital resilience.
Auditability is becoming a real operational challenge, one that requires new tools, processes, and expertise to understand and explain how AI systems behave.
Turning compliance into confidence
At Yields, we see these developments as an opportunity rather than an obstacle. The principles of sound risk management that have guided us since the beginning are now becoming the foundation of responsible AI governance.
Based in Belgium and active across Europe, we’re already helping financial institutions in France and beyond bridge the gap between regulatory ambition and operational reality, giving them the means to demonstrate reliability and trust in their AI systems.
In the end, this evolution isn’t just about compliance. It’s about making AI accountable, transparent, and aligned with the principles that already define the strength of the financial sector.
Author

Sébastien Viguié is the co-founder of Yields, the first FinTech platform leveraging AI for enterprise-scale model testing and validation. A strong advocate of model risk governance and strategy, he focuses on helping financial institutions embed trust, transparency, and compliance into their AI and model lifecycle. Previously CISO at Yields, Sébastien gained hands-on experience reconciling cybersecurity principles with model risk management and AI governance, a perspective he now extends to emerging standards and regulatory frameworks such as ISO, NIST, and the EU AI Act.
Before founding Yields, he worked as a front-office quantitative analyst at BNP Paribas, where he developed a deep understanding of model development and validation in fast-paced trading environments, expertise that continues to inform his pragmatic approach to responsible AI and risk management today.