How Banks Can Take Control of AI Risk
Artificial Intelligence is changing how banks work, helping with credit decisions, fraud detection, risk modelling, and even customer service. But with great power comes great responsibility.
Every time a bank relies on a model to make a decision, there’s a chance that model could be wrong, outdated, or misused. That’s called model risk. And if your bank is using AI, you’re already exposed to it.

There’s no such thing as a perfect model
Every model is a simplified version of the real world, built on data, assumptions, and rules. But the world keeps changing: people change, markets shift, regulations evolve. So even a well-built model will eventually drift out of sync. That’s why model risk isn’t a one-time concern; it’s ongoing.
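To make "drifting out of sync" concrete, one widely used diagnostic is the Population Stability Index (PSI), which compares the score distribution a model saw at development time with what it sees in production. This is a minimal sketch, not any particular vendor's implementation; the data, bin count, and the 0.25 rule of thumb are illustrative assumptions:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare a reference (development-time) score distribution with a
    current (live) one. A PSI above ~0.25 is a common rule of thumb for
    drift that warrants investigation."""
    # Bin edges taken from the reference distribution's percentiles
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf   # catch out-of-range live scores
    e_pct = np.histogram(expected, edges)[0] / len(expected)
    a_pct = np.histogram(actual, edges)[0] / len(actual)
    # Small floor avoids log(0) for empty bins
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

# Hypothetical credit scores: the live population has shifted upward
rng = np.random.default_rng(0)
dev_scores = rng.normal(600, 50, 10_000)   # scores at model development
live_scores = rng.normal(630, 60, 10_000)  # scores seen in production
psi = population_stability_index(dev_scores, live_scores)
print(f"PSI = {psi:.3f}")
```

Run on a schedule, a check like this turns "the world keeps changing" into a number a risk team can set thresholds and alerts against.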
“There is no perfect model. There never will be.”
– Jos Gheerardyn, CEO of Yields
This echoes a principle first articulated by statistician George Box: “All models are wrong, but some are useful.” In other words, models will always be simplifications of reality; what matters is how responsibly we manage and monitor them.
Where model risk shows up
Model risk is everywhere in modern banking. For example:
- Credit scoring models that approve loans
- AI tools that detect suspicious transactions
- Systems that price complex financial products
- Chatbots that answer customer questions
The danger often isn’t a broken model (e.g. a model that contains a methodological issue or an implementation error), but a model used in the wrong way. A credit model designed for mortgages, for instance, shouldn’t be used for small business lending without proper testing.
AI makes the risk bigger
With traditional models, banks had time to review and test things manually. But AI moves faster, and it’s harder to explain how these models work, especially with technologies like deep learning or generative AI.
Two things are happening at once:
- The models are more complex (harder to understand, harder to monitor)
- The number of models is growing fast (AI is built into more and more tools)
This means risk is growing in both depth and scale. Manual review just can’t keep up.
What banks need instead
To stay in control, banks need to move from one-time validation to continuous governance. That includes:
- Real-time monitoring: Is the model still working as expected?
- Context checks: Is it being used for the right purpose?
- Scenario testing: What happens in extreme or unusual cases?
- Alerts: Does the system flag unusual behaviour early?
This is not just about policy. It’s about having the right technology and systems in place.
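As a sketch of what "the right technology" can look like in practice, the four checks above can be expressed as automated threshold tests whose breaches are routed to the model risk team as alerts. The check names, metrics, and thresholds below are hypothetical examples, not a prescribed standard:

```python
from dataclasses import dataclass

@dataclass
class Check:
    name: str
    value: float        # the metric observed in production
    threshold: float    # the limit agreed with the model risk team
    higher_is_bad: bool = True

def run_governance_checks(checks):
    """Return the names of checks that breach their threshold,
    so they can be raised as alerts for human review."""
    alerts = []
    for c in checks:
        breached = (c.value > c.threshold) if c.higher_is_bad \
                   else (c.value < c.threshold)
        if breached:
            alerts.append(c.name)
    return alerts

# Illustrative metrics for a live credit-scoring model
checks = [
    Check("score_drift_psi", value=0.31, threshold=0.25),           # real-time monitoring
    Check("auc_on_recent_loans", 0.71, 0.65, higher_is_bad=False),  # still discriminating?
    Check("share_out_of_scope_requests", 0.08, 0.05),               # context / intended use
]
print(run_governance_checks(checks))
# → ['score_drift_psi', 'share_out_of_scope_requests']
```

The point isn’t the code itself but the shift it represents: validation criteria become machine-checkable rules that run continuously, rather than items on a one-time review checklist.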
Why this needs to be built in
Model governance shouldn’t be something that happens only at the end. It needs to be part of the full model lifecycle, from design and development to deployment and monitoring.
This means:
- Working across teams (data science, IT, risk, compliance)
- Using tools that help automate validation and tracking
- Making AI risk understandable and visible across the business
Regulation is evolving, but the responsibility is now
Rules like SR 11-7 (in the US) or SS1/23 (in the UK) provide guidance on managing model risk. But much of this guidance was written before AI reached its current level of complexity. Still, the principles remain the same: test thoroughly, document clearly, stay in control. Even if regulation is catching up, banks can’t afford to wait; the risk is already here.
Culture is just as important as tools
Managing model risk isn’t just a technical challenge, it’s a people challenge too. Often, model builders (like data scientists) and model users (like execs or risk managers) don’t speak the same language. When they don’t work together, risk increases. The solution? Break down silos. Get cross-functional teams involved. Make AI governance a shared responsibility.
Done right, it’s a competitive advantage
AI models that are well-governed are safer, faster to deploy, and more reliable. Poor governance leads to delays, compliance issues, and failed projects. Strong governance builds trust, with regulators, customers, and your own teams.
Listen to our Podcast with Monocle
This article is based on insights from the Monocle Banking Podcast featuring Jos Gheerardyn, CEO of Yields. Want the full conversation on model risk, AI governance, and how banks can turn control into advantage?