Model risk management (MRM) used to be a relatively stable discipline. Financial institutions primarily dealt with scorecards, capital models, and other statistical tools that were transparent, rarely changed, and could be validated periodically. That era has ended.
The arrival of predictive machine learning and generative AI has overturned this stability. Today’s models retrain frequently, learn from massive and often unstructured datasets, and behave in ways that cannot always be predicted or explained. The traditional playbook, while still valuable as a foundation, is no longer enough to provide assurance. This shift demands both a fundamental rethinking of governance and a radical strengthening of technical capabilities. These themes were central to the joint webinar on effective AI Model Risk Management, hosted by TCS and Yields on July 1, 2025, featuring Marc Taymans (Model Risk and AI Expert), Jos Gheerardyn (CEO and Co-founder of Yields), and Andrew Cross (Strategic Risk Advisor).
Complexity and unpredictability as the new baseline
Where complexity used to be an exception, in AI it has become the norm. Neural networks, deep learning systems and large language models (LLMs) cannot be manually tested in their entirety. They are dynamic, opaque, and evolve at a pace that traditional MRM processes cannot keep up with. The focus is therefore shifting away from fully understanding every parameter to ensuring that institutions maintain control in practice. Observability and responsiveness in production environments are becoming more important than dissecting every model component.
This creates new dimensions of risk. Bias, for example, has always been present in areas such as credit models, but AI systems can amplify it in unforeseen ways. Drift is another pressing issue: models that learn continuously are highly sensitive to changes in data or context, meaning that validation can no longer be a yearly exercise. Generative AI has added its own failure mode in the form of hallucinations: outputs that appear convincing but are entirely false. And finally, opacity: advanced AI models often function as black boxes, making explainability techniques not optional but critical.
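Drift, at least, lends itself to quantitative checks. Below is a minimal sketch using the population stability index (PSI), a metric long familiar from credit scorecard monitoring, to compare a model's training-time score distribution against live production scores. The bin count, the epsilon, and the "PSI above 0.2" rule of thumb are illustrative conventions, not regulatory requirements.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a reference sample and a live sample.

    Bin edges come from the reference distribution; a small epsilon
    keeps the log well-defined when a bin is empty.
    """
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # cover the full range
    eps = 1e-6
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected) + eps
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual) + eps
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 10_000)  # training-time scores
live = rng.normal(0.3, 1.0, 10_000)       # shifted production scores

psi = population_stability_index(reference, live)
# A common rule of thumb treats PSI above 0.2 as significant drift.
print(f"PSI = {psi:.3f}")
```

Run continuously rather than annually, a check like this turns drift from a surprise at the next validation cycle into a routine, observable signal.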
From periodic validation to continuous monitoring
Traditionally, validation was an episodic process. A model was approved, documented, and set aside until the next review cycle. With AI, that rhythm is no longer viable. Monitoring must be continuous, with anomalies detected and flagged in real time, and retraining pipelines triggered automatically when thresholds are crossed.
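The monitoring loop described above can be sketched in a few lines: a rolling window of recent outcomes, an alert threshold, and a callback that fires when performance breaches it. The threshold, window size, and callback wiring here are illustrative assumptions; in production the callback would open an alert or queue a retraining job.

```python
from collections import deque

class PerformanceMonitor:
    """Rolling-window accuracy monitor that fires a callback on breach."""

    def __init__(self, threshold=0.9, window=100, on_breach=None):
        self.threshold = threshold
        self.on_breach = on_breach or (lambda acc: None)
        self._hits = deque(maxlen=window)

    def record(self, prediction, actual):
        self._hits.append(prediction == actual)
        if len(self._hits) < self._hits.maxlen:
            return                      # warm-up: window not yet full
        acc = sum(self._hits) / len(self._hits)
        if acc < self.threshold:
            self.on_breach(acc)         # e.g. raise alert, queue retraining

alerts = []
monitor = PerformanceMonitor(threshold=0.9, window=50,
                             on_breach=alerts.append)
for step in range(200):
    # Simulate degradation: the model starts missing after step 100.
    monitor.record(prediction=1, actual=1 if step < 100 else 0)

print(f"threshold breaches: {len(alerts)}")
```

The point is not the specific metric but the pattern: thresholds are evaluated on every prediction, not once a year, and breaches trigger action automatically.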
This shift means risk management is no longer only about policy, governance and compliance. It has become an engineering challenge. Building the right monitoring infrastructure, automating tests, and developing observability dashboards are now as vital as the statistical expertise that used to dominate MRM.
Governance in transition
Organizationally, AI often enters financial institutions outside of traditional risk functions. Business lines experiment with AI tools, data science teams push innovations, and leadership seeks to capture opportunities quickly. Without coordination, this creates dual processes: on the one hand extensions of the established model risk framework, and on the other hand new AI governance tracks.
The institutions that manage this best are those that define a smart balance. Common processes such as inventories, risk tiering and documentation are shared across both tracks. Distinct processes, such as ethical assessments or vendor due diligence, can remain separate. Clear accountability not only prevents duplication but also avoids internal competition over “who gets to say yes” to a model.
The third-party challenge
Few organizations build their own LLMs. Most license them from technology providers, while others encounter them indirectly through third-party services that quietly embed AI. This creates two levels of exposure: direct reliance on external models, and hidden dependencies on vendor systems.
Effective governance must therefore extend beyond in-house models. Contracts need to address transparency, performance limitations, and liability. Institutions must integrate the use of third-party AI into their internal frameworks, even if they do not control the underlying models.
Engineering as the foundation
Perhaps the clearest message from the webinar was that AI risk management is, at its core, an engineering discipline. Governance provides the scaffolding, but reliability depends on how models are tested and monitored in practice.
Examples make this concrete. Chatbots used in customer service have already caused legal and reputational damage by inventing policies or licensing schemes. Such failures highlight the importance of scope control: ensuring that systems know what is within or outside their remit before answering a query. Similarly, a Japanese lunar lander crashed when a single sensor anomaly cascaded through the system, underscoring the need for robust system-level validation. These incidents demonstrate why stress testing, adversarial testing, and automated monitoring are not optional but essential.
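The scope-control idea can be made concrete with a guardrail that classifies a query before the model is allowed to answer. The topic list and keyword matching below are deliberately toy assumptions; a real deployment would use an intent classifier, but the structural point, refuse before answering rather than invent a policy, is the same.

```python
ALLOWED_TOPICS = {"billing", "account", "refund"}  # illustrative remit

def classify_topic(query: str) -> str:
    """Toy topic check; a real system would use an intent model."""
    q = query.lower()
    for topic in ALLOWED_TOPICS:
        if topic in q:
            return topic
    return "out_of_scope"

def answer(query: str) -> str:
    # Scope check BEFORE the model answers: out-of-remit queries are
    # refused instead of risking an invented policy.
    topic = classify_topic(query)
    if topic == "out_of_scope":
        return "I can only help with billing, account, or refund questions."
    return f"[handled by {topic} workflow]"

print(answer("How do I get a refund?"))
print(answer("What is your policy on crypto trading?"))
```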
Evolving skills and regulation
The people side of risk management is also changing. Teams that were once dominated by statisticians now need engineers, programmers, data specialists, and legal experts who understand privacy and compliance. Ethics and bias expertise is no longer a luxury but a necessity. AI risk touches a far broader set of stakeholders than traditional models ever did.
Regulators, too, are adapting. Standards such as SR 11-7 remain important, but they were designed for an earlier era. At the same time, broader frameworks like the EU AI Act introduce new expectations that cut across industries. Institutions cannot simply wait for detailed rules to be handed down; they must implement strong internal practices now, so that compliance becomes a natural by-product rather than a rushed exercise.
Toward a new playbook
AI is not just another category of models. It redefines the assumptions of model risk management itself. The journey ahead involves moving from static oversight to dynamic control, from periodic validation to continuous observability, and from narrow statistical expertise to interdisciplinary teams.
Effective AI model risk management is not merely a compliance exercise. It is becoming a strategic capability: one that protects institutions against novel risks while enabling them to unlock the full potential of artificial intelligence.

Couldn’t follow the webinar on Effective AI Model Risk Management live? No worries, we’ve made the full recording available for you.
Curious how your MRM setup stacks up? Book a demo or reach out; we’re happy to help.