Webinar Recap: Insights from Model Risk Management Expert Agus Sudjianto
In our recent webinar, we had the privilege of hosting Agus Sudjianto. Agus has been instrumental in building and leading one of the industry’s most sophisticated and active model risk management (MRM) teams at Wells Fargo. His experience spans decades, from his early days at Ford to his role at Wells Fargo. Here’s a detailed recap of the insightful conversation.
From Machine Learning PhD to Model Risk Management Pioneer
Agus Sudjianto’s journey began in the 1990s, when he earned his PhD in machine learning. With his expertise in neural networks and signal processing, he started at Ford, where he moved from machine learning into engineering, focusing on engine design. Although it wasn’t explicitly called MRM, building sound models and minimizing analytical risk was very important at Ford, because the cost of an error was so high. In 2013, Agus joined Wells Fargo, returning to the US from the UK, where he had been with Lloyds. This period coincided with the early implementation of SR 11-7, the US regulatory guidance for model risk management. Under his leadership, model risk management at Wells Fargo evolved from a compliance requirement into a strategic asset.
The Role of R&D in Model Validation
A key aspect of the team’s strategic success at Wells Fargo was the realization that the MRM team is the natural place to host the bank’s R&D activity. An important part of model validation is the creation of a challenger model: a model the validator builds to assess whether a different approach would yield similar results. Since challenger models are not used in a business-critical fashion, they are a great place to experiment with new techniques. By collaborating with academic experts and following the latest research, the team stays at the forefront of modeling and validation methodologies. Only after the validation team has built up significant expertise in a new technique does the approach graduate for use by the first line. In this way, Agus’ team had been working with techniques such as BERT long before the current hype around LLMs.
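To make the challenger idea concrete, here is a minimal sketch of a champion-versus-challenger comparison. The dataset, models, and metric are assumptions chosen for illustration, not the setup discussed in the webinar: a black-box champion is benchmarked against a simpler, interpretable challenger on the same holdout data.

```python
# Illustrative champion-vs-challenger comparison (assumed models and data):
# does a simpler, interpretable model reproduce the champion's performance
# on the same holdout set?
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

champion = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)  # black-box champion
challenger = LogisticRegression(max_iter=1000).fit(X_train, y_train)         # interpretable challenger

print(f"Champion AUC:   {roc_auc_score(y_test, champion.predict_proba(X_test)[:, 1]):.3f}")
print(f"Challenger AUC: {roc_auc_score(y_test, challenger.predict_proba(X_test)[:, 1]):.3f}")
# A small gap suggests the champion's extra complexity buys little;
# a large gap tells the validator where to probe further.
```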
PiML: The Python Interpretable Machine Learning Toolbox
The webinar also introduced PiML (Python Interpretable Machine Learning), an open-source toolbox designed to enhance model validation, especially in the context of AI. PiML addresses both conceptual soundness and outcome analysis, two critical aspects of any validation approach.
- Conceptual Soundness: Ensures data quality, suitability, and proper input control, with a focus on explainability and benchmarking against sound models.
- Outcome Analysis: Identifies model weaknesses, measures uncertainty, ensures robustness, and assesses performance under input drift. Together, these practices help in understanding and managing model risk effectively (a sketch of a simple robustness check of this kind follows below).
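As a concrete illustration of the outcome-analysis idea, the sketch below perturbs the test inputs with increasing amounts of noise and tracks how performance degrades. The data, model, and noise scheme are assumptions for the example; it does not use PiML’s own diagnostics.

```python
# Illustrative robustness check (assumed data and model, not PiML's own
# diagnostics): add increasing Gaussian noise to the test inputs and
# watch how the performance metric decays.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

rng = np.random.default_rng(0)
for noise_scale in (0.0, 0.1, 0.2, 0.5):
    # perturb each feature in proportion to its own standard deviation
    X_noisy = X_test + rng.normal(0.0, noise_scale, size=X_test.shape) * X_test.std(axis=0)
    auc = roc_auc_score(y_test, model.predict_proba(X_noisy)[:, 1])
    print(f"noise scale {noise_scale:.1f}: AUC = {auc:.3f}")
# A model whose performance collapses under mild perturbation has a
# weakness that outcome analysis is meant to surface before deployment.
```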
PiML incorporates essential elements such as data quality checks, input control, and benchmark modeling with inherently interpretable machine learning models. These features help ensure that models are not only accurate but also robust and reliable under a variety of conditions. During the webinar, Agus also mentioned that upcoming releases will add features aimed at the challenges of validating large language models (LLMs).
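For readers who want to try the toolbox, below is a minimal sketch of what a PiML low-code session looks like, based on the public PiML-Toolbox documentation rather than on the webinar itself; exact method names and arguments may differ between releases.

```python
# Minimal sketch of PiML's low-code workflow in a Jupyter notebook
# (assumed API per the public PiML-Toolbox docs; details may vary by release).
from piml import Experiment

exp = Experiment()
exp.data_loader(data="CoCircles")  # built-in demo data, or pass a pandas DataFrame
exp.data_summary()                 # conceptual soundness: inspect data quality
exp.data_prepare()                 # train/test split, sampling, input control
exp.model_train()                  # fit inherently interpretable benchmark models
exp.model_interpret()              # global and local interpretation of fitted models
exp.model_diagnose()               # outcome analysis: weak spots, robustness, resilience
```

In low-code mode each of these calls opens an interactive panel in the notebook, so a validator can step from data checks through to diagnostics without writing custom code.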
Conclusion
During the webinar, Agus provided a comprehensive overview of innovations in model risk management. By integrating R&D into the validation function and developing tools like PiML, the field is advancing towards more robust and reliable model validation practices. These efforts underscore the strategic importance of model risk management in adapting to future technological advancements and regulatory requirements.
For more insights on model risk management and to stay updated with the latest developments, subscribe to our newsletter and follow us on social media.