
Why checkbox compliance isn’t enough to mitigate AI Model Risk: A case for MRM-based AI Governance

As the deployment of artificial intelligence (AI) accelerates across industries, the need for robust governance and compliance frameworks has become crucial. Many AI governance solutions today offer what we refer to as “checkbox compliance,” where organizations assess their AI models’ adherence to regulatory standards, such as the EU AI Act, by filling out checklists that calculate a percentage of compliance. While this approach may be convenient, it falls short of addressing the core purpose of AI regulation: mitigating model risk.
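
To see how thin the resulting number is, consider a minimal sketch, in Python, of how such a tool typically computes its score. The checklist items below are hypothetical and purely illustrative, not drawn from any actual regulation or product:

```python
# Hypothetical checklist: each item is a yes/no control.
checklist = {
    "risk management system documented": True,
    "training data governance in place": True,
    "human oversight process defined": False,
    "technical documentation complete": True,
    "post-market monitoring plan": False,
}

# The "compliance score" is simply the fraction of boxes ticked.
score = 100 * sum(checklist.values()) / len(checklist)
print(f"Compliance: {score:.0f}%")  # -> Compliance: 60%
```

The single percentage says nothing about which risks remain open, how severe they are, or whether the ticked boxes actually reduce model risk in production.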

The limitations of checkbox compliance

The checkbox approach is attractive because of its simplicity. Much like cybersecurity compliance platforms (e.g., Vanta), these solutions give organizations a straightforward way to assess to what extent their AI systems comply with regulatory frameworks. However, checkbox compliance has several major drawbacks when applied to AI systems:

  • Superficial risk mitigation: Compliance checklists focus on legal requirements, but these requirements may not fully capture the specific, nuanced risks posed by AI models in practice. Risks such as bias, unfair outcomes, lack of robustness, and security vulnerabilities are often highly context-dependent and require a deeper technical understanding to mitigate. Merely complying with regulatory standards does not necessarily translate into safer, more reliable AI systems.
  • False sense of security: When organizations rely on percentages of compliance, they may believe their AI systems are safe when, in fact, the most critical risks may not have been addressed. This false sense of security can lead to complacency, with unchecked vulnerabilities in the models potentially resulting in ethical, financial, or reputational harm.
  • Lack of operational insight: Regulations tend to focus on high-level principles, but AI governance requires operational insights into how models perform, adapt, and interact with real-world data. Checkbox compliance, by its nature, often overlooks the ongoing monitoring and validation needed to ensure models remain safe and effective throughout their lifecycle.

Why Model Risk Management (MRM) is a superior approach

A more effective alternative is to borrow principles from Model Risk Management (MRM), an established framework used by financial institutions to manage and govern complex models. MRM is not just about ticking boxes but focuses on increasing the understanding of models through rigorous processes, policies, and standards. Here’s why an MRM-based approach is better suited to AI governance:

  • Deep understanding of models: MRM frameworks emphasize the need to understand model behavior in depth, which is critical for AI systems that are inherently complex. By focusing on model validation, stress-testing, and performance monitoring, MRM ensures that risks such as bias, drift, and ethical issues are continuously assessed and addressed (a drift-monitoring sketch follows this list).
  • Built-in regulatory compliance: Just as in finance, where following a well-designed MRM framework ensures compliance with regulations like Basel II or III, an MRM-based AI governance framework can be structured to ensure regulatory compliance as a byproduct. By embedding regulatory standards into governance policies and processes, compliance becomes automatic rather than a separate exercise.
  • Proactive risk management: While checkbox compliance is reactive, MRM encourages organizations to proactively manage risks by continuously monitoring and auditing models. This approach helps identify issues early, reducing the likelihood of model failures or regulatory breaches.
  • Adaptability to evolving risks: AI regulations are still evolving. An MRM-based framework is adaptable by design, meaning organizations can easily update their policies and procedures to stay compliant with new regulations. This flexibility is crucial as the AI regulatory landscape continues to mature.
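
To make the monitoring element concrete, here is a minimal sketch of the kind of ongoing check an MRM framework prescribes: the Population Stability Index (PSI), a drift metric with a long history in credit-risk model monitoring. The data and the alert thresholds below are illustrative assumptions:

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a reference and a live sample."""
    # Bin edges come from the reference (validation-time) distribution.
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    # Clip live values into the reference range so every point lands in a bin.
    actual = np.clip(actual, edges[0], edges[-1])

    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)

    # Floor the bin frequencies to avoid log(0) on empty bins.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 10_000)  # model scores at validation time
live = rng.normal(0.4, 1.2, 10_000)       # shifted scores in production

# Common rule of thumb: < 0.1 stable, 0.1-0.25 investigate, > 0.25 act.
print(f"PSI = {psi(reference, live):.3f}")
```

In a production setting, checks like this would run on a schedule across inputs, outputs, and performance metrics, with threshold breaches feeding into the model inventory and revalidation workflow.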

Addressing technological hurdles: Observability over interpretability

One common criticism of applying MRM principles to AI governance is the challenge of model interpretability. Many AI systems, especially deep learning models, function as “black boxes,” making it difficult to apply traditional interpretability methods required by regulations. However, MRM frameworks already deal with opaque vendor models in finance, and similar principles can be applied to AI governance by focusing on observability.

  • Observability: Instead of trying to interpret every aspect of a black-box model, observability focuses on monitoring the model’s outputs, performance metrics, and behavior in real time. By closely tracking inputs and outputs, organizations can detect anomalies, bias, or drift without needing to fully understand the inner workings of the model (see the sketch after this list).
  • Enhanced monitoring and validation: MRM frameworks emphasize stringent monitoring, especially when models are non-interpretable. Combining observability with advanced monitoring tools and automated validation processes can mitigate risks in AI systems, ensuring compliance without sacrificing the benefits of complex models.
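
As a simplified illustration of this output-focused stance, the sketch below wraps an opaque model and watches only what flows in and out. The alerting rule, a z-score of the rolling output mean against a validation-time baseline, is a deliberately coarse stand-in for the richer battery of checks a production monitoring stack would run:

```python
from collections import deque
from typing import Callable, Sequence

class ObservedModel:
    """Wraps a black-box model and monitors only its inputs and outputs."""

    def __init__(self, model: Callable[[Sequence[float]], float],
                 baseline_mean: float, baseline_std: float,
                 window: int = 500, z_threshold: float = 3.0):
        self.model = model                   # opaque: no access to internals
        self.baseline_mean = baseline_mean   # established during validation
        self.baseline_std = baseline_std
        self.outputs = deque(maxlen=window)  # rolling window of live outputs
        self.z_threshold = z_threshold

    def predict(self, features: Sequence[float]) -> float:
        y = self.model(features)
        self.outputs.append(y)
        self._check_outputs()
        return y

    def _check_outputs(self) -> None:
        if len(self.outputs) < self.outputs.maxlen:
            return  # wait for a full window of live data
        live_mean = sum(self.outputs) / len(self.outputs)
        # Coarse check: how far has the rolling output mean moved,
        # measured in units of the baseline output spread?
        z = abs(live_mean - self.baseline_mean) / self.baseline_std
        if z > self.z_threshold:
            # In practice: open an incident, trigger revalidation, etc.
            print(f"ALERT: output distribution shifted (z = {z:.1f})")
```

Because only inputs and outputs are observed, the same pattern extends naturally to bias checks: segment-level metrics can be computed on the logged traffic without any access to the model's internals.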

Challenges and hybrid solutions

While MRM-based governance has clear advantages, it’s not without challenges. Implementing this framework requires more resources, both in terms of human expertise and technical infrastructure, than a simple checkbox compliance tool. Smaller organizations or those without deep AI expertise may find it difficult to scale this approach.

However, the long-term benefits (reduced model risk, better compliance, and improved model performance) justify the investment. Additionally, hybrid approaches, where observability complements interpretability, can provide a balanced solution for organizations struggling to explain complex models while still managing risk.

Conclusion

AI systems are evolving rapidly, and so are the regulatory frameworks governing them. Relying on checkbox compliance may give organizations a false sense of security while failing to address the real risks associated with deploying AI at scale. By adopting a Model Risk Management (MRM) approach, organizations can embed compliance into their governance processes, achieve a deeper understanding of their AI systems, and more effectively mitigate risk.

It’s time for AI governance to move beyond ticking boxes and towards building a foundation of responsible, risk-managed AI.

About the author

Jos Gheerardyn has built the first FinTech platform that uses AI for real-time model testing and validation on an enterprise-wide scale. A zealous proponent of model risk governance & strategy, Jos is on a mission to empower quants, risk managers, and model validators with smarter tools to turn model risk into a business driver. Prior to his current role, he was active in quantitative finance, both as a manager and as an analyst.
