What is Risk Tiering?
Risk tiering is the process of ranking AI use cases according to their potential impact on your organisation. Borrowed from Model Risk Management (MRM) in the financial sector, this approach helps you determine the level of governance rigour each system requires.
Unlike broader AI risk assessments, which consider societal, ethical, or environmental implications, risk tiering focuses on internal criticality:
- How essential is this system to operations, compliance, and business continuity?
- How complex is it to understand, validate, and monitor?
The combination of these two dimensions, materiality and complexity, provides a structured, repeatable way to prioritise governance resources.
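To make this concrete, here is a minimal sketch (in Python, with illustrative names) of how a use case might be recorded with a rating on each dimension; the later sketches in this post build on the same two ratings.

```python
from dataclasses import dataclass


@dataclass
class AIUseCase:
    """Hypothetical inventory record for a single AI use case."""
    name: str
    materiality: str  # "high" or "low": impact on the organisation if the system fails
    complexity: str   # "high" or "low": difficulty of understanding, validating, monitoring


loan_scoring = AIUseCase(name="loan eligibility scoring", materiality="high", complexity="high")
```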
Dimension 1: Materiality
Materiality measures the importance or potential impact of an AI system on the organisation. High-materiality use cases tend to:
- Influence revenue, pricing, or capital
- Be subject to significant regulatory oversight
- Affect customer decisions, such as eligibility or risk classification
- Play a role in core products, decision processes, or infrastructure
If such systems fail or underperform, they can trigger severe financial, operational, or compliance consequences, making strong validation, documentation, and monitoring essential.
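One way to keep this assessment repeatable is a simple checklist: count how many of the indicators above apply and map the count to a rating. The indicator wording and the two-indicator threshold in this sketch are illustrative assumptions, not a prescribed standard.

```python
# Illustrative materiality checklist; the indicators and threshold are assumptions.
MATERIALITY_INDICATORS = (
    "influences revenue, pricing, or capital",
    "subject to significant regulatory oversight",
    "affects customer decisions such as eligibility or risk classification",
    "embedded in core products, decision processes, or infrastructure",
)


def materiality_rating(applicable_indicators: list[str]) -> str:
    """Return 'high' if two or more of the checklist indicators apply, else 'low'."""
    hits = sum(1 for ind in applicable_indicators if ind in MATERIALITY_INDICATORS)
    return "high" if hits >= 2 else "low"


print(materiality_rating([
    "influences revenue, pricing, or capital",
    "subject to significant regulatory oversight",
]))  # -> high
```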
Dimension 2: Complexity
Complexity reflects how difficult it is to understand, validate, and monitor the system. Factors include:
- The model type and architecture (e.g., deep learning vs. linear regression)
- Data volume, diversity, and quality
- Use of ensemble models or “black box” algorithms
- Frequency of updates or retraining
- Level of automation and integration in live systems
High-complexity systems carry a greater risk of unintended outcomes or governance failures, particularly if their decision logic is opaque or their performance is sensitive to subtle changes in input data.
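The same checklist idea works for complexity. The flags and the two-driver threshold below are again illustrative assumptions; your own rubric should reflect the factors listed above.

```python
# Illustrative complexity rubric; the flags and threshold are assumptions.
def complexity_rating(
    black_box_model: bool,      # e.g. deep learning or ensembles vs. linear regression
    frequent_retraining: bool,  # the model is updated or retrained on a short cycle
    high_data_variety: bool,    # large, diverse, or variable-quality input data
    fully_automated: bool,      # decisions feed live systems without human review
) -> str:
    """Return 'high' if two or more complexity drivers are present, else 'low'."""
    drivers = [black_box_model, frequent_retraining, high_data_variety, fully_automated]
    return "high" if sum(drivers) >= 2 else "low"


print(complexity_rating(True, True, False, False))  # -> high
```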
From Dimensions to Tiers
By assessing both materiality and complexity, you can assign each AI use case to a risk tier, typically:
- Tier 1 (High Risk): High materiality and high complexity; requires the most stringent controls and independent validation.
- Tier 2 (Moderate Risk): High materiality with low complexity, or low materiality with high complexity; requires proportionate governance measures.
- Tier 3 (Low Risk): Low materiality and low complexity; can be monitored with lighter processes.
This proportional approach ensures resources are allocated efficiently, without neglecting lower-risk systems that still require basic oversight.
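In code, the mapping from the two ratings to a tier is only a few lines. The sketch below mirrors the tier list above; the function name and return values are illustrative.

```python
def assign_tier(materiality: str, complexity: str) -> int:
    """Map 'high'/'low' ratings on both dimensions to a risk tier (1 = highest risk)."""
    if materiality == "high" and complexity == "high":
        return 1  # most stringent controls and independent validation
    if materiality == "high" or complexity == "high":
        return 2  # proportionate governance measures
    return 3      # lighter monitoring processes


print(assign_tier("high", "low"))  # -> 2
```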
Why Risk Tiering Matters
With AI adoption accelerating, it’s unrealistic to apply the same governance controls to every system. Risk tiering makes governance scalable by:
- Concentrating effort where failures would be most damaging
- Aligning governance rigour with regulatory expectations
- Creating a transparent, auditable prioritisation framework
It’s the bridge between knowing your AI inventory and managing it effectively.
Author
Jos Gheerardyn has built the first FinTech platform that uses AI for real-time model testing and validation on an enterprise-wide scale. A zealous proponent of model risk governance & strategy, Jos is on a mission to empower quants, risk managers, and model validators with smarter tools to turn model risk into a business driver. Prior to his current role, he was active in quantitative finance both as a manager and as an analyst.