As AI systems become more embedded in business processes, the question isn’t whether they pose risks; it’s how we systematically assess and manage those risks. This is where AI risk assessment plays a critical role. While risk tiering looks at the impact on the organization (e.g., financial exposure, complexity), AI risk assessment focuses on the potential impact on society, aligning directly with the principles and obligations of the EU AI Act.
A Regulatory Framework for AI Risk
The EU AI Act introduces a risk-based classification that segments AI systems into three categories:
1. Unacceptable Risk
These AI systems are outright banned due to their clear threat to human rights, safety, or democratic values.
Examples include:
- Social scoring by governments
- Real-time remote biometric identification in publicly accessible spaces (except for narrowly defined law enforcement cases)
- Manipulative systems targeting vulnerable groups (e.g., children)
🔒 Governance implication: These systems must not be developed or deployed under any circumstance.
2. High Risk
AI systems that affect people’s safety, rights, or access to essential services fall into this category.
Examples include:
- CV screening and employment-related AI
- Credit scoring models
- AI in law enforcement, justice, or migration
- AI-enabled medical devices
📋 Governance implication: These systems must comply with strict requirements, including:
- Risk and quality management
- Technical documentation
- Data governance practices
- Human oversight
- Post-market monitoring
- Potential registration in the EU public AI database
3. Non-High-Risk (Minimal Risk)
All other AI systems fall into this category. While not subject to the same legal obligations, organizations are still encouraged to implement internal governance and ethical safeguards.
🧭 Governance implication: Voluntary controls may include internal codes of conduct, model documentation, and oversight frameworks.
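
To see how these tiers might be operationalized in an organization’s own governance tooling, here is a minimal Python sketch. The RiskTier enum and OBLIGATIONS mapping are illustrative names that summarize the governance implications listed above; they are not structures defined by the Act itself.

```python
from enum import Enum

class RiskTier(Enum):
    """The EU AI Act risk categories as presented above (illustrative encoding)."""
    UNACCEPTABLE = "unacceptable"   # prohibited outright
    HIGH = "high"                   # strict compliance obligations
    MINIMAL = "minimal"             # voluntary governance encouraged

# Illustrative mapping from tier to governance obligations,
# paraphrasing the implications described in this article.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: [
        "Must not be developed or deployed under any circumstance",
    ],
    RiskTier.HIGH: [
        "Risk and quality management",
        "Technical documentation",
        "Data governance practices",
        "Human oversight",
        "Post-market monitoring",
        "Potential registration in the EU public AI database",
    ],
    RiskTier.MINIMAL: [
        "Voluntary: internal codes of conduct",
        "Voluntary: model documentation and oversight frameworks",
    ],
}
```

A governance workflow can then look up the applicable controls for a classified system, e.g. `OBLIGATIONS[RiskTier.HIGH]`.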
Why Assess All AI Use Cases?
A key (and often overlooked) requirement of the EU AI Act is that every AI system, regardless of risk level, must undergo this categorization. Even if a system is ultimately deemed “minimal risk,” organizations must demonstrate that the assessment was completed and documented.
In practice, this means:
- maintaining an inventory of all AI use cases across the organization;
- screening each use case against the Act’s risk categories; and
- documenting the outcome and rationale of every assessment, even when the result is “minimal risk.”
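
Because the Act requires evidence that the assessment actually happened, it helps to capture each classification as a structured, auditable record. The sketch below (reusing the RiskTier enum from the earlier sketch) is one possible shape for such a record; the AIRiskAssessment dataclass, its field names, and the sample use case are assumptions for illustration, not a schema prescribed by the Act.

```python
from dataclasses import dataclass, field
from datetime import date

# RiskTier is the illustrative Enum defined in the earlier sketch.

@dataclass
class AIRiskAssessment:
    """A documented classification record for one AI use case (illustrative)."""
    system_name: str
    description: str
    risk_tier: RiskTier      # outcome of the categorization
    rationale: str           # why this tier applies
    assessed_by: str
    assessed_on: date = field(default_factory=date.today)

# Even a minimal-risk system gets a documented record that can be
# produced as evidence that the assessment was completed.
record = AIRiskAssessment(
    system_name="Internal FAQ chatbot",
    description="Answers employee questions from a curated knowledge base",
    risk_tier=RiskTier.MINIMAL,
    rationale="No impact on safety, fundamental rights, or essential services",
    assessed_by="AI governance team",
)
```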
Author

Jos Gheerardyn has built the first FinTech platform that uses AI for real-time model testing and validation on an enterprise-wide scale. A zealous proponent of model risk governance & strategy, Jos is on a mission to empower quants, risk managers and model validators with smarter tools to turn model risk into a business driver. Prior to his current role, he was active in quantitative finance, both as a manager and as an analyst.