
The three levels of AI risk you need to know

Under the EU AI Act, every AI system must be assessed, not just the high-risk ones.
November 17, 2025
AI governance
AI risk management

As AI systems become more embedded in business processes, the question is no longer whether they pose risks but how we systematically assess and manage those risks. This is where AI risk assessment plays a critical role. While risk tiering looks at the impact on the organization (e.g., financial exposure, complexity), AI risk assessment focuses on the potential impact on society, aligning directly with the principles and obligations of the EU AI Act.

A Regulatory Framework for AI Risk

The EU AI Act introduces a risk-based classification that segments AI systems into three categories:

1. Unacceptable Risk

These AI systems are outright banned due to their clear threat to human rights, safety, or democratic values.
Examples include:

  • Social scoring by governments
  • Real-time biometric identification in public spaces (except specific law enforcement cases)
  • Manipulative systems targeting vulnerable groups (e.g. children)

Governance implication: These systems must not be developed or deployed under any circumstance.

2. High Risk

AI systems that affect people’s safety, rights, or access to essential services fall into this category.
Examples include:

  • CV screening and employment-related AI
  • Credit scoring models
  • AI in law enforcement, justice, or migration
  • AI-enabled medical devices

Governance implication: These systems must comply with strict requirements, including:

  • Risk and quality management
  • Technical documentation
  • Data governance practices
  • Human oversight
  • Post-market monitoring
  • Potential registration in the EU public AI database

3. Non-High-Risk (Minimal Risk)

All other AI systems fall into this category. While not subject to the same legal obligations, organizations are still encouraged to implement internal governance and ethical safeguards.

Governance implication: Voluntary controls may include internal codes of conduct, model documentation, and oversight frameworks.
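The three tiers above can be captured in an AI inventory as a simple classification structure. The sketch below is purely illustrative: the use-case names and the mapping are our own simplified reading of the examples in this article, not a legal determination, which always requires case-by-case expert judgment.

```python
from enum import Enum

class RiskLevel(Enum):
    """The three EU AI Act risk tiers described above."""
    UNACCEPTABLE = "unacceptable"   # prohibited outright
    HIGH = "high"                   # strict compliance obligations
    MINIMAL = "minimal"             # voluntary safeguards encouraged

# Illustrative mapping from use-case categories (drawn from the
# examples above) to risk tiers. A real assessment requires legal
# review; this lookup is a simplification for demonstration only.
USE_CASE_TIERS = {
    "social_scoring": RiskLevel.UNACCEPTABLE,
    "realtime_biometric_id": RiskLevel.UNACCEPTABLE,
    "cv_screening": RiskLevel.HIGH,
    "credit_scoring": RiskLevel.HIGH,
    "medical_device": RiskLevel.HIGH,
    "spam_filter": RiskLevel.MINIMAL,
}

def classify(use_case: str) -> RiskLevel:
    """Return the risk tier for a known use case.

    Unknown use cases default to HIGH so they trigger a full
    review rather than silently passing as minimal risk.
    """
    return USE_CASE_TIERS.get(use_case, RiskLevel.HIGH)
```

Defaulting unknown systems to the high-risk tier is a conservative design choice: it forces a documented assessment before any system can be treated as minimal risk.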

Why Assess All AI Use Cases?

A key (and often overlooked) requirement of the EU AI Act is that every AI system, regardless of risk level, must undergo this categorization. Even if a system is ultimately deemed “minimal risk,” organizations must demonstrate that the assessment was completed and documented.

In practice, this means:

  • Stronger internal awareness of how AI systems affect external stakeholders
  • No shortcuts based on assumptions about risk levels
  • Full auditability across your AI inventory

About the Author(s)
Jos Gheerardyn
CEO and Co-founder

Jos Gheerardyn is the co-founder and Chief Executive Officer (CEO) of Yields. Prior to his current role, he worked as both a manager and an analyst in the field of quantitative finance. With nearly 20 years of experience, he has worked with leading international investment banks and start-up companies. Jos is the author of multiple patents that apply quantitative risk management techniques to the energy balancing market. Jos holds a PhD in superstring theory from the University of Leuven.


Subscribe to the Yields Newsletter

If you would like to know more about the Yields AI Governance or Model Risk Management solution, let's get in touch!
