The three levels of AI risk you need to know

Not all AI systems pose the same risks. From banned applications to high-risk use cases under strict regulation, here’s what you need to know about assessing AI risk.


As organisations adopt AI at scale, one of the central challenges is managing its risks, not only for internal operations, but also for individuals, society, and the environment. While risk tiering helps determine how critical an AI system is to your own business, AI risk assessment looks outward. It asks: what impact could this system have on people and society at large?

This distinction is crucial because regulatory frameworks, including the EU AI Act, require organisations to classify AI systems based on their societal risks and apply governance measures accordingly.

Three levels of AI risk

The EU AI Act introduces a risk-based categorisation of AI systems into three levels:

1. Unacceptable risk

Some AI systems are simply prohibited because they pose a clear threat to safety, rights, or democratic values.
Examples include:

  • Government-run social scoring systems
  • Real-time biometric identification in public spaces (with limited law enforcement exceptions)
  • Manipulative AI designed to exploit vulnerable groups, such as children

Governance action: ensure that no AI use cases fall into this category.

2. High risk

These systems have a significant impact on people’s rights, safety, or access to essential services. They are allowed, but only under strict conditions.
Examples include:

  • Employment-related AI (e.g. CV screening tools)
  • Credit scoring and creditworthiness assessments
  • AI-enabled medical devices
  • AI used in law enforcement, justice, education, or migration control

Governance requirements include:

  • A risk and quality management system
  • Detailed technical documentation
  • Robust data governance practices
  • Human oversight measures
  • Post-market monitoring

Governance action: high-risk systems must undergo conformity assessment, maintain compliance documentation, and in some cases be registered in the EU’s public database.

3. Non-high risk

All other AI systems fall into this category. While the EU AI Act does not impose specific mandatory requirements here, organisations are encouraged to apply voluntary governance measures to strengthen trust, transparency, and accountability.

Governance action: document that a risk assessment has been performed, even if no high-risk obligations apply.
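To make the tiering concrete, here is a minimal sketch of how an internal AI inventory entry might record a system's risk level and the corresponding governance action described above. The class, field, and mapping names are illustrative assumptions for this article, not part of any standard or library.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum


class RiskLevel(Enum):
    """Risk levels as categorised in this article (per the EU AI Act)."""
    UNACCEPTABLE = "unacceptable"  # prohibited: must not be deployed
    HIGH = "high"                  # allowed only under strict conditions
    NON_HIGH = "non_high"          # voluntary governance measures encouraged


# Illustrative mapping of each level to the governance action described above.
GOVERNANCE_ACTIONS = {
    RiskLevel.UNACCEPTABLE: "Block the use case; no deployment permitted.",
    RiskLevel.HIGH: ("Run conformity assessment, maintain compliance "
                     "documentation, register in the EU database if required."),
    RiskLevel.NON_HIGH: "Document that a risk assessment has been performed.",
}


@dataclass
class AIUseCaseAssessment:
    """One entry in a hypothetical internal AI inventory."""
    use_case: str
    risk_level: RiskLevel
    assessed_on: date
    assessor: str
    notes: str = ""

    @property
    def governance_action(self) -> str:
        return GOVERNANCE_ACTIONS[self.risk_level]


# Example: a CV screening tool falls under the high-risk examples above.
entry = AIUseCaseAssessment(
    use_case="CV screening for recruitment",
    risk_level=RiskLevel.HIGH,
    assessed_on=date(2024, 6, 1),
    assessor="AI governance office",
)
print(entry.governance_action)
```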

Why AI risk assessment matters

AI risk assessment is more than a regulatory checkbox. It ensures that your organisation:

  • Protects fundamental rights by identifying systems that could harm individuals or society
  • Builds trust with customers, regulators, and the public through proactive governance
  • Demonstrates compliance by documenting that every AI use case has been assessed, including those classified as non-high risk

As regulators increase scrutiny, being able to show a clear, auditable record of AI risk assessments will become a critical element of responsible AI adoption.
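As a simple illustration of what such an auditable record could look like, the sketch below appends each assessment as a timestamped JSON line to a log file. The file name, field names, and format are assumptions made for the example, not a prescribed or regulatory format.

```python
import json
from datetime import datetime, timezone

# Hypothetical append-only log of AI risk assessments (JSON Lines format).
LOG_PATH = "ai_risk_assessments.jsonl"


def record_assessment(use_case: str, risk_level: str,
                      assessor: str, rationale: str) -> None:
    """Append one assessment to the audit log with a UTC timestamp."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "use_case": use_case,
        "risk_level": risk_level,  # e.g. "unacceptable", "high", "non_high"
        "assessor": assessor,
        "rationale": rationale,
    }
    with open(LOG_PATH, "a", encoding="utf-8") as log:
        log.write(json.dumps(record) + "\n")


# Example: recording that a non-high-risk chatbot was still assessed.
record_assessment(
    use_case="Internal HR policy chatbot",
    risk_level="non_high",
    assessor="AI governance office",
    rationale="No impact on rights, safety, or access to essential services.",
)
```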

Managing AI Risk in Practice

eBook

Ready to Take Control of AI Risk?

Download our practical guide to AI governance, built on a decade of real-world experience. Discover how to operationalise AI governance with clarity, structure, and confidence.

Author


Jos Gheerardyn has built the first FinTech platform that uses AI for real-time model testing and validation on an enterprise-wide scale. A zealous proponent of model risk governance and strategy, Jos is on a mission to empower quants, risk managers, and model validators with smarter tools to turn model risk into a business driver. Prior to his current role, he was active in quantitative finance both as a manager and as an analyst.