Global AI Regulations and Governance: Key Standards and Who Oversees AI
February 20, 2026

Who Oversees AI Governance Globally?

As artificial intelligence becomes increasingly embedded in business processes, products, and decision-making, governments and international bodies are developing frameworks to ensure that AI systems are safe, trustworthy, and used responsibly. Unlike traditional regulated sectors, AI governance spans multiple industries and use cases, creating a complex and evolving landscape for corporates worldwide.

AI governance is currently shaped by a mix of international “soft law” (principles and standards) and “hard law” (binding regulations). In practice, these layers work together: standards provide common approaches for governance and risk management, while regulations impose legal obligations on organisations developing or using AI in specific contexts or jurisdictions.

International Standardisation and Reference Frameworks: Establishing Common Practices

International standardisation organisations play a central role in establishing shared terminology, management frameworks, and technical requirements for AI systems. Bodies such as ISO and IEC develop voluntary standards that help organisations structure AI governance, risk management, lifecycle controls, and continuous improvement. ISO/IEC 42001, for example, specifies requirements for an Artificial Intelligence Management System (AIMS) that organisations can implement to support responsible development and use of AI.

In parallel, national standards bodies and public-sector institutions also shape widely adopted reference frameworks. The U.S. National Institute of Standards and Technology (NIST), for instance, publishes practical frameworks to help organisations identify, assess, and manage AI risks across the AI lifecycle. These frameworks are not laws, but they are frequently used as common “control language” by organisations and increasingly referenced in governance and compliance programmes.

Policy and Monitoring Bodies: Coordinating Expectations Without a Single Global Regulator

At a global level, there is currently no single overarching regulator for AI. Instead, international and multilateral bodies contribute guidance and convergence through principles, policy recommendations, and monitoring initiatives, while enforcement remains the responsibility of national or regional authorities wherever binding regulations exist. 

As a result, corporates typically face a multi-layered environment: global principles and standards provide direction and common practices, but legal obligations and enforcement mechanisms depend on the jurisdictions in which AI systems are developed, deployed, or have impact.

Brief Summaries of Key AI Governance Frameworks and Regulations

ISO/IEC 42001: A Management System Approach to AI Governance

ISO/IEC 42001 is an international standard that specifies requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS). It is designed for organisations that provide or use AI-based products and services and aims to embed AI governance into familiar management-system disciplines (policies, roles and responsibilities, controls, internal review, and continual improvement).

The philosophy of ISO/IEC 42001 is therefore organisational and operational: it helps corporates build repeatable governance processes across AI use cases, align stakeholders (risk, compliance, IT, business), and demonstrate consistent management of AI-related risks over time.

NIST AI Risk Management Framework (AI RMF): A Risk Function Model Across the AI Lifecycle

The NIST AI Risk Management Framework (AI RMF) provides guidance to better manage risks to individuals, organisations, and society associated with AI systems. It is structured around four core functions—Govern, Map, Measure, and Manage—to organise risk management activities across the AI lifecycle.

Compared with a management-system standard, NIST AI RMF is more explicitly “risk-workflow” oriented. It focuses on practical risk identification, context mapping, measurement/assessment, and mitigation activities that can be applied to specific AI systems and use cases. Many organisations use it to define controls and evidence that AI risks are understood and managed, even when regulatory obligations differ across jurisdictions.
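To illustrate this "risk-workflow" orientation, the sketch below shows a minimal, hypothetical internal risk register organised around the four RMF functions. The class and field names are illustrative assumptions for this article, not part of the framework itself; a real programme would track far richer evidence per activity.

```python
from dataclasses import dataclass, field
from enum import Enum

# The four core functions of the NIST AI RMF.
class RmfFunction(Enum):
    GOVERN = "Govern"
    MAP = "Map"
    MEASURE = "Measure"
    MANAGE = "Manage"

@dataclass
class RiskEntry:
    """One risk-management activity recorded for a specific AI system."""
    system: str            # AI system or use case
    function: RmfFunction  # which RMF function the activity supports
    activity: str          # e.g. "bias evaluation", "policy sign-off"
    evidence: str = ""     # link or note showing the activity was done

@dataclass
class RiskRegister:
    """Hypothetical register grouping activities by RMF function."""
    entries: list[RiskEntry] = field(default_factory=list)

    def add(self, entry: RiskEntry) -> None:
        self.entries.append(entry)

    def by_function(self, fn: RmfFunction) -> list[RiskEntry]:
        return [e for e in self.entries if e.function == fn]

    def gaps(self, system: str) -> list[RmfFunction]:
        """RMF functions with no recorded activity for a given system."""
        covered = {e.function for e in self.entries if e.system == system}
        return [fn for fn in RmfFunction if fn not in covered]

register = RiskRegister()
register.add(RiskEntry("credit-scoring-model", RmfFunction.GOVERN,
                       "AI policy approved by risk committee"))
register.add(RiskEntry("credit-scoring-model", RmfFunction.MEASURE,
                       "quarterly fairness metrics review"))

# Which functions still lack documented activities for this system?
print([fn.value for fn in register.gaps("credit-scoring-model")])  # ['Map', 'Manage']
```

A gap report of this kind is one simple way organisations turn the four functions into concrete evidence that each AI system's risks have been mapped, measured, and managed, not just governed on paper.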

AI Regulations with Broad or Extraterritorial Scope

Alongside voluntary frameworks, binding AI regulations are emerging that can have broad, cross-border impact. These regimes often adopt a risk-based approach, imposing stricter requirements on AI systems that may affect safety, fundamental rights, or significant societal outcomes.

A leading example is the EU AI Act, which applies not only to organisations established in the EU, but also to certain organisations outside the EU where AI outputs are used in the Union. It is designed to promote human-centric and trustworthy AI while ensuring a high level of protection of health, safety, and fundamental rights.

For corporates operating globally, the practical implication is that AI governance programmes increasingly need to support requirements such as documentation, risk management, transparency, human oversight, and post-deployment monitoring—not only to meet local obligations, but also to manage cross-border exposure where AI-enabled products or services reach regulated markets.

Local and Sector-Specific AI Regulations

In addition to broad AI regimes, a growing number of jurisdictions are introducing local or sector-specific rules, often targeting particular risk areas such as employment, consumer protection, discrimination, or transparency for certain AI applications.

Examples include regulations at city or state level in the United States, such as New York City’s rules governing automated employment decision tools (including bias audit and notice requirements), as well as state-level frameworks like Colorado’s SB24-205, which introduces obligations for developers and deployers of “high-risk” AI systems to mitigate foreseeable risks of algorithmic discrimination (effective from 2026).

Outside the US, binding requirements may also emerge through targeted rules addressing specific AI activities. For example, China has implemented enforceable measures governing algorithmic recommendation services and “deep synthesis” technologies, and has also introduced regulatory requirements affecting generative AI services.

In parallel, some jurisdictions—such as Singapore—have published widely used guidance frameworks to support responsible adoption (even when not framed as AI-specific “hard law”). These can still shape corporate expectations through procurement, sectoral guidance, and supervisory practice.

In Australia, the government has consulted on potential mandatory "guardrails" for AI in high-risk settings, signalling a direction of travel toward more formal requirements even where the regulatory approach may be phased or built on existing legal frameworks.

Cross-Sector Digital, Data and AI Governance Requirements

AI governance does not exist in isolation. Corporates increasingly need to align AI-specific controls with broader digital, cybersecurity, and data governance requirements. These can affect AI systems indirectly through expectations on data access and quality, cybersecurity risk management, third-party dependencies, and accountability for automated systems.

In the EU, for example, the Data Act introduces harmonised rules on fair access to and use of data—an important development for data-driven products and AI-enabled services. While not an AI law, it reinforces the need for stronger data governance and data-sharing controls as part of a broader compliance and risk posture.

Together, these developments push corporates toward integrated governance models in which AI risk management connects with enterprise risk, compliance, security, and data governance—supporting both responsible AI adoption and readiness for evolving regulatory expectations.

Navigating the AI Regulatory Landscape

The global AI regulatory environment is evolving rapidly, with new standards, laws, and supervisory expectations emerging each year. For corporates, understanding this landscape is no longer only a legal concern; it is becoming a strategic issue affecting innovation, trust, and operational resilience.

Establishing clear AI governance, aligning with recognised frameworks, and preparing for increasing regulatory scrutiny are increasingly essential steps for organisations seeking to deploy AI responsibly and at scale.

About the Author

Sébastien Viguié
QA Tester & Co-founder

Sébastien Viguié is the co-founder of Yields, the first FinTech platform leveraging AI for enterprise-scale model testing and validation. A strong advocate of model risk governance and strategy, he focuses on helping financial institutions embed trust, transparency, and compliance into their AI and model lifecycle. Previously CISO at Yields, Sébastien gained hands-on experience reconciling cybersecurity principles with model risk management and AI governance, a perspective he now extends to emerging regulatory frameworks such as ISO, NIST, and the EU AI Act. Before founding Yields, he worked as a front-office quantitative analyst at BNP Paribas, where he developed a deep understanding of model development and validation in fast-paced trading environments, expertise that continues to inform his pragmatic approach to responsible AI and risk management today.
