Research Report | February 2026 | v1.3

    Global AI Governance & Risk Readiness Report 2026

    Evidence-based evaluation of AI risk, compliance, and governance obligations across major jurisdictions — for boards, compliance officers, and regulators

    Authors: Alice Labs Research (AI-Assisted Research)

    Key figures:
    - 2026-08-02: EU AI Act general application date
    - 85 public sources, curated for authority
    - 42% of enterprises have deployed AI; governance burden rising
    - 9+ jurisdictions mapped (EU, US, China, UK, SG, JP…)

    Experimental AI Research (Beta): This report was generated with AI assistance as part of our ongoing exploration of AI-powered research and analysis. The content has been reviewed and edited by humans, but may contain errors or inaccuracies.

    Please verify critical data points independently. All claims cite public sources for transparency and reproducibility. This is not peer-reviewed academic research – treat findings as exploratory insights requiring further validation.

    Cite This Report

    Alice Labs Research. (2026). Global AI Governance & Risk Readiness Report 2026 (Version 1.3). Alice Labs. https://alicelabs.ai/reports/global-ai-governance-risk-readiness-2026
    Version 1.3 • Published February 17, 2026

    Executive Summary

    Global AI governance in 2026 is best understood as a convergence of binding regulation (EU-led, China sectoral controls, U.S. state laws), government operational policy (U.S. federal memos and executive orders; UK transparency standards), and audit-ready standards and assurance frameworks (ISO/IEC 42001, ISO/IEC 23894, NIST AI RMF, AI Verify, OWASP).

    For boards and compliance leaders, "risk readiness" in 2026 is dominated by time-bound obligations: the EU AI Act's Chapters I–II applied on 2025-02-02; obligations for general-purpose AI providers entered application on 2025-08-02; and the Act's general application date is 2026-08-02, with further phased items reaching into 2027.

    Meanwhile, the U.S. federal approach underwent a documented shift: EO 14110 was revoked on 2025-01-20 by EO 14148, and subsequent OMB memoranda (M-25-21 and M-25-22) reframe federal AI use and acquisition governance. This shift coincides with an intensified federal-state tension, evidenced by conflicts around state AI proposals and Colorado's delayed AI law effective date (now 2026-06-30).

    In parallel, AI-adjacent cybersecurity regimes (e.g., EU CRA) introduce security and vulnerability handling duties that intersect directly with AI supply chains. Enterprise AI adoption is already at scale — IBM/Morning Consult reports 42% deploying and 40% exploring in Nov 2023 — increasing regulators' emphasis on operational controls, not principles alone.

    Key Findings

    12 data-driven insights

    01. EU AI Act general application is 2026-08-02, but key chapters applied earlier in 2025

    Chapters I–II applied 2025-02-02; GPAI Chapter V applied 2025-08-02; general 2026-08-02

    Converts readiness into an immediate, phased compliance program rather than a single deadline.

    02. EU GPAI obligations entered application in 2025-08, with transition deadlines to 2027-08

    Pre-existing GPAI models have until 2027-08-02 to comply

    GPAI providers must begin compliance immediately; transition window creates dual-track obligations.

    03. EO 14110 was rescinded on 2025-01-20, demonstrating rapid executive-branch governance shifts

    EO 14148 revoked EO 14110; confirmed by NIST and Federal Register

    Executive-branch AI governance can shift within a single political cycle — durable governance requires standards-based approaches.

    04. OMB M-25-21 and M-25-22 reset federal agency AI governance and procurement

    M-25-21 rescinds/replaces M-24-10; M-25-22 governs AI acquisition

    Procurement becomes a primary governance lever for federal AI.

    05. Colorado delayed its AI law effective date to 2026-06-30

    SB24-205 obligations extended by SB25B-004

    Confirms the volatility of first-generation U.S. state AI statutes.

    06. China's generative AI measures became effective 2023-08-15

    Interim Measures issued 2023-07-10, effective 2023-08-15

    Represents early binding controls on public-facing generative AI services.

    07. Singapore's AI Verify operationalizes governance principles into testable checks

    11 AI governance principles assessed through technical tests and process checks

    Reflects global trend toward measurable assurance, not just policy statements.

    08. ISO/IEC 42001 (2023-12) positions AI governance as a management system

    Management system standard enabling auditable governance structure

    Structurally compatible with audit and continuous improvement programs.

    Source: ISO

    09. Council of Europe's AI Convention opened for signature 2024-09-05

    First legally binding international AI treaty

    Sets human rights-based framing as a binding international baseline.

    10. Enterprise AI is already in production at scale

    42% deploying, 40% exploring (IBM/Morning Consult, Nov 2023)

    Increases regulators' emphasis on operational controls, not principles alone.

    11. EU Cyber Resilience Act creates phased compliance horizon intersecting with AI

    General application 2027-12-11; partial application in 2026

    AI-enabled products face parallel security readiness deadlines.

    12. Governance readiness is a governance-and-evidence problem, not a principles problem

    Inventories, impact assessments, incident response, and assurance recur across all major regimes

    Organizations must shift from narrative governance to measurable, artifact-based compliance.

    Source: Cross-regime analysis

    Definitions, Scope & Entity Architecture

    AI governance & risk readiness is an organization's ability to identify, control, document, and continuously monitor the legal, ethical, security, and operational risks of AI systems and AI models across their lifecycle — so the organization can meet regulatory obligations, audit expectations, incident reporting duties, and board oversight requirements as laws and standards evolve.

    Core Entities

    | Term | Definition |
    |---|---|
    | AI system | Operational deployments that influence decisions or environments |
    | GPAI model | Models with broad reuse; includes many foundation models |
    | Provider / developer | Entity building or placing systems/models on market |
    | Deployer organization | Entity using AI for consequential decisions |
    | AI management system (AIMS) | Requirements for establishing and maintaining AI governance controls |
    | High-risk AI system | AI systems used in consequential decisions with algorithmic discrimination duties |
    | AI Verify | Voluntary AI governance testing framework — 11 principles via tests and process checks |
    | Governing body / board | Oversight responsibility for AI use — effective, efficient, and acceptable |
    | Assurance artifacts | Impact assessments, risk assessments, transparency statements, audit reports |

    Definition Divergence Across Jurisdictions

    How key AI governance terms differ across regimes — a compliance harmonization challenge

    | Concept | EU AI Act | Colorado SB24-205 |
    |---|---|---|
    | AI system | Machine-based system with autonomy, inference capability | Algorithmic system for consequential decisions |
    | High-risk | Annex III categories (health, credit, employment, etc.) | Consequential decisions with discrimination risk |
    | Provider | Entity placing system on market or into service | Developer of high-risk AI system |
    | Deployer | Entity using AI in consequential context | Deployer using high-risk AI for decisions |
    | Transparency duty | Art. 13 (user disclosure) + Art. 52 (interaction notice) | Impact assessment + notices to affected persons |

    Harmonization strategy: Internal policy should adopt the broadest credible definition of each term to ensure coverage across all binding regimes. Use ISO/IEC 22989 as the terminology baseline and map regime-specific divergences in a compliance appendix. This prevents "definition arbitrage" where narrow interpretation creates compliance gaps.
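    The compliance appendix described above can be sketched as a small lookup structure. The register below is a hypothetical illustration (the precedence ordering and the choice of regimes are assumptions, not an official schema); definitions are quoted from the divergence table in this report.

    ```python
    # Hypothetical definition-harmonization register. Each governance term maps
    # to regime-specific definitions; internal policy adopts the broadest one.
    REGISTER = {
        "AI system": {
            "EU AI Act": "Machine-based system with autonomy, inference capability",
            "Colorado SB24-205": "Algorithmic system for consequential decisions",
        },
        "Deployer": {
            "EU AI Act": "Entity using AI in consequential context",
            "Colorado SB24-205": "Deployer using high-risk AI for decisions",
        },
    }

    # Precedence used to pick the controlling (broadest credible) definition;
    # this ordering is an assumption for illustration only.
    BROADEST_FIRST = ["EU AI Act", "Colorado SB24-205"]

    def controlling_definition(term: str) -> tuple[str, str]:
        """Return (regime, definition) for the broadest mapped definition."""
        definitions = REGISTER[term]
        for regime in BROADEST_FIRST:
            if regime in definitions:
                return regime, definitions[regime]
        raise KeyError(f"No mapped definition for term: {term}")

    regime, text = controlling_definition("AI system")
    print(f"{regime}: {text}")
    ```

    Keeping the per-regime texts side by side (rather than only the adopted one) preserves the divergence evidence auditors may ask for.
    
    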

    Governance & Risk Readiness Scoreboard

    The scoreboard compiles 20 key governance instruments, dates, and indicators that drive risk readiness programs globally. Each metric includes confidence levels: High for official legal texts, Medium for translations and survey data.


    | Indicator | Value | Year | Confidence |
    |---|---|---|---|
    | EU AI Act — General Application | 2026-08-02 | 2026 | High |
    | EU AI Act — Chapters I–II Applied | 2025-02-02 | 2025 | High |
    | EU AI Act — GPAI Chapter V Applied | 2025-08-02 | 2025 | High |
    | GPAI Pre-existing Models Deadline | 2027-08-02 | 2027 | High |
    | CoE AI Convention — Opened | 2024-09-05 | 2024 | High |
    | EO 14110 — Rescinded | 2025-01-20 | 2025 | High |
    | OMB M-25-21 — Issued | 2025-04-03 | 2025 | High |
    | OMB M-25-22 — Issued | 2025-04-03 | 2025 | High |
    | Colorado SB24-205 — Effective Date | 2026-06-30 | 2026 | High |
    | China Generative AI Measures | 2023-08-15 | 2023 | High |
    | China Algorithm Recommendation | 2022-03-01 | 2022 | Medium |
    | China Deep Synthesis Provisions | 2023-01-10 | 2023 | Medium |
    | Singapore AI Verify Launch | 2022-05-25 | 2022 | High |
    | ISO/IEC 42001 Published | 2023-12 | 2023 | High |
    | ISO/IEC 23894 Published | 2023-02 | 2023 | High |
    | NIST AI RMF 1.0 Published | 2023-01-26 | 2023 | High |
    | NIST GenAI Profile Released | 2024-07-26 | 2024 | High |
    | EU CRA — General Application | 2027-12-11 | 2027 | High |
    | Enterprise AI Deploying | 42% | 2023 | Medium |
    | Enterprise AI Exploring | 40% | 2023 | Medium |

    Interpretation

    The scoreboard is date-and-obligation oriented because 2025–2027 deadlines are the dominant readiness driver for boards and compliance. The convergence of EU AI Act general application, CRA partial application, and U.S. state law effective dates in 2026 makes this a critical compliance planning year.

    EU AI Act & Cyber Resilience Act

    The EU AI Act (Regulation 2024/1689) is the world's most comprehensive binding AI regulation. Its phased application schedule is the single most important compliance calendar for globally exposed organizations:

    | Date | What Applies |
    |---|---|
    | 2025-02-02 | Chapters I–II (general provisions; prohibited practices) |
    | 2025-08-02 | Chapter V (GPAI), specified chapters, penalties, codes |
    | 2026-08-02 | General application of the AI Act |
    | 2027-08-02 | Article 6(1) obligations; GPAI transition deadline for pre-existing models |

    Compliance Deadline Timeline

    Key dates for EU AI Act, CRA, and U.S. state law application — color-coded by urgency

    - 2025-02: EU AI Act Ch I–II (EU)
    - 2025-08: GPAI Chapter V (EU)
    - 2026-06: Colorado SB24-205 (US)
    - 2026-06: EU CRA (partial) (EU)
    - 2026-08: EU AI Act General (EU)
    - 2026-09: EU CRA (partial) (EU)
    - 2027-08: GPAI Transition (EU)
    - 2027-12: EU CRA General (EU)

    Legend: already applied (2025) | imminent (2026) | upcoming (2027)

    Critical Compliance Deadlines

    Days remaining until major regulatory obligations take effect

    - Colorado SB24-205 (US, 2026-06-30, 125 days): algorithmic discrimination duties for high-risk AI systems
    - EU CRA partial application (EU, 2026-06-11, 106 days): reporting obligations for actively exploited vulnerabilities
    - EU AI Act general application (EU, 2026-08-02, 158 days): full application of the EU AI Act across all categories
    - GPAI transition deadline (EU, 2027-08-02, 523 days): pre-existing GPAI models must comply with Chapter V
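    The countdowns above can be recomputed mechanically. A minimal sketch, assuming a snapshot date of 2026-02-25 (inferred from the published day counts; substitute the current date in practice):

    ```python
    from datetime import date

    # Assumed snapshot date; the report's counters (125/106/158/523 days)
    # are consistent with this reference point.
    AS_OF = date(2026, 2, 25)

    # Deadlines as cited in this report.
    DEADLINES = {
        "Colorado SB24-205": date(2026, 6, 30),
        "EU CRA partial application": date(2026, 6, 11),
        "EU AI Act general application": date(2026, 8, 2),
        "GPAI transition deadline": date(2027, 8, 2),
    }

    for name, deadline in sorted(DEADLINES.items(), key=lambda kv: kv[1]):
        remaining = (deadline - AS_OF).days
        print(f"{name}: {remaining} days")
    ```

    Wiring this into the compliance calendar keeps board reporting consistent as dates shift (as Colorado's delay shows they do).
    
    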

    The Cyber Resilience Act (CRA) adds parallel security obligations for products with digital elements. Partial application begins in 2026 (June and September), with general application on 2027-12-11. For AI-enabled products, CRA and AI Act compliance programs must be coordinated.

    The Council of Europe Framework Convention on AI (opened for signature 2024-09-05) is positioned as the first legally binding international AI treaty, embedding human rights, democracy, and rule of law requirements across the AI lifecycle.

    Penalty Structures

    The EU AI Act establishes a tiered penalty regime: up to €35M or 7% of global annual turnover for prohibited AI practices, up to €15M or 3% for non-compliance with high-risk obligations, and up to €7.5M or 1% for incorrect information. These penalties are designed to be proportionate and dissuasive, explicitly modeled on GDPR's enforcement approach.

    EU AI Act Penalty Structure

    Tiered administrative fines modeled on GDPR's enforcement approach — whichever is higher applies

    - Tier 1 (prohibited AI practices): up to €35M or 7% of turnover
    - Tier 2 (high-risk system non-compliance): up to €15M or 3% of turnover
    - Tier 3 (incorrect information to authorities): up to €7.5M or 1% of turnover

    Note: For SMEs and startups, the lower of the two amounts applies. Penalties are designed to be proportionate and dissuasive.
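    A minimal sketch of the "whichever is higher" fine logic (lower of the two amounts for SMEs), using the tier figures cited above; this is an illustration of the arithmetic, not legal advice:

    ```python
    # (fixed cap in EUR, fraction of global annual turnover) per tier,
    # as cited in this report.
    TIERS = {
        "prohibited_practice": (35_000_000.0, 0.07),
        "high_risk_noncompliance": (15_000_000.0, 0.03),
        "incorrect_information": (7_500_000.0, 0.01),
    }

    def max_fine(tier: str, annual_turnover: float, is_sme: bool = False) -> float:
        """Maximum administrative fine for a tier and turnover figure."""
        fixed_cap, pct = TIERS[tier]
        turnover_cap = annual_turnover * pct
        # SMEs/startups: lower of the two amounts; otherwise the higher applies.
        return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

    # A company with EUR 1bn turnover: 7% = EUR 70M, above the EUR 35M fixed cap.
    print(max_fine("prohibited_practice", 1_000_000_000))        # 70000000.0
    print(max_fine("prohibited_practice", 1_000_000_000, True))  # 35000000.0
    ```

    The turnover-linked cap means exposure scales with company size, which is why penalty modeling belongs in the board-level risk appetite discussion.
    
    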

    Serious Incident Reporting

    Under the EU AI Act, providers of high-risk AI systems must report "serious incidents" — events involving death, serious damage to health, property, or environment, or serious and irreversible disruption in the management of critical infrastructure — to market surveillance authorities. This obligation applies from general application (2026-08-02) and requires documented incident response pathways that integrate with existing cybersecurity and product safety reporting.

    AI Incident Response Integration

    How AI-specific incident reporting integrates with cybersecurity and product safety obligations

    EU AI Act

    • Trigger: Death, serious health/property damage, critical infrastructure disruption
    • Who: Provider of high-risk AI system
    • To whom: Market surveillance authority
    • When: From general application (2026-08-02)

    EU CRA

    • Trigger: Actively exploited vulnerability in product with digital elements
    • Who: Manufacturer of digital product
    • To whom: ENISA + national CSIRT
    • Timeline: 24h early warning → 72h analysis

    AI-Specific Threats

    • Prompt injection (OWASP LLM01)
    • Training data poisoning (OWASP LLM03)
    • Model theft (OWASP LLM10)
    • Adversarial evasion (MITRE ATLAS)

    Unified Incident Response Workflow

    Detect → Classify (AI / Cyber / Product) → Escalate (24h if CRA) → Report (to authority) → Remediate & Document

    The CRA adds parallel vulnerability reporting requirements: manufacturers must notify ENISA of actively exploited vulnerabilities within 24 hours and provide full analysis within 72 hours — creating dual reporting obligations for AI-enabled products.
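    The 24h/72h clocks can be pre-staged as simple deadline arithmetic in the incident workflow. A hedged sketch (the detection timestamp and label names are illustrative):

    ```python
    from datetime import datetime, timedelta

    def cra_reporting_deadlines(detected_at: datetime) -> dict[str, datetime]:
        """CRA clocks from detection of an actively exploited vulnerability:
        24h early warning and 72h full analysis, per the duties cited above."""
        return {
            "early_warning_to_enisa": detected_at + timedelta(hours=24),
            "full_analysis_to_enisa": detected_at + timedelta(hours=72),
        }

    # Illustrative detection timestamp.
    detected = datetime(2026, 9, 14, 9, 30)
    for step, deadline in cra_reporting_deadlines(detected).items():
        print(f"{step}: {deadline:%Y-%m-%d %H:%M}")
    ```

    Because the same incident may also qualify as an AI Act "serious incident", the classification step in the unified workflow should evaluate both triggers before either clock expires.
    
    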

    U.S. Federal & State AI Governance

    The U.S. federal approach underwent a documented policy reset in 2025: Executive Order 14110 ("Safe, Secure, and Trustworthy AI") was rescinded on 2025-01-20 by EO 14148. The replacement framework comprises:

    • EO 14179 ("Removing Barriers…") — innovation-first posture
    • OMB M-25-21 (2025-04-03) — rescinds M-24-10; new federal agency AI governance
    • OMB M-25-22 (2025-04-03) — efficient acquisition of AI in government
    • EO "National Policy Framework" (2025-12-11) — federal preemption posture opposing state fragmentation

    At the state level, Colorado SB24-205 (algorithmic discrimination duties for high-risk AI) was delayed to 2026-06-30 by SB25B-004. Utah's HB286 proposes frontier-model transparency requirements but remains under active political pressure.

    The federal-state tension creates genuine compliance friction for multinational organizations: federal posture challenges state fragmentation while states continue to legislate independently.

    China: Sectoral AI Controls

    China has implemented sectoral, platform-focused binding controls on AI:

    • Algorithm Recommendation Provisions (effective 2022-03-01) — require providers to establish systems for algorithm security, ethics review, monitoring, and incident response
    • Deep Synthesis Provisions (effective 2023-01-10) — govern generation/editing of text, images, audio, video, virtual scenes
    • Generative AI Interim Measures (effective 2023-08-15) — binding controls on public-facing generative AI services

    These are complemented by China's Personal Information Protection Law (PIPL, effective 2021-11-01) and Data Security Law (effective 2021-09-01), creating a dense regulatory layer for AI services operating in or serving the Chinese market.

    International & Voluntary Frameworks

    Key international governance instruments beyond binding regulation:

    • OECD AI Principles (adopted 2019-05-22) — first intergovernmental standard on AI
    • UNESCO Ethics of AI Recommendation (adopted 2021-11-23) — standard-setting ethics instrument
    • UNGA Resolution A/RES/78/265 (2024-03-21) — safe, secure, trustworthy AI for sustainable development
    • G7 Hiroshima Process Guiding Principles (2023-10-30) — advanced AI system governance

    Voluntary but operationally significant frameworks:

    • Singapore — Model AI Governance Framework 2.0, AI Verify (11-principle testing), and new Agentic AI governance framework (2026-01)
    • Japan — AI Guidelines for Business v1.0 (2024-04-19, voluntary, lifecycle-oriented)
    • Australia — AI Ethics Principles (voluntary, 8 principles since 2019)
    • UK — Pro-innovation approach via sector regulators + Algorithmic Transparency Recording Standard (ATRS, mandatory for government since 2025)
    • Canada — AIDA (Bill C-27) ended via prorogation; governance remains fragmented
    • Brazil — PL 2338/2023 approved by Senate, pending Chamber; high uncertainty

    Governance Instruments by Jurisdiction

    Binding vs voluntary AI governance mechanisms across 9+ jurisdictions

    [Bar chart: counts of binding instruments vs voluntary frameworks (scale 0–3) across EU, USA, China, UK, Singapore, Japan, Australia]

    Source: Cross-regime analysis of 85 public sources, Alice Labs Research, 2026

    Regulatory Urgency Heatmap: 2026–2027

    Quarterly compliance pressure by jurisdiction — based on binding deadlines and enforcement readiness signals

    [Heatmap: quarterly compliance pressure, Q1'26–Q4'27, for EU, USA (Federal), USA (States), China, UK, and Singapore. Legend: Critical = binding deadline; High = partial application or enforcement; Medium = active compliance preparation; Low = monitoring only]

    Key insight: Q3 2026 is the single most compliance-dense quarter globally — EU AI Act general application (Aug 2), Colorado SB24-205 already effective (Jun 30), and EU CRA partial application underway. Organizations should complete readiness programs by Q1 2026.

    Evidence-Based Landscape Map

    | Jurisdiction | Instrument | Binding? |
    |---|---|---|
    | EU | AI Act (Reg 2024/1689) | Binding |
    | EU | Cyber Resilience Act | Binding |
    | Council of Europe | AI Convention | Treaty |
    | USA (federal) | EO reset + OMB M-25-21/M-25-22 | Executive |
    | USA (state) | Colorado SB24-205 | Binding |
    | China | Generative AI Measures | Binding |
    | China | Algorithm Recommendation | Binding |
    | UK | Pro-innovation + ATRS | Policy |
    | Singapore | AI Verify + MGF + Agentic framework | Voluntary |
    | Japan | AI Guidelines for Business v1.0 | Voluntary |

    Standards & Assurance Frameworks

    Audit-ready standards and assurance frameworks form the operational backbone of governance:

    | Standard / Framework | Published | Focus |
    |---|---|---|
    | ISO/IEC 42001 | 2023-12 | AI management system (auditable governance) |
    | ISO/IEC 23894 | 2023-02 | AI risk management guidance |
    | ISO/IEC 38507 | 2022-04 | AI governance implications for boards |
    | ISO/IEC 22989 | 2022-07 | AI terminology / definitions baseline |
    | NIST AI RMF 1.0 | 2023-01-26 | Voluntary cross-sector risk management |
    | NIST AI 600-1 | 2024-07-26 | GenAI companion profile |
    | AI Verify | 2022-05-25 | 11-principle testing/assurance toolkit |
    | OWASP LLM Top 10 | Living doc | LLM application-layer threats |
    | MITRE ATLAS | Living doc | Adversarial tactics for ML systems |

    Standards Crosswalk: ISO 42001 ↔ NIST AI RMF ↔ AI Verify

    How the three dominant assurance frameworks map across governance domains — enabling multi-standard compliance

    | Domain | ISO/IEC 42001 | NIST AI RMF | AI Verify |
    |---|---|---|---|
    | System Governance | Clause 4–10 (AIMS) | GOVERN function | Principle 1: Transparency |
    | Risk Assessment | Clause 6.1 (risk/opp) | MAP function | Principle 5: Robustness |
    | Controls & Monitoring | Annex A controls | MANAGE function | Principles 2–4 |
    | Performance Evaluation | Clause 9 (monitoring) | MEASURE function | Principle 8: Accountability |
    | Continuous Improvement | Clause 10 (PDCA) | Ongoing review | Process checks |
    | Incident Response | Annex A.6.2.8 | MANAGE 4.1–4.2 | Not explicitly covered |
    | Data Governance | Annex A.8 (data) | MAP 3.4–3.5 | Principle 6: Data quality |

    Practical implication: Organizations selecting ISO/IEC 42001 as their management system backbone can cross-map NIST AI RMF functions for risk taxonomy depth and AI Verify principles for testable governance checks — creating a complementary three-layer assurance stack without redundant effort.
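    One way to make the crosswalk queryable is a nested mapping keyed by governance domain. The excerpt below mirrors two rows of the crosswalk; treat the mapping itself as this report's analysis, not an official concordance between the standards.

    ```python
    # Excerpt of the crosswalk as data: domain -> framework -> clause/function.
    # None marks a domain a framework does not explicitly cover.
    CROSSWALK = {
        "Risk Assessment": {
            "ISO/IEC 42001": "Clause 6.1 (risk/opp)",
            "NIST AI RMF": "MAP function",
            "AI Verify": "Principle 5: Robustness",
        },
        "Incident Response": {
            "ISO/IEC 42001": "Annex A.6.2.8",
            "NIST AI RMF": "MANAGE 4.1-4.2",
            "AI Verify": None,  # not explicitly covered
        },
    }

    def coverage_gaps(domain: str) -> list[str]:
        """Frameworks with no explicit mapping for a governance domain."""
        return [fw for fw, ref in CROSSWALK[domain].items() if ref is None]

    print(coverage_gaps("Incident Response"))  # ['AI Verify']
    ```

    Encoding the crosswalk as data (rather than a static table) lets the compliance function flag domains where the chosen assurance stack needs supplementary controls.
    
    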

    Agentic AI: Emerging Governance Challenges

    Agentic AI — AI systems with autonomous decision-making, tool use, and transaction capabilities — creates governance challenges that go beyond traditional AI system oversight:

    • Autonomy escalation: Agentic systems can chain actions, invoke external tools, and transact on behalf of users — creating liability gaps that current governance frameworks don't fully address
    • Singapore's Agentic AI Framework (published 2026-01): First dedicated governance framework for agentic AI, extending the Model AI Governance Framework with specific controls for autonomous operation, tool-use boundaries, and human oversight requirements
    • OWASP implications: Agentic systems introduce attack surfaces beyond the LLM Top 10, including prompt injection via tool outputs, unauthorized transaction execution, and multi-step reasoning attacks

    Governance implication: Organizations deploying AI agents must extend their governance artifacts to cover: (1) tool-use authorization boundaries, (2) transaction approval thresholds, (3) human-in-the-loop escalation triggers, and (4) audit trails that capture multi-step reasoning chains. Current ISO/IEC 42001 management systems can accommodate these through extended risk assessment and control design.
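    Two of those artifacts, tool-use authorization boundaries and transaction approval thresholds with human-in-the-loop escalation, can be sketched as a small policy check. The tool names and the threshold below are hypothetical assumptions for illustration:

    ```python
    # Hypothetical agentic policy parameters (illustrative, not normative).
    ALLOWED_TOOLS = {"search", "calendar", "payments"}   # tool-use boundary
    APPROVAL_THRESHOLD_EUR = 500.0                       # transaction threshold

    def authorize_action(tool: str, amount_eur: float = 0.0) -> str:
        """Return 'allow', 'escalate' (human-in-the-loop), or 'deny'."""
        if tool not in ALLOWED_TOOLS:
            return "deny"        # outside the authorized tool-use boundary
        if amount_eur > APPROVAL_THRESHOLD_EUR:
            return "escalate"    # transaction above the approval threshold
        return "allow"

    print(authorize_action("payments", 1200.0))  # escalate
    print(authorize_action("shell"))             # deny
    ```

    In practice each decision would also be appended to an audit trail so multi-step reasoning chains remain reconstructable, per artifact (4) above.
    
    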

    Frontier AI Developer Safety Governance

    Self-imposed safety frameworks from major AI developers — voluntary, with limited external audit

    | Organization | Framework | Status |
    |---|---|---|
    | OpenAI | Preparedness Framework | Internal |
    | Anthropic | Responsible Scaling Policy | Internal |
    | Google DeepMind | Frontier Safety Framework | Internal |
    | Meta | AI Risk Assessment Framework | Internal |
    | xAI | Limited disclosure | Unknown |

    Governance gap: Frontier AI developer safety frameworks are voluntary, self-defined, and lack independent external audit obligations. Third-party evaluations (e.g., Foundation Model Transparency Index) consistently show that frontier developers disclose limited information about risk assessment processes and downstream impact monitoring. The EU AI Act's GPAI provisions (Chapter V) are the first binding attempt to impose transparency and safety evaluation obligations on frontier model providers.

    AI Governance Maturity Model

    A shared "readiness ladder" for boards, compliance, and regulators — mapped to ISO/IEC 42001, ISO/IEC 23894, and regulator-driven artifacts:


    1. Ad hoc: AI in pockets; informal controls
    2. Defined: documented policy; roles assigned
    3. Managed: risk controls operationalized
    4. Measured: KPIs/metrics; assurance program
    5. Assured: continuous improvement; multi-jurisdiction

    Mapping note: Most enterprises with AI in production are between Level 1–2. EU AI Act general application (2026-08-02) effectively mandates Level 3+ for high-risk systems. ISO/IEC 42001 certification aligns with Level 5.

    | Level | Description |
    |---|---|
    | Ad hoc | AI exists in pockets; informal controls |
    | Defined | Documented AI policy; roles assigned |
    | Managed | Risk controls operationalized; documented lifecycle |
    | Measured | KPIs/metrics; assurance program |
    | Assured | Continuous improvement; multi-jurisdiction compliance |

    The Adoption vs. Readiness Gap

    Enterprise AI deployment far outpaces governance maturity — creating systemic compliance risk

    - 42% AI deployed
    - 40% AI exploring
    - 12% governance mature

    Implementation gap: Only ~12% of enterprises have governance maturity matching their AI deployment scale. The remaining 82% face regulatory exposure as the EU AI Act's general application date of 2026-08-02 approaches. Source: IBM Global AI Adoption Index (Nov 2023); governance maturity estimate from cross-regime analysis.

    Control Architecture: Board & Compliance Checklist

    Board-Level Governance Controls

    • Accountability assignment — identify executive owner and escalation path, consistent with ISO/IEC 38507 and OMB memos emphasizing designated AI leadership roles
    • Risk appetite statement for AI — define unacceptable uses, required review thresholds, and severity levels
    • Oversight of external commitments — distinguish between voluntary frameworks, treaties, and binding obligations with explicit conflict handling

    Operational Compliance Controls

    • System/model inventory — mandatory prerequisite for almost all other controls; align fields to multi-regime evidence needs
    • Impact assessments — for consequential decisions and high-risk domains, plus GPAI documentation where relevant
    • Data governance and provenance controls — including training data governance, anticipating EU transparency/copyright debate
    • Security controls for AI — incorporate OWASP LLM Top 10 (app-layer) and MITRE ATLAS (adversarial tactics)
    • Incident readiness — integrate "serious incident" reporting concepts and cybersecurity incident processes
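    A minimal inventory record supporting the controls above might look like the following. Field names are illustrative assumptions aligned to multi-regime evidence needs, not a mandated schema:

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class AIInventoryRecord:
        """One entry in the unified system/model/vendor inventory (sketch)."""
        system_id: str
        name: str
        kind: str                      # "system", "gpai_model", "vendor_service"
        provider: str
        risk_class: str                # e.g. "high-risk", "gpai", "minimal"
        jurisdictions: list[str] = field(default_factory=list)
        impact_assessment_done: bool = False
        incident_contact: str = ""

    def missing_assessments(records: list[AIInventoryRecord]) -> list[str]:
        """High-risk entries lacking a completed impact assessment."""
        return [r.system_id for r in records
                if r.risk_class == "high-risk" and not r.impact_assessment_done]

    inventory = [
        AIInventoryRecord("sys-001", "Credit scoring", "system", "VendorX",
                          "high-risk", ["EU", "US-CO"]),
        AIInventoryRecord("sys-002", "Internal search", "system", "in-house",
                          "minimal", ["EU"], impact_assessment_done=True),
    ]
    print(missing_assessments(inventory))  # ['sys-001']
    ```

    Because the inventory is the prerequisite for classification, assessments, and reporting, even a simple structured register like this unlocks the rest of the control set.
    
    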

    Cross-Regime Governance Convergence

    Which governance artifacts are required or expected across major jurisdictions and standards

    [Matrix: the following artifacts mapped against EU, US, China, UK, SG, and ISO regimes — AI system inventory, impact assessments, transparency notices, incident response, risk classification, data governance, security controls, audit trail]

    Note: ✓ = explicitly required or strongly expected. Coverage based on binding instruments and primary voluntary frameworks.

    AI Governance Artifact Lifecycle

    End-to-end governance workflow mapped to ISO/IEC 42001 PDCA cycle and cross-regime evidence requirements

    1. Plan: AI policy; risk appetite statement; compliance calendar (ISO 42001 Cl. 4–6)
    2. Inventory: system/model register; vendor inventory; risk classification (EU AI Act Art. 9, 61)
    3. Assess: impact assessment; data governance docs; threat model (NIST MAP function)
    4. Control: security controls; access management; monitoring (ISO 42001 Annex A)
    5. Test: red teaming; benchmark results; model cards (AI Verify / NIST MEASURE)
    6. Report: incident reports; transparency notices; audit trail (EU AI Act Art. 62, CRA)
    7. Improve: lessons learned; KPI trends; management review (ISO 42001 Cl. 10)

    Continuous PDCA cycle: Plan → Do → Check → Act → Plan

    Board-Level AI Governance KPIs

    Minimum metrics for executive oversight of AI risk and compliance programs

    | KPI | Target | Description | Cadence |
    |---|---|---|---|
    | AI Inventory Coverage | 100% | % of AI systems/models documented in central inventory | Monthly |
    | Impact Assessments Completed | 100% high-risk | Completion rate for consequential AI deployments | Per deployment |
    | Incident Response Time | <72 hours | Mean time to classify and report AI-related incidents | Per incident |
    | Compliance Calendar Adherence | >95% | On-time delivery against phased compliance milestones | Quarterly |
    | Governance Maturity Level | Level 3+ | Self-assessed maturity vs 5-level readiness model | Bi-annually |
    | Third-party Audit Findings | Zero critical | Open critical/high findings from assurance audits | Annual |
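    The KPI targets above can be wired into a simple status check. Thresholds mirror the table; the reading names and the sample values are invented for illustration:

    ```python
    def kpi_status(readings: dict[str, float]) -> dict[str, str]:
        """Compare KPI readings to the board targets cited in this report."""
        targets = {
            "inventory_coverage_pct": lambda v: v >= 100,        # 100%
            "high_risk_assessments_pct": lambda v: v >= 100,     # 100% high-risk
            "mean_incident_response_hours": lambda v: v < 72,    # <72 hours
            "calendar_adherence_pct": lambda v: v > 95,          # >95%
            "maturity_level": lambda v: v >= 3,                  # Level 3+
            "open_critical_findings": lambda v: v == 0,          # zero critical
        }
        return {k: ("on-target" if targets[k](v) else "off-target")
                for k, v in readings.items()}

    # Hypothetical quarterly readings.
    sample = {
        "inventory_coverage_pct": 87.0,
        "mean_incident_response_hours": 48.0,
        "open_critical_findings": 2,
    }
    print(kpi_status(sample))
    ```

    Emitting a per-KPI status rather than a single pass/fail keeps board reporting actionable at the metric level.
    
    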

    AI Governance Readiness Checklist

    Minimum controls for audit-ready compliance across EU AI Act, CRA, U.S., and international standards — derived from cross-regime analysis

    Governance

    - Executive AI accountability owner designated (all regimes) [critical]
    - AI risk appetite statement approved by board (ISO 42001 / EU AI Act) [critical]
    - Phased compliance calendar adopted (Art. 113 / CRA Art. 71) (EU) [critical]
    - Quarterly board reporting on AI governance metrics (ISO 38507) [high]

    Inventory & Classification

    - Unified AI system/model/vendor inventory established (all regimes) [critical]
    - Risk classification applied (high-risk, GPAI, prohibited) (EU AI Act) [critical]
    - Impact assessments completed for high-risk deployments (EU / Colorado) [critical]

    Data & Security

    - Training data governance and provenance documented (EU AI Act / GPAI) [high]
    - OWASP LLM Top 10 threat assessment completed (security best practice) [high]
    - MITRE ATLAS threat modeling integrated (security best practice) [medium]

    Incident & Reporting

    - Serious incident reporting pathway pre-staged (EU AI Act / CRA) [critical]
    - Tabletop exercise conducted for AI incidents (best practice) [high]
    - CRA vulnerability reporting (24h/72h) workflow ready (EU CRA) [high]

    Assurance & Testing

    - Management system backbone selected (e.g., ISO 42001) (global) [high]
    - Third-party testing or red-teaming program initiated (AI Verify / NIST) [medium]
    - Model cards / system documentation standardized (NIST AI RMF) [medium]

    Procurement

    - AI risk warranties in vendor contracts (OMB M-25-22 / EU) [high]
    - Audit rights for AI system design and training data (EU deployer obligations) [high]
    - Data use limitations (no retraining on buyer data) (PIPL / GDPR) [medium]

    Practical note: Large enterprises should target all critical + high items before EU AI Act general application (2026-08-02). SMEs can prioritize critical items and scale proportionally to risk exposure. This checklist maps to maturity Level 3 (Managed) in the readiness model.

    Procurement & Vendor Due Diligence

    Procurement is emerging as a primary governance lever — OMB M-25-22 explicitly governs AI acquisition in government, and EU AI Act deployer obligations create contractual demands on providers. Minimum procurement controls include:

    • AI risk warranties — contractual representation that the AI system has undergone risk assessment and meets applicable regime requirements
    • Audit rights — buyer's right to audit, inspect, or receive documentation about AI system design, training data provenance, and testing results
    • Incident notification — vendor must notify buyer of AI-related incidents within contractually specified timeframes
    • Data use limitations — explicit prohibitions on vendor reuse of buyer data for model training, consistent with data governance expectations across regimes

    Internal-Use vs Customer-Facing AI Governance

    | Dimension | Internal-Use AI | Customer-Facing AI |
    |---|---|---|
    | Risk classification | Often lower-risk (analytics, reporting) | Frequently high-risk (decisions affecting people) |
    | Transparency obligations | Internal documentation, employee notices | External transparency notices, user disclosures |
    | Incident reporting | Internal escalation pathways | Regulatory reporting + customer notification |
    | Data governance | Internal data policies sufficient | Customer data protection, consent management |
    | Testing requirements | Internal validation acceptable | Independent testing/red teaming expected |
    | Liability exposure | Employment/discrimination law | Product liability + regulatory fines |

    Recommendations

    Board & Executive Committee

    1. Adopt a phased compliance calendar anchored to EU AI Act Article 113 and CRA Article 71, and require quarterly reporting against it
    2. Mandate a unified AI inventory (systems + models + vendors) as the governance "source of truth"
    3. Require an "evidence pack" for high-impact deployments: impact assessment, data governance notes, testing results, and incidents/near-misses log
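A phased compliance calendar of this kind reduces to a small countdown over the application dates this report tracks. The sketch below hard-codes those dates; the milestone labels are this report's shorthand, not official names:

```python
from datetime import date

# Application dates cited in this report (EU AI Act phases per Article 113; CRA).
MILESTONES = {
    "EU AI Act: Chapters I-II (prohibitions)": date(2025, 2, 2),
    "EU AI Act: Chapter V (GPAI obligations)": date(2025, 8, 2),
    "EU AI Act: general application": date(2026, 8, 2),
    "EU AI Act: Art. 6(1) / GPAI transition": date(2027, 8, 2),
    "CRA: general application": date(2027, 12, 11),
}


def quarterly_report(today: date) -> list[str]:
    """One status line per milestone, sorted by date, for board reporting."""
    lines = []
    for label, due in sorted(MILESTONES.items(), key=lambda kv: kv[1]):
        delta = (due - today).days
        status = "PASSED" if delta < 0 else f"{delta} days left"
        lines.append(f"{due.isoformat()}  {label}: {status}")
    return lines


for line in quarterly_report(date(2026, 2, 17)):
    print(line)
```

Run against this report's publication date, the general application milestone shows 166 days remaining, which is the kind of concrete figure a quarterly board pack should carry.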

    Compliance & Risk Function

    1. Select a management-system backbone (e.g., ISO/IEC 42001) and cross-map to jurisdiction obligations
    2. Implement AI-specific security controls using OWASP LLM Top 10 and MITRE ATLAS to update threat modeling
    3. Use externalized testing/assurance patterns (AI Verify-like checklists, benchmarks) to shift from narrative to measurable governance

    Regulator-Facing Preparedness

    1. Document conflict handling (e.g., U.S. federal vs Colorado; EU obligations vs vendor reluctance)
    2. Pre-stage incident reporting pathways for the AI Act and cybersecurity regimes; run tabletop exercises

    SME vs Large Enterprise Governance

    Large enterprises should target Level 4–5 maturity with dedicated AIMS programs and independent assurance. SMEs with lower-risk AI deployments can prioritize Level 2–3: documented policy, inventory, basic impact assessments, and incident awareness — scaled proportionally to risk exposure.

    Outlook & 2026–2027 Planning

    Near-Term Compliance Horizon (Highest Urgency)

    • EU AI Act general application approaches 2026-08-02 — "pilot governance" is no longer defensible for EU-exposed operators
    • EU CRA early application dates in 2026 (June, September) create parallel security readiness deadlines
    • U.S. continued fragmentation risk — federal preemption posture conflicts with state legislation; governance baseline must absorb state-level increments
    • AI assurance becoming "tool-ized" — regulators and procurement will increasingly expect testable evidence, not just policies
    • Agentic AI governance gap — autonomous AI systems with tool-use capabilities outpace existing regulatory definitions; Singapore's 2026-01 framework is the first dedicated response

    Quarterly Update Cadence

    • Q2 2026: EU guidance revisions, U.S. state legislative outcomes, Colorado/Utah trajectory
    • Q3 2026: EU AI Act general application (2026-08-02) operational impacts and enforcement signals
    • Q4 2026: CRA partial application milestones and AI incident reporting convergence

    Frequently Asked Questions

    When does the EU AI Act apply?

    The EU AI Act applies in phases: Chapters I–II (general provisions and prohibited practices) applied from 2025-02-02. Chapter V (GPAI obligations) applied from 2025-08-02. General application is 2026-08-02. Article 6(1) obligations and GPAI transition deadlines extend to 2027-08-02.

    What is 'risk readiness' for AI governance?

    AI governance risk readiness is an organization's demonstrated ability to identify, control, document, and continuously monitor the legal, ethical, security, and operational risks of AI systems — measured through governance artifacts (inventories, impact assessments, incident playbooks) rather than principles statements alone.

    How is U.S. federal AI policy changing?

    The U.S. underwent a documented policy reset: EO 14110 was rescinded on 2025-01-20 by EO 14148. OMB M-25-21 and M-25-22 (both 2025-04-03) now govern federal agency AI use and procurement. A National Policy Framework EO (2025-12-11) asserts federal preemption over state AI fragmentation.

    What evidence should boards require for AI governance?

    Boards should mandate: (1) a unified AI system/model/vendor inventory, (2) impact assessments for high-risk deployments, (3) data governance documentation, (4) testing/assurance results, (5) incident/near-miss logs, and (6) quarterly reporting against a phased compliance calendar. This 'evidence pack' supports multi-regime audit readiness.

    What is ISO/IEC 42001 and why does it matter?

    ISO/IEC 42001 (published December 2023) specifies requirements for an AI management system (AIMS). It matters because it provides an auditable governance structure compatible with continuous improvement — positioning AI governance as a systematic, certifiable capability rather than a one-time compliance exercise.

    How does China regulate AI?

    China uses sectoral, platform-focused binding controls: Algorithm Recommendation Provisions (effective 2022-03-01), Deep Synthesis Provisions (effective 2023-01-10), and Generative AI Interim Measures (effective 2023-08-15). These are complemented by PIPL (2021-11-01) and Data Security Law (2021-09-01).

    What is the Colorado AI law and when does it take effect?

    Colorado SB24-205 imposes algorithmic discrimination duties on developers and deployers of 'high-risk AI systems.' It was delayed by SB25B-004 and now takes effect on 2026-06-30. It is one of the most comprehensive U.S. state-level AI laws.

    What is the difference between the EU AI Act and the Cyber Resilience Act?

    The EU AI Act regulates AI systems and GPAI models directly, with risk classification and lifecycle obligations. The Cyber Resilience Act (CRA) regulates cybersecurity requirements for products with digital elements. They intersect because AI-enabled products must comply with both — creating parallel compliance timelines (CRA general application: 2027-12-11).

    What is a crosswalk between ISO 42001, NIST AI RMF, and AI Verify?

    ISO/IEC 42001 provides the management system backbone (Plan-Do-Check-Act), NIST AI RMF supplies the risk taxonomy (Govern-Map-Measure-Manage), and AI Verify delivers testable governance checks against 11 principles. Together they form a complementary three-layer assurance stack enabling multi-standard compliance.
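A crosswalk like this is easiest to audit against when held as structured data. The mapping below is an illustrative sketch only — the layer names and cell pairings are this report's reading, not an official ISO/NIST/IMDA concordance:

```python
# Illustrative three-layer crosswalk: management system -> risk taxonomy -> testable checks.
# Pairings are indicative, not an official concordance between the three frameworks.
CROSSWALK = {
    "governance layer": {
        "ISO/IEC 42001": "AIMS Plan phase (context, leadership, planning)",
        "NIST AI RMF": "Govern",
        "AI Verify": "Accountability and internal governance checks",
    },
    "risk layer": {
        "ISO/IEC 42001": "AIMS Do phase (operation, risk treatment)",
        "NIST AI RMF": "Map + Measure",
        "AI Verify": "Safety, robustness, fairness checks",
    },
    "assurance layer": {
        "ISO/IEC 42001": "AIMS Check-Act phase (audit, improvement)",
        "NIST AI RMF": "Manage",
        "AI Verify": "Process checks + technical tests against the 11 principles",
    },
}


def obligations_for(framework: str) -> dict[str, str]:
    """Slice the crosswalk by framework, e.g. for an auditor's working paper."""
    return {layer: cells[framework] for layer, cells in CROSSWALK.items()}


print(obligations_for("NIST AI RMF"))
```

The same structure extends naturally to jurisdiction obligations: add a column per regime and the inventory's "source of truth" role covers standards and laws alike.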

    What are the EU AI Act penalties?

    The EU AI Act establishes tiered administrative fines: up to €35M or 7% of global annual turnover for prohibited AI practices, up to €15M or 3% for non-compliance with high-risk system obligations, and up to €7.5M or 1% for incorrect information to authorities. The lower of the two amounts applies to SMEs and startups.
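The tiered cap logic reduces to a small calculation: the higher of the fixed amount and the turnover percentage, or the lower of the two for SMEs. A minimal sketch (the tier labels are this report's shorthand, not statutory terms):

```python
def eu_ai_act_fine_cap(tier: str, global_turnover_eur: int, is_sme: bool = False) -> int:
    """Upper bound of the administrative fine for the tiers cited above.

    tier: 'prohibited' (EUR 35M / 7%), 'high_risk' (EUR 15M / 3%),
          'information' (EUR 7.5M / 1%).
    The higher of the two amounts applies; for SMEs and startups, the lower.
    """
    tiers = {
        "prohibited": (35_000_000, 7),
        "high_risk": (15_000_000, 3),
        "information": (7_500_000, 1),
    }
    fixed, pct = tiers[tier]
    turnover_based = global_turnover_eur * pct // 100  # integer euros, exact
    return min(fixed, turnover_based) if is_sme else max(fixed, turnover_based)


# A firm with EUR 2bn global turnover: 7% = EUR 140M exceeds the EUR 35M floor.
print(eu_ai_act_fine_cap("prohibited", 2_000_000_000))               # 140000000
print(eu_ai_act_fine_cap("prohibited", 2_000_000_000, is_sme=True))  # 35000000
```

The worked example shows why turnover-based exposure, not the headline fixed amount, is the relevant figure for large enterprises.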

    What is the minimum viable governance for SMEs vs large enterprises?

    Large enterprises should target Level 4–5 maturity with dedicated AIMS programs and independent assurance. SMEs with lower-risk AI deployments can prioritize Level 2–3: documented policy, inventory, basic impact assessments, and incident awareness — scaled proportionally to risk exposure. The EU AI Act applies lower penalty thresholds for SMEs.

    How should procurement contracts allocate AI risk?

    Minimum procurement controls: (1) AI risk warranties that the system meets applicable requirements, (2) audit rights for design and training data, (3) incident notification within contractual timeframes, (4) data use limitations preventing vendor retraining on buyer data. OMB M-25-22 explicitly governs AI acquisition in U.S. government.

    What is the UK Algorithmic Transparency Recording Standard?

    The ATRS is the UK Government Digital Service's mandatory standard (since 2025) requiring government departments to document algorithmic tools in public registers. It complements the UK's pro-innovation regulatory approach, using sector-specific regulators rather than a single AI-specific law.

    How do agentic AI systems change governance requirements?

    Agentic AI — systems with autonomous decision-making, tool use, and transaction capabilities — requires extended governance artifacts: tool-use authorization boundaries, transaction approval thresholds, human-in-the-loop escalation triggers, and multi-step reasoning audit trails. Singapore published the first dedicated agentic AI governance framework in January 2026.
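At runtime, those artifacts reduce to policy gates evaluated before each agent action. A minimal sketch, with illustrative function names, thresholds, and labels — nothing here is taken from the Singapore framework:

```python
def agent_action_gate(tool: str, amount_eur: float, allowed_tools: set[str],
                      txn_limit_eur: float = 1_000.0) -> str:
    """Illustrative agentic-AI control gate combining three artifacts:
    a tool-use authorization boundary (allow-list), a transaction approval
    threshold, and a human-in-the-loop escalation trigger. All values are
    examples only; every decision would be written to the audit trail.
    """
    if tool not in allowed_tools:
        return "deny: tool not authorized"
    if amount_eur > txn_limit_eur:
        return "escalate: human approval required"
    return "allow: log to audit trail"


# A payment above the (hypothetical) EUR 1,000 threshold triggers escalation:
print(agent_action_gate("send_payment", 5_000, {"send_payment", "search"}))
# escalate: human approval required
```

The multi-step reasoning audit trail mentioned above would be the log of every such gate decision, in order, per agent run.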

    What are frontier AI developer safety frameworks?

    Frontier AI developers have published voluntary safety frameworks: OpenAI's Preparedness Framework, Anthropic's Responsible Scaling Policy (AI Safety Levels), Google DeepMind's Frontier Safety Framework, and Meta's AI Risk Assessment Framework. These are self-defined and lack independent external audit obligations. The EU AI Act GPAI provisions (Chapter V) are the first binding attempt at frontier model transparency.

    Methodology

    Research Approach

    This report is based entirely on desk research; no interviews or proprietary surveys were conducted. Its 45 research questions were designed for reproducibility and quarterly updates.

    85 curated sources form the evidence base, classified as Primary (official legal texts, regulator publications, standard body pages, institutional reports) or Secondary (analysis, reporting, academic commentary).

    The report intentionally adopts a multi-type classification: regulatory/governance review, comparative study, maturity model, cross-sector overview, and incident observatory — because AI governance and risk readiness is simultaneously jurisdiction-driven, standards-driven, and operationally implemented.

    Confidence Framework

    • High: Official legal texts, Federal Register entries, Official Journal publications, ISO edition dates
    • Medium: Translations, government web pages without consistent dates, survey-based metrics, institutional analysis
    • Low: Pending legislation, political signals, projections, unverified commentary

    Research Architecture

    Systematic desk research with full source traceability — no interviews, no proprietary surveys

    • 45 research questions: designed for reproducibility
    • 85 curated sources: primary & secondary classified
    • 20 key indicators: machine-readable dataset
    • 9+ jurisdictions: EU, US, CN, UK, SG, JP, AU, CA, BR

    Source Quality Distribution

    85 sources classified by authority level — 68% primary sources (official legal and regulatory texts)

    Official legal texts | 28 | 33%
    Government guidance | 18 | 21%
    Standards bodies | 12 | 14%
    Institutional reports | 10 | 12%
    Academic/analysis | 9 | 11%
    Industry surveys | 8 | 9%
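The distribution above is internally consistent, which is easy to verify: the counts sum to 85, each share is the count over the total rounded to the nearest percent, and the first three categories make up the 68% primary-source share. A quick check (category names as in the table):

```python
# Source counts from the distribution above; shares are count / 85,
# rounded to the nearest percent, matching the figures in the table.
counts = {
    "Official legal texts": 28,
    "Government guidance": 18,
    "Standards bodies": 12,
    "Institutional reports": 10,
    "Academic/analysis": 9,
    "Industry surveys": 8,
}
total = sum(counts.values())  # 85 curated sources

for label, n in counts.items():
    print(f"{label}: {n} ({round(100 * n / total)}%)")

# Primary share = legal texts + government guidance + standards bodies.
primary = counts["Official legal texts"] + counts["Government guidance"] + counts["Standards bodies"]
print(f"primary share: {round(100 * primary / total)}%")  # primary share: 68%
```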

    Limitations

    • AI-assisted generation: This report was generated with AI assistance and reviewed by humans. Critical data points should be independently verified.
    • Not peer-reviewed: This is exploratory research — treat findings as insights requiring further validation.
    • Policy volatility: U.S. federal-state dynamics and pending legislation (Brazil, Canada) change rapidly; verify current status for critical decisions.
    • Publication date gaps: Some government web pages do not display consistent publish dates; treated as stable reference pages with access dates documented.
    • Bounded jurisdictions: Focus on EU, U.S., China, UK, Singapore, Japan, Australia, Canada, and Brazil — other jurisdictions (e.g., India, South Korea) are not covered in depth.
    • Enterprise adoption data: IBM/Morning Consult survey represents enterprise samples (>1,000 employees); SME adoption may differ significantly.

    Data Sources

    38 primary sources

    Source | Accessed
    EUR-Lex — EU AI Act (Regulation 2024/1689) | 2026-02-17
    European Commission — GPAI Guidelines | 2026-02-17
    U.S. Federal Register — EO 14148 | 2026-02-17
    U.S. Federal Register — EO 14110 | 2026-02-17
    OMB Memorandum M-25-21 | 2026-02-17
    OMB Memorandum M-25-22 | 2026-02-17
    OMB Memorandum M-24-10 (superseded) | 2026-02-17
    Colorado General Assembly — SB24-205 | 2026-02-17
    Colorado General Assembly — SB25B-004 | 2026-02-17
    Utah Legislature — HB286 | 2026-02-17
    Council of Europe — AI Convention | 2026-02-17
    OECD — Recommendation on AI | 2026-02-17
    UNESCO — Ethics of AI Recommendation | 2026-02-17
    UNGA Resolution A/RES/78/265 | 2026-02-17
    G7 Hiroshima Process Guiding Principles | 2026-02-17
    ISO/IEC 42001:2023 | 2026-02-17
    ISO/IEC 23894:2023 | 2026-02-17
    ISO/IEC 38507:2022 | 2026-02-17
    NIST AI RMF 1.0 | 2026-02-17
    NIST AI 600-1 (GenAI Profile) | 2026-02-17
    PDPC Singapore — AI Verify | 2026-02-17
    China — Generative AI Interim Measures | 2026-02-17
    China — Algorithm Recommendation Provisions | 2026-02-17
    UK — Algorithmic Transparency Recording Standard | 2026-02-17
    IBM Global AI Adoption Index | 2026-02-17
    EUR-Lex — Cyber Resilience Act (Regulation 2024/2847) | 2026-02-17
    OWASP — LLM Top 10 | 2026-02-17
    MITRE ATLAS | 2026-02-17
    ISO/IEC 22989:2022 | 2026-02-17
    UK — Pro-Innovation Approach White Paper | 2026-02-17
    UK ICO — AI and Data Protection Guidance | 2026-02-17
    China — Deep Synthesis Provisions | 2026-02-17
    Japan — AI Guidelines for Business v1.0 | 2026-02-17
    Australia — AI Ethics Principles | 2026-02-17
    Singapore — Model AI Governance Framework 2.0 | 2026-02-17
    China — PIPL (Personal Information Protection Law) | 2026-02-17
    EO 14179 — Removing Barriers to AI Innovation | 2026-02-17
    MLCommons AI Safety Benchmark | 2026-02-17

    Version History

    1.3
    2026-02-18 (Latest)

    Added: Regulatory urgency heatmap (2026–2027), governance artifact lifecycle visual, compliance readiness checklist (19 controls), frontier AI developer safety dashboard, definition divergence table (cross-jurisdiction), incident response integration flowchart, research methodology dashboard, source quality breakdown. Expanded FAQ to 15 questions targeting high-intent long-tail queries. Added 11 new data sources (total: 39 curated). Enhanced LLMO extraction with expanded entity architecture.

    1.2
    2026-02-18

    Added: EU penalty structure visual, standards crosswalk dashboard (ISO 42001 ↔ NIST AI RMF ↔ AI Verify), board-level KPI dashboard, deadline countdown cards, procurement & vendor due diligence section, internal vs customer-facing AI governance comparison, incident reporting detail, 10-question FAQ section, full Report+Dataset+Organization+FAQPage JSON-LD schema graph.

    1.1
    2026-02-18

    Added: compliance timeline visualization, jurisdiction comparison chart, maturity model visual, adoption vs readiness gap dashboard, cross-regime convergence matrix, agentic AI governance chapter, 8-question FAQ section, evidence-based landscape map, penalty structures, SME vs enterprise guidance. Expanded data sources from 11 to 28. Added 16 keywords.

    1.0
    2026-02-17

    Initial publication — 85 sources, 20 scoreboard indicators, 5-level maturity model, 9+ jurisdictions mapped, control architecture checklist, and compliance timeline.

