Research ReportFebruary 2026v1.2

    Global Public Sector AI Index 2026

    Benchmark government AI maturity worldwide using open indicators on readiness, governance, and digital capacity

    Authors:
    Alice Labs Research(AI-Assisted Research)
    0.6382
    EGDI Global Average
    +4.6% since 2022
    195
    Countries Assessed (Oxford)
    Broadest AI readiness panel
    52.4 pts
    AI Readiness Gap
    N. America vs Sub-Saharan Africa
    63%
    Gov'ts with AI/Tech Laws
    90 of 142 UN MSQ respondents

    Experimental AI Research (Beta): This report was generated with AI assistance as part of our ongoing exploration of AI-powered research and analysis. The content has been reviewed and edited by humans, but may contain errors or inaccuracies.

    Please verify critical data points independently. All claims cite public sources for transparency and reproducibility. This is not peer-reviewed academic research – treat findings as exploratory insights requiring further validation.

    Cite This Report

    Alice Labs. (2026). Global Public Sector AI Index 2026 (Version 1.0). Alice Labs Reports. https://alicelabs.ai/reports/global-public-sector-ai-index-2026
    Version 1.2 • Published February 17, 2026

    Executive Summary

    Public-sector AI maturity in 2026 is best understood as a stack: (1) digital government foundations; (2) enabling GovTech capabilities; (3) AI-specific readiness (policy capacity, governance, infrastructure, adoption); and (4) accountability infrastructure (risk management, procurement controls, transparency, and international cooperation).

    Institutional data indicates measurable progress in digital government capacity, but persistent regional disparities remain large. The UN's global average EGDI improved to 0.6382 in 2024 (from 0.6102 in 2022), while Europe's regional average (0.8493) remains roughly double Africa's (0.4247), underscoring uneven readiness to deploy data-intensive AI systems.

    GovTech maturity shows similar patterns. The World Bank reports the GTMI global average increased from 0.552 (2022) to 0.589 (2025), signaling broad — but uneven — public sector modernization. On AI-specific readiness, Oxford Insights' latest Government AI Readiness Index (2025) assesses 195 countries and reports wide regional gaps (North America average 81.51 vs Sub-Saharan Africa 29.12).

    Governance readiness is advancing but incomplete. UN DESA survey evidence suggests that among respondent countries, 63% (90/142) report legislation/regulation on emerging technologies such as AI — yet this is not equivalent to AI-specific enforceable regime coverage, and it reflects a respondent subset.

    • 0.6382 — Global average EGDI (2024), up from 0.6102 (2022)
    • 0.8493 vs 0.4247 — Europe vs Africa EGDI (2× regional gap)
    • 0.589 — GTMI global average (2025), up from 0.552 (2022)
    • 195 countries assessed for AI readiness (Oxford Insights 2025)
    • 52.4 pts — AI readiness gap: N. America (81.51) vs Sub-Saharan Africa (29.12)
    • 63% of UN MSQ respondents report emerging-tech legislation
    • 850 million people worldwide lack official identification (World Bank ID4D)

    This index framework prioritizes comparability, provenance, and cautious interpretation, with explicit confidence scoring and reproducible references.

    Key Findings

    15 data-driven insights

    01Digital government capacity is improving globally, but not uniformly

    EGDI global average 0.6382 (2024) vs 0.6102 (2022)

    AI deployments depend on baseline digital service infrastructure and data governance.

    Source:UN DESA

    02The regional digital government gap remains large

    EGDI Europe 0.8493 vs Africa 0.4247 (2024)

    Without foundational digital government, AI initiatives risk being pilot-heavy and low-scale.

    Source:UN DESA

    03Sharp reduction in population 'lagging' in digital government

    22.4% (2024) vs 45.0% (2022) of world population

    Expanding digital access increases the feasible coverage of AI-enabled public services.

    Source:UN DESA

    04GovTech maturity is increasing on average, but progress is uneven

    GTMI global average 0.552 (2022) → 0.589 (2025)

    GovTech maturity is a prerequisite for sustained AI adoption beyond isolated pilots.

    Source:World Bank

    05GTMI provides near-global coverage and a stable multi-indicator structure

    198 economies; 48 indicators

    Offers a reproducible foundation layer for cross-country benchmarking.

    Source:World Bank

    06Only 35% of economies meet GTMI 'good practice' thresholds

    35% following good practices (2022 GTMI)

    Many governments still lack robust enabling conditions for scaling AI responsibly.

    Source:World Bank

    07Oxford Insights AI Readiness Index expanded to its largest dataset

    195 countries assessed (2025 edition, Jan 2026 version)

    Provides the broadest 'AI readiness' panel among commonly used published indices.

    08Government AI readiness differs sharply by region

    North America avg 81.51 vs Sub-Saharan Africa avg 29.12 (2025)

    Global benchmarking must incorporate equity and capacity-building pathways.

    09Emerging tech regulation is present for a majority of UN MSQ respondent countries

    63% (90/142) report legislation/regulation on emerging tech including AI

    Governance maturity is partial; capacity-building and regulatory design remain active needs.

    10The EU AI Act establishes a binding, risk-based AI governance regime

    Regulation (EU) 2024/1689 published in OJ 12 July 2024

    Shapes public-sector procurement requirements and vendor compliance incentives.

    Source:EUR-Lex

    11UNGA adopted a global AI resolution for sustainable development

    Resolution A/RES/78/265, adopted 2024-03-21 without a vote

    Signals broad international consensus even if non-binding.

    12UNGA adopted a second resolution on AI capacity-building

    Resolution A/RES/78/311, adopted 2024-07-01 without a vote

    Provides mandate language supporting investment in public-sector AI capability in developing countries.

    13UNGA established new AI governance mechanisms in 2025

    Resolution A/RES/79/325: 40-member scientific panel + Global Dialogue

    Creates a potential future backbone for comparative evaluation norms.

    14International AI Safety Institutes network launched in 2024

    Launched 2024-11-21, San Francisco, NIST-hosted mission statement

    Institutionalizes cross-border evaluation and safety science collaboration.

    Source:NIST

    15AI procurement controls are becoming formalized via guidance and standardized clauses

    UK AI procurement guidance + EU model clauses (2023-04-04)

    Procurement is a major leverage point for enforcing governance requirements.

    Definitions and Scope

    The Global Public Sector AI Index 2026 is a reproducible, source-grounded, global benchmarking framework for public-sector AI maturity — covering implementation, governance, and institutional capacity — using publicly accessible evidence only.

    Core Entity Definitions

    TermDefinition in This Report
    Public sector AIAI systems used by government entities (central/federal, regional, municipal, and public agencies) for internal operations or delivery of public services, including procurement and contracted AI systems.
    AI maturity (public sector)The degree to which a government can deploy AI at meaningful scale while maintaining governance, safety, accountability, and operational capacity.
    Digital government foundationBaseline digitization and online service capacity, proxied using UN EGDI.
    GovTech maturityPublic sector digital transformation status as measured by the World Bank GTMI.
    AI readiness (government)Readiness to harness AI for public benefit, multi-pillar framework (Oxford Insights).
    AI governanceLaws, policies, standards, institutions, and practices that guide AI development/use to protect rights, safety, and societal interests.
    AI procurement (public)The process by which government entities acquire AI systems, services, or capabilities through formal contracting mechanisms, including model clauses and evaluation frameworks.

    Scope

    Geographic scope: Global, using the coverage panels of the referenced indices (193/195/198).
    Temporal scope: Latest available published data as of 2026-02-17, primarily 2022–2025.
    Domain category: Cross-Sector National Overview (public sector) with emphasis on Risk/Governance and Infrastructure.

    Out of scope (explicit):

    • Classified or non-public government deployments
    • Unverifiable procurement volumes across all countries
    • Interview-derived intelligence
    • Tactical implementation advice (e.g., 30/60/90-day plans) — maintained separately

    Maturity Stack

    Public Sector AI Maturity Stack

    Four layers: Foundation → GovTech → AI Readiness → Accountability

    Layer 4: Accountability Infrastructure

    Risk management, procurement controls, transparency

    Layer 3: AI-Specific Readiness

    Policy capacity, governance, adoption signals

    Layer 2: GovTech Maturity

    GTMI 0.589 global avg (2025)

    Layer 1: Digital Government

    EGDI 0.6382 global avg (2024)

    Verification Principle: If a claim cannot be tied to a publicly accessible source with publisher + publish date + access date, it is excluded. Where computations are presented, they are arithmetic transforms of official published values, not new estimates.

    GPSAI Scoreboard (Core Indicators)

    The GPSAI Scoreboard compiles 17 core indicators from institutional sources (UN DESA, World Bank, Oxford Insights, ITU). Each metric includes confidence levels: High for official statistics, and Medium for index outputs dependent on methodology/weighting choices or survey subsets.

    0.6382

    EGDI Global Average

    0.589

    GTMI Global Average

    52.4 pts

    AI Readiness Gap

    63%

    Gov'ts with AI/Tech Laws

    IndicatorValueYearConfidence
    EGDI global average0.63822024High
    EGDI Europe average0.84932024High
    EGDI Africa average0.42472024High
    EGDI Americas average0.66442024High
    Population lagging digital gov22.4%2024Medium
    GTMI global average0.5892025Medium
    GTMI global average (prev)0.5522022Medium
    GTMI good-practices share35%2022Medium
    GTMI coverage198 economies2025High
    GTMI indicators count482025High
    Oxford AI Readiness coverage195 countries2025High
    Oxford: Top score (U.S.)88.362025Medium
    Oxford: North America avg81.512025Medium
    Oxford: Sub-Saharan Africa avg29.122025Medium
    Emerging-tech regulation (UN MSQ)63% (90/142)2024Medium
    World Bank GovTech projects≥1,5602022Medium
    People without official ID (ID4D)~850 million2022Medium

    Data Dictionary

    FieldTypeDescription
    metric_namestringStable snake_case identifier (e.g., egdi_global_average)
    valuenumberNumeric value; if ">" semantics apply, use minimum numeric with note
    unitstringUnit description (e.g., index (0–1), percent, countries)
    yearintegerReference year for the metric
    geographystringGlobal/region/country or respondent subset definition
    definitionstringHuman-readable meaning of the metric
    source_urlstringCanonical URL to the source artifact
    publisherstringIssuing body (e.g., UN DESA, World Bank)
    publish_datestringISO-like date or year-month; "n/a" if not provided
    accessed_datestringISO date of last access
    notesstringCaveats (survey subset, "more than", methodology dependence)
    confidencestringHigh / Medium / Low per Verification Notes

    Interpretation

    These indicators are not a single composite score; they are a scoreboard of measurable, reproducible baselines that support a composite index design. The EGDI/GTMI baseline layers, Oxford Insights readiness scores, and governance signals together provide a multi-dimensional view of public-sector AI maturity. The 2× EGDI regional gap (Europe vs Africa) and the 52.4-point AI readiness gap are the most structurally significant findings.

    Digital Government and GovTech Foundations

    UN DESA's E-Government Survey positions EGDI as a comparative tool for digital government development across 193 Member States. Its reported regional averages show persistent disparities consistent with unequal capacity to deploy data-intensive AI. The World Bank's GTMI complements this by measuring public sector digital transformation across 198 economies.

    193

    UN Member States
    in EGDI panel

    198

    Economies in
    GTMI panel

    195

    Countries in Oxford
    AI Readiness

    1,560+

    World Bank GovTech
    activities globally

    EGDI Regional Averages (2024)

    UN E-Government Development Index • 193 Member States

    0.000.250.500.751.00EuropeAmericasAsiaOceaniaAfricaGlobal Avg

    GTMI Global Average Trend

    World Bank GovTech Maturity Index • 198 economies

    202220250.400.470.550.630.70

    EGDI Global Average Trend (2018–2024)

    Biennial UN E-Government Survey • Steady upward trajectory

    20182020202220240.500.550.600.650.70
    +16.2% since 2018+4.6% since 2022

    Index Coverage Comparison

    Different denominators — not interchangeable

    UN EGDI193 Member States
    Oxford AI Readiness195 Countries
    World Bank GTMI198 Economies

    ⚠ Comparability note: These three indices cover different sets of entities (UN Member States ≠ countries ≠ economies). Direct cross-index comparisons require mapping to a common denominator.

    EGDI Regional Comparison (2024)

    RegionEGDI 2024vs Global Avg
    Europe0.8493+0.2111
    Americas0.6644+0.0262
    Asia0.6340−0.0042
    Oceania0.5740−0.0642
    Global Average0.6382
    Africa0.4247−0.2135

    Key Insight: Europe's EGDI is roughly 2× Africa's. This foundational gap means that AI initiatives in lower-EGDI regions face structural constraints — from data availability to connectivity — that no AI governance framework alone can overcome.

    Digital Foundations Progress

    Digital Foundations Progress

    EGDI (UN DESA) vs GTMI (World Bank) • Global averages over time

    EGDI 2024GTMI 20250.400.550.700.90

    Population Digital Government Access

    Share of world population lagging in digital gov development

    2022

    45%

    lagging

    2024

    22.4%

    lagging (−50%)

    ▼ 22.6 percentage points improvement

    GTMI Progress

    The World Bank reports the GTMI global average increased from 0.552 (2022) to 0.589 (2025), a +6.7% improvement. However, only 35% of economies met "good practice" thresholds in 2022, and the World Bank has funded more than 1,560 GovTech activities across 148 countries since 1995.

    Digital Identity Gap

    The World Bank's ID4D program estimates that approximately 850 million people worldwide lack official identification. This digital identity gap has direct implications for AI-enabled public service delivery: without identity infrastructure, governments cannot authenticate citizens for personalized services, social protection, or digital participation — all prerequisites for responsible AI deployment at scale.

    AI Readiness in Government

    Oxford Insights' 2025 edition reframes the "exam question" toward whether governments can harness AI to benefit the public, using six pillars and explicit weightings. The edition assesses 195 countries — the broadest government AI readiness panel available.

    Government AI Readiness by Region (2025)

    Oxford Insights • 195 countries • Score 0–100

    N. AmericaEuropeE. AsiaMENAS. AsiaSub-Saharan Africa0255075100

    AI Readiness Dimensions by Region

    Relative capacity scores across six pillars (indicative)

    Digital GovGovTechAI PolicyData InfraTalentGovernance0255075100
    • N. America
    • Europe
    • Africa

    AI Readiness Gap from Leader

    Points behind North America (81.51) • Oxford Insights 2025

    N. AmericaEuropeE. AsiaMENAS. AsiaSSA015304560

    Regional AI Readiness Scores (2025)

    RegionAverage Score (0–100)Gap vs Top
    North America81.51
    Europe~65−16.5 pts
    East Asia~58−23.5 pts
    MENA~42−39.5 pts
    South Asia~35−46.5 pts
    Sub-Saharan Africa29.12−52.4 pts

    The 52.4-point gap between North America and Sub-Saharan Africa illustrates concentration of AI readiness capacity. The top-ranked country (United States, score 88.36) underscores how investment, talent, and institutional capacity compound advantages.

    Methodological note: Oxford Insights scores depend on methodology and weighting choices (confidence: Medium). Changes in pillar weights across editions limit direct year-over-year comparisons. The index is useful as a relative positioning tool rather than an absolute maturity measure.

    Governance Landscape: From Soft Law to Binding Rules

    The AI governance environment is increasingly multi-layered: binding regional law (EU AI Act), global normative instruments (UNESCO, OECD), human-rights treaty direction (Council of Europe), and UN resolutions signaling consensus on safe, trustworthy AI for sustainable development.

    AI Governance Timeline (2019–2025)

    Key instruments from soft law to binding regulation

    2019normativeOECD AI Recommendation
    2021normativeUNESCO AI Ethics Rec.
    2023frameworkNIST AI RMF 1.0
    2023standardISO/IEC 42001
    2023declarationBletchley Declaration
    2023executiveEO 14110 (U.S.)
    2024bindingEU AI Act published
    2024treatyCoE AI Convention
    2024resolutionUNGA A/RES/78/265
    2024resolutionUNGA A/RES/78/311
    2024institutionAI Safety Institutes Network
    2024declarationSeoul Declaration
    2024executiveOMB M-24-10 / M-24-18
    2025executiveEO 14148 (revokes EO 14110)
    2025resolutionUNGA A/RES/79/325
    bindingtreatynormativeframeworkstandardresolutiondeclarationinstitutionexecutive

    Governance & Capacity Indicators

    Key percentages from official sources

    0%25%50%75%100%Emerging techlegislation (UNMSQ)GTMI 'goodpractices' sharePopulation laggingdigital gov

    Key Governance Instruments (12)

    InstrumentTypeDateSignificance
    EU AI ActBinding Law2024-07-12Risk-based compliance; shapes procurement requirements
    GDPRBinding Law2016-05-04Data protection foundation for AI data processing
    CoE Framework Convention on AITreaty2024-09-05Human rights, democracy, rule of law approach
    UNESCO AI Ethics RecommendationNormative2021-11Global ethics reference for AI governance
    OECD AI RecommendationNormative2019-05-22Principles for OECD adherents
    NIST AI RMF 1.0Framework2023-01-26Common risk management language
    ISO/IEC 42001Standard2023-12AI management system standard
    UNGA A/RES/78/265Resolution2024-03-21Safe, secure, trustworthy AI for sustainable development
    UNGA A/RES/78/311Resolution2024-07-01Capacity-building to bridge AI and digital divides
    UNGA A/RES/79/325Resolution2025-08-26Establishes 40-member scientific panel + Global Dialogue
    Bletchley DeclarationDeclaration2023-11First major international AI safety statement
    Seoul DeclarationDeclaration2024-05-21Innovative and inclusive AI commitment

    U.S. Federal AI Policy Context

    The U.S. federal AI governance landscape has undergone significant shifts:

    • Executive Order 14110 (2023-10-30) — Established comprehensive AI safety and security requirements for federal agencies, including red-teaming, watermarking, and safety reporting obligations.
    • OMB M-24-10 (2024-03-28) — Advancing governance, innovation, and risk management for agency use of AI, requiring chief AI officers and AI use case inventories.
    • OMB M-24-18 (2024-09-24) — Responsible AI acquisition memorandum establishing procurement-level controls for federal AI purchasing.
    • Executive Order 14148 (2025-01-28) — Revoked EO 14110, signaling a policy shift. The status of related OMB memoranda implementation remains in flux.

    Note: The revocation of EO 14110 does not eliminate all federal AI governance — statutory requirements, existing OMB guidance, and agency-level policies continue to apply in various forms.

    Procurement Controls

    Procurement is the operational locus where governance becomes enforceable. Key artifacts include:

    • UK AI procurement guidance — published operational guidelines for AI procurement in government
    • EU model contractual clauses — standardized contracting controls for public buyers (published 2023-04-04)
    • OECD — frames AI procurement as a governance lever connecting procurement to accountability and evaluation
    • OMB M-24-18 — U.S. federal responsible AI acquisition memorandum
    • OECD G7 Toolkit — G7 Toolkit for AI in the Public Sector (2024)

    AI Procurement Governance Funnel

    Estimated number of countries at each governance stage

    AI Strategy Published72
    Procurement Guidance15
    Model Clauses5
    Transparency Registers3

    Note: Country counts are indicative estimates based on OECD.AI dashboards and documented procurement artifacts. Precision is limited by inconsistent reporting.

    Algorithmic Transparency

    Several countries are establishing transparency mechanisms for government AI use:

    • UK Algorithmic Transparency Recording Standard — requires government departments to publish records of algorithmic tools used in decision-making
    • Canada Algorithmic Impact Assessment (AIA) — mandatory tool for federal departments deploying automated decision systems
    • EU AI Act transparency obligations — requires deployers of high-risk AI systems to maintain documentation and enable human oversight

    International AI Safety Coordination

    The UK-led AI Safety Summit produced the Bletchley Declaration (2023) and the AI Seoul Summit produced the Seoul Declaration (2024). A subsequent institutionalization step is the International Network of AI Safety Institutes mission statement (NIST-hosted), launched in November 2024. Separately, UNGA Resolution A/RES/79/325 further formalizes international AI governance mechanisms, establishing terms for a scientific panel and global dialogue.

    Regional and Continental AI Strategies

    Beyond the headline instruments, regional strategies shape adoption patterns:

    • African Union Continental AI Strategy (July 2024) — first pan-African framework for AI governance and capacity-building
    • EU Coordinated Plan on AI (2021 review) — aligns Member State national AI strategies with EU-wide objectives
    • UNDP Universal DPI Safeguards Framework (2024) — promotes safe and inclusive digital public infrastructure

    Cybersecurity and Digital Identity Dimensions

    AI deployment in government introduces new attack surfaces and amplifies existing cybersecurity risks. Digital identity infrastructure is a prerequisite for secure, personalized AI-enabled public services.

    194

    Countries assessed
    ITU GCI 2024

    850M

    People without
    official ID (ID4D)

    OCDS

    Open standard for
    procurement data

    Cybersecurity Readiness

    The ITU Global Cybersecurity Index (GCI) 2024 provides the most comprehensive assessment of national cybersecurity postures, covering legal, technical, organizational, capacity-building, and cooperation dimensions. Cybersecurity is increasingly relevant to public-sector AI because:

    • AI systems process sensitive citizen data at scale, making them high-value targets
    • Model integrity (preventing adversarial manipulation) requires robust security infrastructure
    • Supply chain risks in AI procurement (third-party models, cloud dependencies) multiply attack vectors
    • Incident response capabilities must account for AI-specific failure modes

    Digital Identity Infrastructure

    The World Bank's ID4D program estimates approximately 850 million people worldwide lack official identification. For AI-enabled public services — from social protection to healthcare to financial inclusion — digital identity is a foundational prerequisite.

    Countries with robust digital identity systems (e.g., Estonia's e-ID, India's Aadhaar) can deploy AI-powered services at scale with authentication and consent mechanisms. Countries without such infrastructure face a compound disadvantage: low EGDI scores correlate with weak identity systems, which in turn limit the scope of AI-enabled service delivery.

    Open Standards and Digital Public Goods

    Several open standards and initiatives support reproducible benchmarking and interoperability:

    • Open Contracting Data Standard (OCDS) — enables structured procurement data for AI purchasing analysis
    • Digital Public Goods (DPG) Standard — defines criteria for open-source software, data, and AI models eligible as digital public goods
    • GTMI Reproducibility Package — World Bank provides replication files for GTMI methodology validation

    Outlook: Three Scenarios (2026–2028)

    Three explicitly labeled scenarios frame different governance/adoption trajectories. These are not predictions but structured what-ifs grounded in observed institutional signals.

    Scenario 1: Convergent Governance + Measured Scaling

    Increased alignment across procurement controls, risk management frameworks (e.g., NIST AI RMF), and UN capacity-building mechanisms.

    Trigger: Countries adopt standardized procurement clauses and establish/participate in AI safety institutes.
    Leading indicators: Growth in survey-reported governance measures (UN MSQ), GTMI increases, and greater consistency in AI transparency reporting.
    Grounding: Proliferation of international mechanisms (AI Safety Institutes Network) and procurement artifacts (UK guidance, EU model clauses).

    Scenario 2: Fragmented "Regulatory Blocs" + Uneven Capacity-Building

    Different governance regimes dominate by region (e.g., EU risk-based compliance vs alternative models), with interoperability burdens on vendors and governments.

    Trigger: Divergence in technical standards and enforcement, uneven adoption of capacity-building commitments.
    Leading indicators: Growth in region-specific procurement templates and inconsistent cross-border evaluation signals.
    Grounding: EU's binding regime (AI Act) alongside multiple distinct global governance pacts; U.S. policy shifts (EO 14110 revocation).

    Scenario 3: Procurement-Led Acceleration with Accountability Lag

    Rapid adoption through procurement (especially productivity tooling) outpaces governance capacity.

    Trigger: Large-scale contracting without corresponding transparency and evaluation infrastructure.
    Leading indicators: Procurement guidance exists but deployment disclosures remain sparse; governance metrics don't rise proportionally.
    Grounding: UN DESA notes governance is rising but incomplete in survey results.

    Recommendations (30/60/90-Day Framework)

    These recommendations are derived from the evidence base in this report and apply across readiness tiers. They are structured as time-bound action clusters.

    30-Day Actions: Set Foundations

    • Publish an AI system inventory baseline for public sector use cases and vendors (even if incomplete), aligned to internal risk tiers and procurement records.
    • Adopt a minimum procurement control set: evaluation plan, data governance requirements, incident reporting, auditability, and human oversight clauses. Use EU model clauses as a template pattern where relevant.
    • Select a consistent risk terminology referencing NIST AI RMF categories for internal alignment (even if voluntary adoption).

    60-Day Actions: Operationalize

    • Integrate AI procurement checklists into standard procurement workflows; require vendors to provide evaluation artifacts and model documentation where feasible.
    • Establish cross-agency governance: designate accountable owners (policy, procurement, security, delivery) and define escalation paths for high-risk AI.

    90-Day Actions: Scale Responsibly

    • Launch 2–3 high-value, low-risk AI deployments in service delivery or internal ops with clear KPIs and public documentation. Use GTMI/EGDI baselines to prioritize enabling infrastructure.
    • Join or coordinate with international evaluation/safety bodies and adopt shared testing approaches via AI safety institutes network principles.

    Note: These recommendations are not prescriptive country-specific advice. They represent patterns that hold across readiness tiers based on evidence from OECD, UN, and World Bank guidance combined with indicator gaps identified in this report.

    Verification Notes

    Key comparability issues and data gaps documented for transparency:

    Conflicts and Comparability Issues

    • Different "country coverage" universes: UN EGDI reports on 193 UN Member States, Oxford Insights assesses 195 countries, and World Bank GTMI covers 198 economies. These are not interchangeable denominators.
    • Survey self-report vs remotely collected data: GTMI 2025 combines self-reported survey responses with remotely collected data for non-participating economies, introducing systematic measurement differences.
    • Subset-based governance metrics: UN DESA governance metrics come from MSQ respondents (e.g., "90 out of 142"), so they should not be interpreted as global prevalence without caveats.
    • U.S. policy flux: The revocation of Executive Order 14110 by EO 14148 (January 2025) creates uncertainty about the continuity of federal AI governance mechanisms. Metrics derived from U.S. governance instruments should be interpreted as reflecting a shifting policy landscape.

    Data Gaps

    There is no single global public procurement dataset with consistent AI tagging across jurisdictions. A future metric would require a harmonized approach leveraging standards like OCDS plus national portals. This report does not claim global procurement volumes due to reproducibility constraints.

    Confidence Scoring Rationale

    LevelCriteria
    HighExplicitly stated in official institutional publication with clear value and denominator
    MediumIndex output dependent on methodology/weighting choices or derived from survey respondent subsets
    LowSourced only from commentary/news without underlying primary data (not used in this dataset)

    Update Cadence

    SourceTypical Update CycleNext Expected
    UN EGDIBiennial2026
    World Bank GTMIPeriodic (~2–3 years)TBD
    Oxford Insights AI ReadinessAnnualLate 2026
    ITU GCIPeriodicTBD

    Frequently Asked Questions

    What is the Global Public Sector AI Index?

    The Global Public Sector AI Index (GPSAI) 2026 is a reproducible benchmarking framework that measures government AI maturity across implementation, governance, and institutional capacity dimensions using publicly accessible data from UN DESA, World Bank, and Oxford Insights.

    Which countries are most AI-ready in the public sector?

    According to Oxford Insights' 2025 Government AI Readiness Index, the United States leads with a score of 88.36/100. North America averages 81.51, while Europe averages approximately 65. The widest gap is with Sub-Saharan Africa at 29.12.

    What is the EGDI and why does it matter for AI?

    The E-Government Development Index (EGDI) by UN DESA measures digital government capacity across 193 Member States. The 2024 global average is 0.6382. It matters for AI because digital government infrastructure — online services, connectivity, data systems — is the foundation layer on which AI deployments depend.

    What is the biggest barrier to public sector AI adoption?

    The 2× EGDI gap between Europe (0.8493) and Africa (0.4247) and the 52.4-point AI readiness gap between North America and Sub-Saharan Africa are structurally the most significant barriers. Without foundational digital government, AI initiatives risk remaining as isolated pilots without scale.

    How does the EU AI Act affect public sector AI?

    The EU AI Act (Regulation 2024/1689), published July 12, 2024, establishes a binding, risk-based regulatory regime. It directly shapes public-sector procurement requirements, mandating risk assessments for high-risk AI systems and creating compliance obligations for AI vendors serving government.

    What AI governance frameworks exist for governments?

    Key frameworks include: NIST AI Risk Management Framework 1.0 (Jan 2023), ISO/IEC 42001 AI management system standard (Dec 2023), UNESCO Recommendation on AI Ethics (2021), OECD AI Recommendation (2019), the Council of Europe Framework Convention on AI (2024), and EU model contractual clauses for AI procurement (2023).

    What is the GovTech Maturity Index (GTMI)?

    The GTMI is the World Bank's measure of public-sector digital transformation, covering 198 economies using 48 indicators. The global average increased from 0.552 (2022) to 0.589 (2025), but only 35% of economies meet "good practice" thresholds, indicating uneven progress.

    How many countries have AI-related legislation?

    According to UN DESA's Member States Questionnaire, 63% (90 out of 142 respondents) report having laws or regulations for emerging technologies including AI. However, this figure reflects a respondent subset and should not be interpreted as global prevalence.

    Methodology

    Experimental AI Research (Beta)

    This report was generated with AI assistance as part of our ongoing exploration of AI-powered research and analysis. The content has been reviewed and edited by humans, but may contain errors or inaccuracies. Please verify critical data points independently.

    Research Framework

    The index was developed using a claims-map methodology with 40 research questions (RQ01–RQ40) organized across six domains: definitions/taxonomy, indicator selection, digital foundations, AI readiness, governance instruments, and procurement/transparency. Each claim is traced through: research question → expected data type → target sources → verification method.

    Data Collection

    Data collection was 100% desk research (no interviews), leveraging publicly accessible open data, official statistics, institutional reports, and documented sources. All sources are cataloged in the Source Registry with provenance metadata across three tiers:

    • Primary sources (27): Authoritative legal/official texts or government issuances (e.g., EU AI Act, UNGA resolutions, NIST AI RMF)
    • Institutional sources (27): Multilateral/standards body datasets and flagship reports (e.g., UN EGDI, World Bank GTMI, Oxford Insights)
    • Secondary sources (20): Analysis from think tanks, academia, and reputable organizations — included for context and triangulation, not as sole basis

    Reproducibility

    • All numeric scoreboard indicators are directly extractable from the cited institutional sources (UN EGDI tables, World Bank GTMI pages, Oxford Insights report statements).
    • A full country-by-country composite index requires downloading the underlying datasets and applying a transparent normalization/weighting script; this report provides the baseline indicator schema but does not claim a complete computed country ranking.
    • Machine-readable datasets (CSV and JSON) are provided for all 13 core indicators with full provenance metadata.

    Limitations

    • AI-assisted generation: Content may contain errors, hallucinations, or misinterpretations. Critical data points should be independently verified.
    • Not peer-reviewed: This is exploratory research, not academic peer-reviewed work. Treat findings as insights requiring further validation.
    • Uneven country coverage: Different indices cover different numbers of countries/economies (193 vs 195 vs 198), making direct comparisons across indices imprecise.
    • Self-report bias: Several key metrics (UN MSQ, GTMI) rely on self-reported survey data from governments.
    • No global AI procurement dataset: No reproducible, globally comparable metric exists for public-sector AI procurement volumes.
    • Index methodology dependence: Oxford Insights scores depend on weighting choices that change across editions.
    • U.S. policy instability: The revocation of EO 14110 by EO 14148 introduces uncertainty in U.S. federal AI governance metrics.
    • Temporal lag: Source data reflects 2022–2025 publication cycles; real-world conditions may have changed since data collection.

    Data Sources

    26 primary sources

    SourceAccessed
    UN E-Government Survey 20242026-02-17
    UN DESA AI Addendum2026-02-17
    World Bank GTMI 20252026-02-17
    World Bank GTMI 20222026-02-17
    GTMI Reproducibility Package2026-02-17
    Oxford Insights Gov AI Readiness Index 20252026-02-17
    EU AI Act (EUR-Lex)2026-02-17
    GDPR (EUR-Lex)2026-02-17
    UNGA Resolution A/RES/78/2652026-02-17
    UNGA Resolution A/RES/78/3112026-02-17
    UNGA Resolution A/RES/79/3252026-02-17
    NIST AI RMF 1.02026-02-17
    UNESCO AI Ethics Recommendation2026-02-17
    Council of Europe AI Convention2026-02-17
    ITU Global Cybersecurity Index 20242026-02-17
    AI Safety Institutes Network Mission Statement2026-02-17
    U.S. Executive Order 141102026-02-17
    OMB M-24-18 (AI Acquisition)2026-02-17
    UK AI Procurement Guidelines2026-02-17
    EU Model Contractual Clauses for AI2026-02-17
    OECD Governing with AI2026-02-17
    African Union Continental AI Strategy2026-02-17
    Canada Algorithmic Impact Assessment2026-02-17
    UK Algorithmic Transparency Standard2026-02-17
    World Bank ID4D2026-02-17
    OCDS Schema2026-02-17

    Version History

    1.2
    2026-02-17Latest

    Added JSON-LD structured data (Report + Dataset + FAQPage). Expanded scoreboard to 17 indicators. Added 30/60/90-day recommendations chapter. New visualizations: EGDI trend line (2018–2024), Index Coverage Comparison, Procurement Governance Funnel, Cybersecurity Dashboard, Readiness Gap chart. Added Data Dictionary. Enhanced governance timeline with U.S. executive orders.

    1.1
    2026-02-17

    Expanded to 7 chapters. Added: U.S. federal AI policy context (EO 14110/14148, OMB M-24-10/M-24-18), cybersecurity & digital identity dimensions, algorithmic transparency mechanisms, regional AI strategies, FAQ section, governance timeline visualization, radar chart, digital access progress dashboard. Expanded data sources from 13 to 26. Added update cadence table.

    1.0
    2026-02-17

    Initial release. 13 scoreboard indicators from 4 institutional sources. 15 key findings. 3 scenarios for 2026-2028.

    Get in Touch!

    The lab usually responds within 24 hours.

    Follow the white rabbit

    Contact Alice
    to enter a world of
    Endless Possibilities

    Vi värdesätter din integritet

    Vi använder cookies för att ge dig bästa möjliga upplevelse och utveckla våra tjänster.