Executive Summary
Three regulatory regimes now anchor global AI governance: the EU AI Act (in force August 2024, high-risk obligations enforceable August 2026), China’s Generative AI Interim Measures (effective August 2023), and the United States’ deregulatory posture under President Trump’s Executive Order 14179 (January 2025), which revoked the Biden administration’s Executive Order 14110. These frameworks are not merely different in emphasis; they are structurally incompatible. A foundation model trained in the United States, fine-tuned on data from multiple jurisdictions, and deployed via API to clients in the EU and China must simultaneously satisfy mandatory risk-tiering with extraterritorial reach, state-aligned content governance with data localisation, and a framework predicated on the absence of federal AI-specific regulation. This article argues that no national or regional framework can resolve the resulting compliance asymmetries, and proposes a realistic sequencing toward a multilateral AI governance architecture anchored in Geneva’s institutional ecosystem.
Assessment: The State of Regulatory Fragmentation
The EU AI Act (Regulation (EU) 2024/1689) entered into force on 1 August 2024 and will impose mandatory conformity assessments for high-risk AI systems from August 2026. It applies to any provider placing an AI system on the EU market, regardless of where the provider is established: an extraterritorial reach modelled on the General Data Protection Regulation. China’s approach operates through sector-specific measures administered by the Cyberspace Administration of China: the Interim Measures for the Management of Generative AI Services (effective 15 August 2023) impose content-alignment obligations, algorithm-registration requirements, and mandatory security assessments. The United States, following the revocation of Executive Order 14110, currently lacks binding federal AI legislation. The result is a tripartite system in which the same model faces three fundamentally different legal architectures depending on the jurisdiction of deployment, and often all three simultaneously.
Analysis: Why National Frameworks Cannot Resolve Global Governance Gaps
The core problem is not regulatory diversity per se; different jurisdictions have always regulated technology differently. The problem is that AI systems are not bounded by jurisdiction. A large language model does not have a nationality: its training data is multinational, its infrastructure is distributed, and its deployment is instantaneous across borders via API. This creates three structural governance failures. First, compliance asymmetry: the same system faces conflicting, and sometimes irreconcilable, obligations in different markets, forcing multinational enterprises either to satisfy the strictest regime everywhere or to fragment their products by jurisdiction. Second, regulatory arbitrage: firms can structure their operations to exploit gaps between regimes and locate activities in the least-regulated jurisdiction, particularly where enforcement depends on physical presence. Third, governance vacuum: systems that operate between jurisdictions (cross-border AI-powered financial trading, automated content moderation for global platforms, AI-driven supply chain optimisation) fall into the gaps between national frameworks.
The International Dimension: Toward a Geneva-Based Architecture
The institutional infrastructure for multilateral AI governance already exists in embryonic form. The OECD AI Principles, adopted in 2019 and updated in May 2024, have 47 adherents and establish a normative baseline. The ITU’s AI for Good platform convenes technical and policy stakeholders annually in Geneva. The UN Secretary-General’s Roadmap for Digital Cooperation (2020) and the subsequent Global Digital Compact (2024) provide high-level political mandates. The WTO’s Joint Statement Initiative on Electronic Commerce, involving 90 members, is negotiating disciplines on data flows that directly affect AI deployment. What is missing is a binding institutional mechanism that converts these parallel processes into an integrated governance architecture — an entity capable of producing mutual recognition agreements for AI risk classification, harmonised conformity assessment procedures, and a dispute resolution mechanism for cross-border AI governance conflicts.
Policy Implications
First, the current trajectory leads to deepening fragmentation. Without a multilateral coordination mechanism, the EU, US, and China will continue to develop AI governance frameworks in isolation, with smaller states forced to choose between incompatible regulatory models. Second, the WTO offers a proven institutional template. The trajectory from the GATT, a provisional agreement on trade in goods (1947), to the WTO, a comprehensive institutional framework with binding dispute settlement (1995), provides a realistic sequencing model for AI governance. Third, Geneva’s institutional density is a strategic asset. The co-location of the WTO, ITU, WIPO, WHO, and the UN Office at Geneva creates unique conditions for cross-institutional coordination. Fourth, the Global South must be included from the outset. The 118 UN member states absent from major non-UN AI governance initiatives represent a legitimacy deficit that will undermine any framework claiming universal applicability.
Recommendations
First, the OECD should convene a working group to develop mutual recognition criteria for AI risk classification systems, building on its existing AI Principles framework. Second, the WTO General Council should mandate an exploratory programme on AI governance within the Joint Statement Initiative on Electronic Commerce, with a specific focus on AI-related trade barriers and regulatory interoperability. Third, the ITU and ISO/IEC Joint Technical Committee 1 should accelerate the development of technical standards for AI system transparency, auditability, and risk assessment that can serve as the technical foundation for mutual recognition agreements. Fourth, the UN General Assembly should establish a mandate for a dedicated multilateral forum on AI governance, building on the Global Digital Compact and the Secretary-General’s proposals, with Geneva as its institutional seat. Fifth, Switzerland should leverage its position as host state to propose a Geneva AI Governance Framework initiative, modelled on its successful facilitation of the WTO Trade Facilitation Agreement negotiations.
The question is not whether AI will be governed multilaterally, but whether the multilateral system will act before fragmentation becomes irreversible. The institutional infrastructure exists. The normative convergence is documented. What is needed is political will, institutional sequencing, and the recognition that AI governance is not a technical problem with a national solution — it is a multilateral challenge that requires a multilateral architecture.