Executive Summary
The convergence of civilian and military applications of artificial intelligence has outpaced the legal and institutional frameworks designed to govern dual-use technologies. Foundation models developed for commercial purposes — language models, computer vision systems, reinforcement learning architectures — are directly applicable to autonomous weapons systems, signals intelligence, predictive policing, and cyber operations. The existing multilateral export control regime, anchored in the Wassenaar Arrangement on Export Controls for Conventional Arms and Dual-Use Goods and Technologies, was designed for physical goods and discrete software with identifiable military applications. It is structurally incapable of governing AI models that are intangible, continuously updated, and deployable globally via cloud infrastructure. This article examines three structural gaps in current dual-use governance as applied to AI and proposes a layered governance model combining technical risk classification, multilateral information-sharing mechanisms, and ITU-based standards for high-risk AI systems.
Assessment: The Dual-Use Reality of Foundation Models
The dual-use character of AI is not theoretical. Primer Technologies, a San Francisco-based AI company, developed natural language processing tools for media monitoring that were subsequently adapted for open-source intelligence analysis by the US intelligence community. Scale AI’s data labelling platform, initially built for autonomous vehicle training, secured contracts with US Special Operations Command for military targeting applications. In February 2026, the US Department of War terminated a contract with Anthropic after investigations revealed that Claude-based systems were being integrated into battlefield decision-support tools without adequate human oversight safeguards. The Associated Press reported in September 2025 that American technology firms had supplied AI-powered surveillance infrastructure to Chinese state security agencies through third-country intermediaries, circumventing existing Entity List restrictions. These cases illustrate a systemic pattern: the technical capabilities that make AI commercially valuable — pattern recognition, natural language understanding, autonomous decision-making — are precisely the capabilities that make AI militarily and strategically significant.
Analysis: Three Structural Gaps in Current Dual-Use Governance
The first gap concerns the object of control. The Wassenaar Arrangement’s control lists specify items by technical parameters: processing speed, encryption strength, imaging resolution. AI models cannot be meaningfully specified in these terms. A large language model’s capabilities are emergent properties of scale, training data, and fine-tuning, not fixed technical specifications. The same base model can be a customer service chatbot or a disinformation generation engine depending on fine-tuning and deployment context.

The second gap concerns verifiability. Export controls depend on the ability to monitor and verify compliance. Physical goods pass through customs checkpoints, and conventional software can be tracked through licensing agreements, but AI models deployed via cloud APIs leave no physical trace. A model hosted on servers in one jurisdiction can be accessed from any other jurisdiction with an internet connection.

The third gap concerns WTO compatibility. GATT Article XXI permits trade restrictions that a member considers necessary to protect its essential security interests, including measures relating to fissionable materials, traffic in arms, and actions taken in time of war. Whether AI-specific export restrictions qualify under this exception has never been tested in WTO dispute settlement. The February 2025 expansion of US semiconductor export controls to cover AI training chips was challenged by China in WTO consultations, but the legal status of restrictions on AI models themselves remains unresolved.
The International Dimension: Toward a Layered Governance Model
A credible governance framework for dual-use AI must operate at three levels simultaneously. At the technical level, internationally agreed criteria for AI risk classification are needed — criteria that can distinguish between a language model fine-tuned for medical research and the same architecture fine-tuned for autonomous target identification. The ITU, through its Telecommunication Standardization Sector, and ISO/IEC Joint Technical Committee 1 are the natural institutional homes for such standards. At the information-sharing level, a multilateral mechanism for reporting and monitoring high-risk AI transfers is needed — analogous to the International Atomic Energy Agency’s safeguards system, but adapted to the intangible and rapidly evolving nature of AI capabilities. At the normative level, the Wassenaar Arrangement must be updated to include AI-specific provisions that are technically precise enough to be enforceable and legally robust enough to withstand WTO challenge.
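The classification criteria described above can be made concrete with a toy sketch. The point of the sketch is the article’s central claim: risk tier must key on capability and deployment context (fine-tuning domain, autonomy, cross-border API exposure), not on hardware parameters. All attribute names, domain labels, and thresholds below are illustrative assumptions for exposition, not proposed standards; real criteria would be set by a standards body such as ITU-T or ISO/IEC JTC 1.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    MINIMAL = "minimal"        # ordinary commercial deployment
    CONTROLLED = "controlled"  # reportable under an information-sharing mechanism
    RESTRICTED = "restricted"  # subject to export licensing


@dataclass
class ModelProfile:
    """Hypothetical attributes of a deployed model (illustrative only)."""
    training_compute_flop: float   # total training compute
    fine_tuned_domain: str         # e.g. "medical", "target_identification"
    autonomous_actuation: bool     # can the system act without human sign-off?
    cloud_api_exposed: bool        # reachable across borders via API?


# Illustrative values only; a real list/threshold would be multilaterally agreed.
SENSITIVE_DOMAINS = {"target_identification", "signals_intelligence", "cyber_operations"}
COMPUTE_THRESHOLD_FLOP = 1e25


def classify(profile: ModelProfile) -> RiskTier:
    """Map a model profile to a risk tier based on capability and context."""
    # Sensitive fine-tuning or autonomous actuation dominates all other factors:
    # the same base architecture lands here only via its deployment context.
    if profile.fine_tuned_domain in SENSITIVE_DOMAINS or profile.autonomous_actuation:
        return RiskTier.RESTRICTED
    # Very large general-purpose models exposed via cloud APIs are reportable,
    # reflecting the verifiability gap for intangible cross-border transfers.
    if profile.training_compute_flop >= COMPUTE_THRESHOLD_FLOP and profile.cloud_api_exposed:
        return RiskTier.CONTROLLED
    return RiskTier.MINIMAL


# The same architecture, two tiers: a medical fine-tune is MINIMAL,
# a targeting fine-tune of identical scale is RESTRICTED.
medical = ModelProfile(1e24, "medical", False, True)
targeting = ModelProfile(1e24, "target_identification", False, True)
```

Note the design choice the sketch encodes: deployment context overrides scale, so a small model fine-tuned for targeting is treated as higher-risk than a frontier-scale general-purpose model, which is precisely the inversion that parameter-based control lists cannot express.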
Policy Implications
First, the current export control approach — focused on hardware (semiconductor chips, advanced GPUs) as a proxy for AI capability — is a temporary measure, not a governance strategy. As AI training efficiency improves and alternative hardware architectures emerge, chip-based controls will become increasingly porous. Second, unilateral export restrictions create alliance fragmentation. European and Asian allies of the United States have expressed growing concern about the extraterritorial application of US export controls, which restrict their own technology firms without meaningful consultation. Third, the absence of multilateral AI-specific dual-use standards creates a regulatory vacuum that benefits neither security nor commerce — it merely shifts competition from the technological domain to the regulatory domain. Fourth, developing countries are disproportionately affected by the current regime, which restricts their access to AI capabilities without offering them meaningful participation in governance decisions.
Recommendations
First, the Wassenaar Arrangement Plenary should establish an AI Working Group mandated to develop technically precise control list entries for high-risk AI systems, with input from the ITU Telecommunication Standardization Sector and ISO/IEC JTC 1. Second, the European Commission should propose an update to Regulation 2021/821 that includes AI-specific provisions, including mandatory risk assessments for cloud-based AI service exports and end-use monitoring requirements for high-capability models. Third, the G7 should establish a multilateral AI transfer monitoring mechanism, modelled on the Nuclear Suppliers Group’s information-sharing protocols, to track cross-border transfers of high-risk AI capabilities. Fourth, the ITU and ISO/IEC JTC 1 should accelerate the development of technical standards for AI capability assessment that can serve as the foundation for internationally harmonised risk classification. Fifth, the UN Secretary-General should commission a study on the applicability of existing arms control verification methodologies to AI systems, as a first step toward a multilateral AI arms control framework.
The dual-use challenge of AI will not be resolved by export controls designed for a previous technological era. It requires a new institutional architecture that combines technical precision, multilateral legitimacy, and the flexibility to adapt to rapidly evolving capabilities. The tools exist — in Geneva’s institutional ecosystem, in the Wassenaar framework, in the ITU’s standardization processes. What is needed is the political recognition that AI governance at the intersection of trade and security is not a problem that any single state can solve alone.