International AI safety cooperation looks impressive on paper: dozens of summits, declarations, and new institutes. But beneath the diplomatic pageantry, the coordination needed to manage frontier AI risks remains fragile. Despite growing concern, leading corporations and governments have yet to implement robust accountability, transparency, and fairness frameworks.
Over the past five years, states have organised a series of multilateral summits and talks that brought together governments, labs, and civil society to discuss frontier AI risks (Bletchley Park 2023, Seoul 2024, Paris 2025). One success came at Bletchley in 2023, where 28 countries and the EU, including the US and China, signed the Bletchley Declaration: a rare moment of consensus on catastrophic risks from frontier AI. The summit also catalysed the creation of AI Safety Institutes in the UK, US, Singapore, and Japan as national nodes for technical safety evaluation.
But states aren't the only ones contributing to AI safety. The UN's High-Level Advisory Body on AI has become a key player: in 2024 it released recommendations for global AI governance, including the creation of an IPCC-style international scientific panel on AI risks. A General Assembly resolution in March 2024 on 'safe, secure and trustworthy AI' became the first global consensus document on the technology.
Bilateral cooperation presents a more mixed picture. The US and the UK cooperate closely through their AI Safety Institutes: the two institutes share technical staff, collaborate on model evaluation frameworks, and coordinate approaches to capability testing. The partnership functions partly because both countries take similar approaches to regulation and neither sees the other as a strategic competitor in AI development. The outlook for US-EU cooperation is less promising, largely because the two diverge on governance: America prioritises innovation while the European Union favours regulation-first frameworks. The EU AI Act offers a comprehensive regulatory model that shapes global debates, though it lacks strong enforcement mechanisms so far. The US is unlikely to adopt anything similar, since American AI labs would likely regard such a framework as structurally constraining.
Given Chinese and American dominance in frontier AI, their relationship is where cooperation is most strained. Some academic and informal dialogues exist between the two countries, but government-level engagement is constrained by geopolitical tension, most visibly over export controls on advanced chips and national security concerns. Underlying this is the broader US-China rivalry over technological supremacy. Neither side knows how far along the other is toward AGI, so both are accelerating domestic AI development, producing exactly the race dynamic that makes international safety coordination necessary yet difficult to achieve.
One limitation is that export controls on advanced chips, among the most visible forms of coordination, prioritise competition over safety. The US has restricted exports of NVIDIA H100 and H200 GPUs to China and has separately coordinated with the Netherlands and Japan to limit China's access to semiconductor manufacturing equipment, fragmenting the global AI ecosystem. Experts argue that treating AI development as a national security competition while simultaneously trying to coordinate on safety thresholds is contradictory: countries cannot verify each other's safety commitments when they view each other as adversaries racing toward strategic advantage.
The result is a patchwork of cooperation that works well among allies but falters where it matters most: between the primary AI powers. Without mechanisms to verify safety claims across geopolitical divides, international coordination remains more aspirational than operational. The question is whether this fragmented approach can hold as AI capabilities continue to advance, or whether a crisis will be needed to force deeper cooperation.