Over the next five years, U.S.–China tensions over AGI are likely to escalate, driven by the accelerated development of frontier AI and China's growing technical independence from the U.S., including its emergence as a domestic producer of advanced chips.

As both nations continue to develop and deploy frontier systems in military and other applications as soon as they become available, unintentional escalation is a serious risk. In this environment, a technical failure could be misattributed as adversarial action and trigger a disproportionate response before either nation, or a third party, can investigate. Given the already tense political relations between the two countries, neutral third-party actors may lack the political space or technical access to build the mechanisms needed to distinguish an AI malfunction from a deliberate attack.

I propose two interventions to help both countries develop a compute-based incident reporting protocol. First, neutral third parties (ISO, IEEE, Switzerland, Singapore) can help the two nations navigate the political dynamics and develop shared benchmarks for high-risk AI capabilities across scenarios such as autonomous cyberattacks, biological design tools, strategic deception, and autonomous weapons integration. Second, the U.S. and China can use compute signatures to show whether a system underwent unusual training runs or modifications immediately before an incident, or was operating within normal parameters. Reporting would be tiered: immediate safety threats (AI-caused deaths or injuries, or models self-replicating across infrastructure) should be reported within 24 hours, while other incidents, such as those arising from large training runs (≥10²⁶ FLOPs) that cause $100M+ in harm, would be reported within 48 hours.
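To make the two reporting tiers concrete, here is a minimal sketch of the classification logic in Python. The thresholds are the ones proposed above; everything else (the `Incident` fields, function names, and the hash-based stand-in for a compute signature) is an illustrative assumption, not part of any existing standard or treaty mechanism.

```python
import hashlib
from dataclasses import dataclass
from typing import Optional

# Thresholds taken from the proposal above; names are illustrative.
LARGE_RUN_FLOPS = 1e26       # training-run size that triggers tier-2 reporting
HARM_THRESHOLD_USD = 100e6   # $100M+ in harm

@dataclass
class Incident:
    caused_casualties: bool    # AI-caused deaths or injuries
    self_replicating: bool     # model replicating across infrastructure
    training_run_flops: float  # size of the associated training run
    estimated_harm_usd: float  # damage estimate in USD

def reporting_deadline_hours(incident: Incident) -> Optional[int]:
    """Return the reporting window in hours, or None if no report is required."""
    # Tier 1: immediate safety threats -> 24-hour reporting.
    if incident.caused_casualties or incident.self_replicating:
        return 24
    # Tier 2: large training runs causing $100M+ in harm -> 48-hour reporting.
    if (incident.training_run_flops >= LARGE_RUN_FLOPS
            and incident.estimated_harm_usd >= HARM_THRESHOLD_USD):
        return 48
    return None

def compute_signature(checkpoint_bytes: bytes) -> str:
    """Hash a model checkpoint as a crude stand-in for a compute signature.

    A real scheme would attest to the full training history (hardware logs,
    FLOP counts, weight deltas); a content hash only shows whether a deployed
    artifact matches a previously declared baseline.
    """
    return hashlib.sha256(checkpoint_bytes).hexdigest()
```

For example, `reporting_deadline_hours(Incident(False, False, 2e26, 1.5e8))` returns 48, while an incident below both thresholds returns None and falls outside the protocol entirely; where exactly those lines are drawn is precisely what the third-party benchmarking process would need to settle.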

Since developing technical standards is easier than building the political will to adopt them and to agree on what counts as dangerous, this strategy should be framed as operational safety cooperation rather than arms control, and run as an 18-month pilot. To encourage adherence, tie the strategy to chip export controls: countries get access to advanced AI chips if they participate in the incident reporting system. Within that 18-month window, China will likely increase its domestic chip production and become more self-sufficient, so after the initial pilot, both nations will need to build other incentives before the chip leverage expires. For example, access to foreign markets for AI products could be tied to compliance, and access to joint safety research could encourage continued participation. Should the reporting system succeed, both nations should consider transitioning to a norm-based regime: incident reporting should become an international standard (like aviation safety reporting), backed by technical infrastructure that makes non-participation costly. The idea is to start the pilot now, while chip leverage exists, and build the institutional infrastructure and other incentives in parallel so that the system survives China's chip independence.