Blog
US-China AGI Relations and Diplomatic Interventions (2025-2030)
Over the next 1-5 years, I expect that the U.S.-China AGI relations will escalate because of three dynamics: (1) accelerated development of frontier AI, (2) China may increase its technical independence from U.S. export controls by producing domestic chips, (3) as AI systems become more autonomous they will become more opaque and harder to control. The main risk in US-China relations is unintentional escalation...
Read More →Strengthening AI Chip Export Controls: Recommendations for Department of Commerce
I recommend BIS adopt two interventions to close enforcement gaps: (1) tiered export licensing that is based on verification requirements, and (2) an embedded technical assessment capability at NIST AISI. BIS export AI chips controls use three mechanisms: performance-based thresholds which restrict advanced logic chips and high-bandwidth memory...
Read More →Will Chinese Companies Import Over 100K NVIDIA B30A Chips by End of 2026?
Chinese authorities have recently signalled a disinterest in importing NVIDIA H20 chips, mainly on security grounds. The NVIDIA B30A would be a more powerful chip than the H20, and the US government may allow B30As to be exported to China. My prediction is that there's a 35-45% chance Chinese companies import >100K B30As by end of 2026...
Read More →Thermal Signature Monitoring: A Cost-Effective Approach to GPU Verification
How can the Bureau of Industry and Security (BIS) verify, with reasonably high confidence, that GPUs in a given data center have not been physically transported? For already-installed GPUs that cannot rely on technical location verification, I propose a cheap, scalable solution using thermal signature monitoring...
Read More →Two Critical Failure Modes of AI Agents in Export Control Monitoring
If BIS deployed AI agents to continuously monitor shipping manifests, customs declarations, corporate filings, and other documents to detect export control violations, the system could fail badly or create new problems in distinct ways. Failure 1: Adversarial Prompt Tampering. Adversaries could tamper with AI prompts by embedding adversarial content...
Read More →The Current State of International AI Safety Cooperation
Over the past 5 years, international cooperation on AI safety has expanded significantly, with several overlapping mechanisms that aimed to bring governments, corporations, and civil society organisations together on the same page about how to safely build and deploy AI. Despite the growing concern about risks from frontier AI...
Read More →A Quarterly Evaluation Sprint Program for Frontier AI Safety
Small, like-minded countries—the UK, Japan, Canada, and Singapore—could build trust and shared capacity on frontier AI safety through a modest but practical cooperative step. Rather than duplicating the UN or G7, I propose a quarterly evaluation sprint program. These four countries already have established AI safety institutes...
Read More →The Biggest Information Asymmetry in AI: Proprietary Capability Forecasting Models
I think the biggest information asymmetry in the AI debate is that capability forecasting models remain proprietary to labs, specifically, their internal predictions of when dangerous capabilities will emerge at scale. Other asymmetries exist, such as deployment data, safety research findings, or incident reports, but capability models shape predictions...
Read More →Three Strategies to Address AI Information Gaps
The core challenge is building institutional capacity to enforce and process the information. I'd prioritise 3 strategies: (1) Mandatory Disclosure - Labs above a certain compute threshold should submit capability models to regulators before major training runs. (2) Liability-Based Economic Pressure - Develop safe harbors where labs that share capability models...
Read More →AI-Enabled Power Concentration: Comparing Risks in Authoritarian Regimes and Democracies
Both authoritarian regimes and democracies face risks from AI-enabled power concentration, though these threats manifest in different ways. In authoritarian systems, AI doesn't just make transitions messier, but it also makes them more likely to be violent. After a change in leadership, either via a coup or some other method...
Read More →Five Urgent Open Problems in Frontier AI Risk Management
With frontier AI, capability jumps will likely be unpredictable and non-linear and often invisible until they cross some deployment threshold. So the detection lag needs to be shorter than capability emergence timescales, but right now it's inverted. We need metrics that can signal dangerous capability before it actually happens...
Read More →Lessons from High-Frequency Trading for AI Governance
I chose high-frequency trading as a case study. As algorithmic tools became more popular between 2000s - 2010s, the finance world adopted autonomous trading tools (algorithmic decision rules and statistical models) into their trading systems to make and cancel orders at microsecond to millisecond timescales...
Read More →Children and AI Chatbots: Weighing Concerns About Regulation
AI chatbots are now able to engage in natural, emotional conversations with people. They can also convincingly simulate the personalities of specific people and characters. It has been reported that growing numbers of children and teenagers are spending significant time interacting with these chatbots...
Read More →Who Evaluates the Evaluators? Frameworks for Accreditation in Frontier AI Governance
A core challenge for accreditation is verifying an assessor's "independence from undue pressures", especially from the well-funded and influential AI developer being assessed. I propose a two-tiered system for ensuring and auditing this independence. Tier 1 includes pooled funding, revenue diversification, rotating networks...
Read More →