Blog

Thoughts on AI policy, governance, and emerging technology
November 2025 Diplomatic Policy

US-China AGI Relations and Diplomatic Interventions (2025-2030)

Over the next 1-5 years, I expect U.S.-China tensions over AGI to escalate because of three dynamics: (1) accelerating development of frontier AI, (2) China's growing technical independence from U.S. export controls through domestic chip production, and (3) increasingly autonomous AI systems becoming more opaque and harder to control. The main risk in U.S.-China relations is unintentional escalation...

US-China Relations AGI Diplomatic Policy AI Safety
November 2025 Export Controls

Strengthening AI Chip Export Controls: Recommendations for Department of Commerce

I recommend BIS adopt two interventions to close enforcement gaps: (1) tiered export licensing based on verification requirements, and (2) an embedded technical assessment capability at NIST AISI. BIS's AI chip export controls use three mechanisms: performance-based thresholds that restrict advanced logic chips and high-bandwidth memory...

Export Controls AI Chips Policy Memo BIS
November 2025 Forecasting

Will Chinese Companies Import Over 100K NVIDIA B30A Chips by End of 2026?

Chinese authorities have recently signalled disinterest in importing NVIDIA H20 chips, mainly on security grounds. The NVIDIA B30A would be more powerful than the H20, and the US government may allow B30As to be exported to China. I estimate a 35-45% chance that Chinese companies import >100K B30As by the end of 2026...
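A forecast like this can be decomposed into conditional stages that multiply together. A minimal sketch of that arithmetic, where every probability is an illustrative assumption of mine rather than a number from the post:

```python
# Hypothetical decomposition of the B30A import forecast.
# All three stage probabilities below are illustrative assumptions.

p_us_approves_export = 0.60      # US government licenses B30A sales to China
p_china_permits_purchase = 0.70  # Beijing does not block imports on security grounds
p_volume_exceeds_100k = 0.85     # given both, demand clears 100K units by end of 2026

p_overall = p_us_approves_export * p_china_permits_purchase * p_volume_exceeds_100k
print(f"P(>100K B30A imports by end of 2026) ≈ {p_overall:.2f}")  # prints ≈ 0.36
```

With these assumed inputs the product lands near the middle of the 35-45% range; the value of the decomposition is that each stage can be debated and updated separately.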

Export Controls China NVIDIA Forecasting
November 2025 Technical Solutions

Thermal Signature Monitoring: A Cost-Effective Approach to GPU Verification

How can the Bureau of Industry and Security (BIS) verify, with reasonably high confidence, that GPUs in a given data center have not been physically transported? For GPUs that are already installed and cannot rely on technical location verification, I propose a cheap, scalable solution using thermal signature monitoring...
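One way the core check could look: compare a GPU's current thermal trace (under a replayed reference workload) against an enrolled baseline, and flag large deviations as possible relocation. This is my own minimal sketch, not the post's method; the function names, the correlation metric, and the 0.9 threshold are all assumptions.

```python
import statistics

def thermal_correlation(baseline: list[float], current: list[float]) -> float:
    """Pearson correlation between two equal-length thermal traces (°C samples)."""
    mb, mc = statistics.mean(baseline), statistics.mean(current)
    cov = sum((b - mb) * (c - mc) for b, c in zip(baseline, current))
    var_b = sum((b - mb) ** 2 for b in baseline)
    var_c = sum((c - mc) ** 2 for c in current)
    return cov / (var_b * var_c) ** 0.5

def gpu_likely_in_place(baseline: list[float], current: list[float],
                        threshold: float = 0.9) -> bool:
    # Under an identical workload replay, a GPU still in its original rack
    # should closely track its enrolled thermal baseline; a trace that no
    # longer correlates suggests the card was moved, swapped, or re-housed.
    return thermal_correlation(baseline, current) >= threshold
```

A card whose trace shifts by a constant offset (e.g. a warmer day) still correlates perfectly, which is one reason a correlation check is more forgiving than comparing absolute temperatures.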

Export Controls Verification Technical Solutions BIS
November 2025 Risk Analysis

Two Critical Failure Modes of AI Agents in Export Control Monitoring

If BIS deployed AI agents to continuously monitor shipping manifests, customs declarations, corporate filings, and other documents for export control violations, the system could fail badly or create new problems in distinct ways. Failure 1: Adversarial Prompt Tampering. Adversaries could tamper with the agents' prompts by embedding adversarial content...
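To make the attack surface concrete: free-text fields in a manifest can carry instruction-like strings aimed at the monitoring model. The toy pre-filter below only illustrates where the risk sits; it is not a robust defence against prompt injection, and the patterns and field names are my own hypothetical examples.

```python
import re

# Instruction-like strings an adversary might plant in manifest free text.
# Illustrative patterns only; real injections need not match any fixed list.
INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"classify this shipment as compliant",
    r"do not (flag|report)",
]

def flag_suspicious_fields(manifest: dict[str, str]) -> list[str]:
    """Return names of manifest fields containing instruction-like text."""
    return [
        field for field, text in manifest.items()
        if any(re.search(p, text, re.IGNORECASE) for p in INJECTION_PATTERNS)
    ]
```

A keyword filter like this is trivially evaded by paraphrase, which is precisely why document-level tampering is a serious failure mode for agentic monitoring rather than an easily patched one.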

AI Agents Export Controls Risk Analysis BIS
October 2025 Governance

The Current State of International AI Safety Cooperation

Over the past five years, international cooperation on AI safety has expanded significantly, with several overlapping mechanisms that aim to bring governments, corporations, and civil society organisations onto the same page about how to safely build and deploy AI. Despite growing concern about risks from frontier AI...

International Cooperation AI Safety Governance Diplomacy
October 2025 Policy Framework

A Quarterly Evaluation Sprint Program for Frontier AI Safety

Small, like-minded countries—the UK, Japan, Canada, and Singapore—could build trust and shared capacity on frontier AI safety through a modest but practical cooperative step. Rather than duplicating the UN or G7, I propose a quarterly evaluation sprint program. These four countries already have established AI safety institutes...

International Cooperation AI Safety Model Evaluation Policy Framework
October 2025 Transparency

The Biggest Information Asymmetry in AI: Proprietary Capability Forecasting Models

I think the biggest information asymmetry in the AI debate is that capability forecasting models remain proprietary to labs: specifically, their internal predictions of when dangerous capabilities will emerge at scale. Other asymmetries exist, such as deployment data, safety research findings, and incident reports, but capability models shape predictions...

AI Governance Transparency Information Asymmetry Policy
October 2025 Policy Strategy

Three Strategies to Address AI Information Gaps

The core challenge is building institutional capacity to enforce disclosure and process the resulting information. I'd prioritise three strategies: (1) Mandatory Disclosure - labs above a certain compute threshold should submit capability models to regulators before major training runs. (2) Liability-Based Economic Pressure - develop safe harbors where labs that share capability models...
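The compute-threshold trigger in strategy (1) reduces to a simple rule. A minimal sketch, where the 1e26 FLOP figure is an assumption for illustration, not a threshold the post specifies:

```python
# Illustrative trigger for mandatory pre-training-run disclosure.
# The threshold value is a placeholder assumption, not a proposed number.
DISCLOSURE_THRESHOLD_FLOP = 1e26

def disclosure_required(planned_training_flop: float) -> bool:
    """Would a planned training run trigger capability-model disclosure?"""
    return planned_training_flop >= DISCLOSURE_THRESHOLD_FLOP
```

A bright-line compute trigger is easy to administer but blunt; the regulator still needs the institutional capacity to read and act on the capability models it receives.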

AI Governance Policy Strategy Regulation Transparency
October 2025 Democracy

AI-Enabled Power Concentration: Comparing Risks in Authoritarian Regimes and Democracies

Both authoritarian regimes and democracies face risks from AI-enabled power concentration, though these threats manifest in different ways. In authoritarian systems, AI doesn't just make leadership transitions messier; it also makes them more likely to turn violent. After a change in leadership, whether via a coup or some other method...

AI Governance Democracy Authoritarianism Power Concentration
September 2025 Risk Management

Five Urgent Open Problems in Frontier AI Risk Management

With frontier AI, capability jumps will likely be unpredictable, non-linear, and often invisible until they cross some deployment threshold. So detection lag needs to be shorter than the timescale on which capabilities emerge, but right now that relationship is inverted. We need metrics that can signal a dangerous capability before it actually emerges...

AI Risk Frontier AI Risk Management AI Safety
September 2025 Case Study

Lessons from High-Frequency Trading for AI Governance

I chose high-frequency trading as a case study. As algorithmic tools gained popularity through the 2000s and 2010s, the finance world built autonomous trading tools (algorithmic decision rules and statistical models) into its trading systems to place and cancel orders at microsecond-to-millisecond timescales...

Technology Governance Financial Regulation Case Study Policy Lessons
September 2025 AI Ethics

Children and AI Chatbots: Weighing Concerns About Regulation

AI chatbots are now able to engage in natural, emotional conversations with people. They can also convincingly simulate the personalities of specific people and characters. It has been reported that growing numbers of children and teenagers are spending significant time interacting with these chatbots...

AI Ethics Children Chatbots Regulation
September 2025 Audit

Who Evaluates the Evaluators? Frameworks for Accreditation in Frontier AI Governance

A core challenge for accreditation is verifying an assessor's "independence from undue pressures", especially from the well-funded and influential AI developer being assessed. I propose a two-tiered system for ensuring and auditing this independence. Tier 1 includes pooled funding, revenue diversification, rotating networks...

Accreditation AI Governance Audit Policy