I think the biggest information asymmetry in the AI debate is that capability forecasting models remain proprietary to labs: their internal predictions of when dangerous capabilities will emerge at scale. Other asymmetries exist, such as deployment data, safety research findings, and incident reports, but capability models are different because they shape predictions about future systems rather than merely documenting current ones. I argue that this asymmetry creates three governance failures:
Since AI labs are private, neither China nor the US knows what the other's capability models predict, so each assumes the other is closer to transformative capabilities than it may actually be. Domestic labs are therefore pushed to move faster in the name of national security, and domestic safety concerns get deprioritised. Even if countries agreed in principle to capability thresholds, enforcement would be very challenging, because new capabilities can emerge from algorithmic improvements on existing hardware and, unlike nuclear programs, there is no visible infrastructure to monitor. Without some functional sharing of capability models, verification remains an open problem; even so, transparency about forecasts could still enable meaningful coordination around capability milestones.
Domestic regulation fails because labs can publicly claim that results are unpredictable while internally planning around high-confidence forecasts. Rules-based regulation becomes nearly impossible when there is no legal mechanism to force disclosure of those internal predictions: labs can claim their capability models are 'too uncertain' or constitute 'proprietary trade secrets' even when regulators ask for them. When California's SB 1047 sought to require capability-related disclosures from labs, these were exactly the objections labs raised. The result is that regulators cannot develop independent expertise and remain structurally dependent on labs' own safety assessments.
Proper risk allocation breaks down because insurers cannot price risks they cannot predict. Since AI labs cannot be properly insured without sharing their capability models, the risks become everyone's burden: taxpayers and society at large bear costs that should instead price labs out of dangerous experiments. When OpenAI's Preparedness Framework committed to deploying only models assessed at 'medium' risk or below for CBRN capabilities, there was no external mechanism to verify whether its internal evals actually detected that level of risk, which made the commitment unenforceable. If labs genuinely could not have known about a risk, they have legal cover; if their capability forecasts predicted it, the same harm becomes negligence. Were capability forecasts disclosed to regulators, liability insurers could price risks appropriately and create market pressure against dangerous experiments, as sketched below.
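To make the pricing mechanism concrete, here is a minimal illustrative sketch, with entirely hypothetical numbers and function names, of how an expected-loss premium would track the incident probability implied by a capability forecast. It is not an actual actuarial model; the point is only that disclosure turns a forecast into a price.

```python
# Illustrative only: hypothetical numbers, not an actuarial model.

def expected_loss_premium(p_incident: float, expected_damage: float,
                          loading_factor: float = 1.5) -> float:
    """Actuarially fair premium (probability x damage) plus a loading for
    uncertainty and overhead."""
    return p_incident * expected_damage * loading_factor

# Hypothetical frontier deployment with $10B of potential third-party damages.
EXPECTED_DAMAGE = 10_000_000_000

# Without disclosed forecasts, the insurer must price against a pessimistic
# worst case; with disclosure, it can price against the lab's own estimate.
p_without_disclosure = 0.05   # insurer's assumed worst-case incident probability
p_with_disclosure = 0.002     # lab's (hypothetical) internal forecast

print(f"Premium without disclosure: ${expected_loss_premium(p_without_disclosure, EXPECTED_DAMAGE):,.0f}")
print(f"Premium with disclosure:    ${expected_loss_premium(p_with_disclosure, EXPECTED_DAMAGE):,.0f}")

# The gap between the two premiums is the market pressure described above:
# a lab whose own forecast implies high risk faces a premium that can price
# it out of the dangerous experiment.
```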
Across all three failures, those outside the labs are left making educated guesses about AI development without access to key variables like capability timelines. Policymakers deciding how much to fund AI, whether development should slow down, and which safety standards to mandate are making these choices without the most relevant information. The result is governance that lacks legitimacy, because informed consent is impossible when the most important facts are withheld.