On May 6, 2010, the U.S. stock market lost nearly $1 trillion in value in minutes before partially recovering, a plunge driven largely by automated trading algorithms. Human oversight and market-wide circuit breakers existed at the time, but neither could react fast enough to stop automated systems that were executing thousands of orders per second.

As algorithmic tools gained popularity through the 2000s, the finance world kept pace by adopting autonomous trading systems that could place and cancel orders automatically, operating at microsecond-to-millisecond timescales. By the time a human noticed a problem and responded, an automated system could already have executed millions of trades. Algorithmic decision rules and statistical models exploited small price discrepancies faster than any person could react. Regulators had little visibility into this activity because the algorithms were proprietary and could not easily be audited. Existing safeguards and regulatory frameworks assumed a human in the loop who could trigger manual interventions, so there were no automated safety mechanisms for decisions made at microsecond timescales, and regulators could not intervene in real time.

After the crash, regulators and exchanges adopted circuit breakers and other market controls that automatically pause trading when prices move too rapidly. They also began sharing data and monitoring markets together in real time instead of working separately.
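The core idea of such a circuit breaker can be sketched in a few lines: halt trading when the latest price deviates from a rolling reference price by more than a set percentage within a time window. The class name, the 5% band, and the 5-minute window below are illustrative assumptions for the example, not the SEC's actual Limit Up-Limit Down parameters.

```python
from collections import deque
import time


class CircuitBreaker:
    """Illustrative single-stock circuit breaker (parameters are
    examples, not real regulatory thresholds)."""

    def __init__(self, pct_band=0.05, window_s=300.0):
        self.pct_band = pct_band   # allowed deviation, e.g. 5%
        self.window_s = window_s   # rolling window in seconds
        self.prices = deque()      # (timestamp, price) pairs
        self.halted = False

    def on_trade(self, price, now=None):
        """Record a trade; return True if trading should be halted."""
        now = time.time() if now is None else now
        # Drop prices that have aged out of the rolling window.
        while self.prices and now - self.prices[0][0] > self.window_s:
            self.prices.popleft()
        if self.prices:
            reference = sum(p for _, p in self.prices) / len(self.prices)
            if abs(price - reference) / reference > self.pct_band:
                # Pause trading so humans and the exchange can review.
                self.halted = True
        self.prices.append((now, price))
        return self.halted
```

In use, a quiet market passes through unchanged, while a sudden 10% move trips the halt:

```python
cb = CircuitBreaker()
cb.on_trade(100.0, now=0.0)  # first trade sets the reference
cb.on_trade(101.0, now=1.0)  # within the band, no halt
cb.on_trade(90.0, now=2.0)   # ~10% drop from reference, halts
```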

Advanced AI is outpacing regulation in a similar way. The 2010 crash showed that systems operating faster than human oversight need automated safety mechanisms, yet AI regulation lags behind and risks being similarly reactive, waiting for a major crisis to trigger serious rules. Advanced AI may require its own 'circuit breakers': capability registries, mandatory audit trails, shared telemetry, automated oversight systems, or transnational safety boards able to intervene at machine speed before harms occur.