I recommend BIS adopt two interventions to close enforcement gaps: (1) tiered export licensing based on verification requirements, and (2) an embedded technical assessment capability at NIST's AI Safety Institute (AISI).

Current Controls: Intent and Limitations

BIS export controls on AI chips use three mechanisms: performance-based thresholds, which restrict advanced logic chips and high-bandwidth memory; end-use controls, which target supercomputer and military applications; and expanded Foreign Direct Product Rule coverage of semiconductor manufacturing equipment. Since the "AI Diffusion Rule" was rescinded in May 2025, controls have focused on maintaining allied access while imposing targeted restrictions on China.
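For concreteness, the performance-threshold mechanism amounts to a classification check over a chip's computed metrics. The sketch below uses the Total Processing Performance (TPP) construct from ECCN 3A090, where TPP aggregates 2 × MacTOPS × bit length and performance density divides TPP by die area; the threshold values reflect my reading of the October 2023 rule, and the chip figures are illustrative assumptions, not official specifications.

```python
# Sketch of a 3A090.a-style threshold check. Threshold values and chip
# figures are assumptions for illustration, not authoritative parameters.
from dataclasses import dataclass

TPP_THRESHOLD = 4800           # controlled if TPP meets or exceeds this...
TPP_DENSITY_THRESHOLD = 1600   # ...or if TPP >= 1600 with high density
PERF_DENSITY_THRESHOLD = 5.92  # performance density cutoff (TPP per mm^2)

@dataclass
class Chip:
    name: str
    mac_tops: float      # peak multiply-accumulate throughput, in TOPS
    bit_length: int      # operand bit width at that peak rate
    die_area_mm2: float

    @property
    def tpp(self) -> float:
        # A multiply-accumulate counts as two operations, hence the factor of 2.
        return 2 * self.mac_tops * self.bit_length

    @property
    def performance_density(self) -> float:
        return self.tpp / self.die_area_mm2

def is_controlled(chip: Chip) -> bool:
    """Return True if the chip trips either threshold test."""
    return chip.tpp >= TPP_THRESHOLD or (
        chip.tpp >= TPP_DENSITY_THRESHOLD
        and chip.performance_density >= PERF_DENSITY_THRESHOLD
    )

# Hypothetical accelerator with invented figures:
example = Chip(name="hypothetical-gpu", mac_tops=750, bit_length=8, die_area_mm2=814)
print(example.tpp, is_controlled(example))  # 12000.0 True
```

The point of the sketch is that the control turns entirely on a few static chip parameters, which is what makes it brittle against architectural shifts discussed below.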

These controls face three critical limitations. First, BIS cannot effectively track chip location or deployment after export; China has reportedly acquired tens of thousands of H100-class chips through indirect channels despite the controls. Second, current thresholds target chips optimized for training workloads, while AI development is shifting toward inference-heavy architectures. Third, control updates take 6-12 months to implement once a gap is identified.

Recommendation 1: Tiered Export Licensing Based on Verification Requirements

This approach would restructure licensing around end users' willingness to accept verification. Tier 1 would provide expedited licensing for advanced chips (e.g., NVIDIA's B200) to end users who accept quarterly aggregate compute usage reporting, remote chip inventory verification, and network isolation. Tier 2 would provide standard licensing for current-generation chips to end users who accept on-site inspections. Tier 3 would apply a presumption of denial to end users who refuse verification.
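The tier logic can be expressed as a simple decision rule over the verification measures an end user accepts. The sketch below is a minimal illustration of that structure; the measure names and the mapping mirror the proposal above, but the code is a hypothetical rendering, not a BIS adjudication procedure.

```python
# Sketch: mapping accepted verification measures to a license tier.
# The measures and decision rule mirror the tiered proposal; all names
# here are illustrative, not drawn from any existing BIS process.
from enum import Enum

class Measure(Enum):
    USAGE_REPORTING = "quarterly aggregate compute usage reporting"
    REMOTE_INVENTORY = "remote chip inventory verification"
    NETWORK_ISOLATION = "network isolation"
    ONSITE_INSPECTION = "on-site inspections"

TIER1_REQUIRED = {
    Measure.USAGE_REPORTING,
    Measure.REMOTE_INVENTORY,
    Measure.NETWORK_ISOLATION,
}
TIER2_REQUIRED = {Measure.ONSITE_INSPECTION}

def license_tier(accepted: set) -> str:
    """Return the licensing outcome for the set of measures an end user accepts."""
    if TIER1_REQUIRED <= accepted:
        return "Tier 1: expedited licensing for advanced chips"
    if TIER2_REQUIRED <= accepted:
        return "Tier 2: standard licensing for current-generation chips"
    return "Tier 3: presumptive denial"

print(license_tier({Measure.ONSITE_INSPECTION}))  # Tier 2
print(license_tier(set()))                        # Tier 3
```

The design choice worth noting is that access scales monotonically with accepted verification: an end user can always move up a tier by accepting more measures, which aligns commercial incentives with BIS's visibility needs.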

Recommendation 2: Establish Rapid Technical Assessment Capability

BIS lacks the embedded technical expertise to anticipate how AI architectural developments create control vulnerabilities before adversaries exploit them. I propose establishing a small technical team (5-7 staff) within AISI to serve as a "regulatory red team." By continuously stress-testing controls against emerging AI developments, this team could reduce the current 6-12 month implementation lag to weeks.