If BIS deployed AI agents to continuously monitor shipping manifests, customs declarations, corporate filings, and other documents to detect export control violations, the system could fail badly or create new problems in distinct ways.

Failure 1: Adversarial Prompt Tampering

Adversaries could tamper with AI prompts by embedding adversarial content that causes AI to misclassify. For example, shell companies could include specific linguistic patterns, formatting quirks, or metadata signatures that the agent learns to associate with 'legitimate' transactions during training. If agents autonomously investigate and clear transactions, bad actors could inject prompts in shipping documents or customs forms that cause agents to misinterpret their instructions. This matters because it creates an adversarial arms race where adversaries constantly look for weaknesses in the AI. Some ways to mitigate this are to constantly red-team agents with synthetic evasion attempts before they're deployed; if an agent encounters document patterns outside their training distribution confidence intervals, require mandatory human review.

Failure 2: Diffused Accountability

If BIS becomes structurally dependent on AI and doesn't have human expertise to validate agent outputs and doesn't have technical capacity to step in for the AI, BIS will lose its ability to audit agent performance or recognize systematic failures. When AI agents make mistakes, it becomes unclear who is accountable – BIS or the AI developer. If an agent autonomously clears a transaction that later proves to be a major violation (like the TSMC-Huawei case), who is responsible? This matters: if BIS can't evaluate whether agents are correctly identifying violations, then it risks institutionalising unverified automation. Some ways to mitigate this would be to have AI produce human-readable audit trails showing reasoning chains for every decision or have humans approve any transaction >$10M or involving entities on watchlists.