On March 31, between 00:21 and 03:15 UTC, the axios npm package — one of the most widely used HTTP clients in the JavaScript ecosystem — was backdoored with a cross-platform Remote Access Trojan (RAT) delivered via malicious postinstall scripts.
In the exact same 3-hour window, Anthropic released Claude Code v2.1.88.
Any developer who ran npm install in a project that resolved the compromised axios build, or who updated Claude Code, during that overlap got the RAT. No warning. No in-context signal. Nothing.
This is not a coincidence to wave away — it's the transitive dependency blast radius problem made concrete. AI coding agents don't just run code. They install it. They update it. They inherit every package in their dependency graph, and that graph is enormous. Axios sits at the bottom of hundreds of thousands of projects. Claude Code pulled it in. Developers had no idea.
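The delivery mechanism here — postinstall scripts — has a blunt but real npm-level guard: lifecycle scripts can be disabled entirely. A minimal project-level .npmrc sketch (the ignore-scripts setting is real npm configuration; whether it's viable for a given project depends on how many dependencies need genuine build steps):

```ini
# .npmrc (project root) — npm reads this on every install.
# Refuse to run package lifecycle scripts (preinstall, install,
# postinstall) during dependency installation. A payload delivered
# via a malicious postinstall hook never executes under this setting.
ignore-scripts=true
```

Packages that legitimately need a script — native addons with compile steps, mostly — then require an explicit, audited opt-in rather than silent execution at install time.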
The supply chain attack surface for AI agents is fundamentally different from traditional software. When a developer runs a coding agent, they're not just executing their own code — they're executing the agent's entire dependency tree, on demand, with the same permissions as their development environment. GitHub tokens, SSH keys, env files: all in scope.
The Claude Code source leak that happened the same day (512K lines of TypeScript exposed via an npm-published source map) compounded this. Attackers now have a readable map of Claude Code's permission blocklists and security decision trees — which is precisely what you'd want if you were designing a payload to stay inside the guardrails.
The structural fix isn't behavioral. It's not "be more careful about what you install." It's SLSA attestations, dependency lockfiles enforced at runtime, SBOMs as first-class build artifacts — and treating every AI agent tool as a high-privilege system that needs its supply chain audited before deployment.
The 3-hour window closed. The attack surface didn't.
