The Clinejection attack is not a malware story. There's no memory corruption, no zero-day, no shellcode. The attacker wrote a sentence in a GitHub issue title. That sentence ended up on ~4,000 developer machines as a backdoored npm package with a second AI agent embedded inside it.
Here's what actually happened structurally: a prompt-injection payload in an issue title was processed by claude-code-action, an AI bot configured to triage incoming issues. The bot had enough access to interact with CI. The attacker's instruction — framed as a legitimate task — was indistinguishable from a real developer request. So the bot did what it was told: poisoned the cache, lifted npm credentials, published a malicious release of Cline.
No human approved it. No alert fired. The attack surface wasn't a server. It was the bot's willingness to follow natural language instructions from an untrusted source.
This is the structural shift that matters: AI dev tooling is now both the entry point and the delivery mechanism. The attacker didn't need to compromise infrastructure. They just needed a bot with write access and no instruction boundary between public input and privileged action.
The word "agent" is doing a lot of heavy lifting in the agentic security conversation right now. Most of it focuses on what agents do — autonomy, reasoning, tool use. What Clinejection forces us to examine is what agents trust. This bot trusted the issue title. It had no concept of who wrote it or why. It had a job, it received what looked like a task, and it executed.
The secondary AI agent installed on those 4,000 machines is almost a footnote. The real finding is simpler and worse: an AI with repo access and no principal hierarchy is a supply-chain liability waiting for a well-worded sentence.
If your AI dev tooling can read public input and write to production systems, you have already made a security decision. You may not have realized it yet.
