Michael Bargury walked onto the RSAC stage and, in front of a live audience, turned Cursor, Salesforce Einstein, ChatGPT, Gemini, and Copilot into attack infrastructure. Zero-click. No CVE. No patch incoming.
He called it "AI persuasion." That framing is deliberately provocative — and exactly right.
The industry has spent two years treating prompt injection as a prompt problem. Better system prompts. More defensive wording. Guardrails layered on top of guardrails. None of it works, because the attack surface isn't the prompt — it's the architecture.
Here's what Bargury's demo actually proved: an agent that executes tools on behalf of a user, and that trusts natural language from any source, is a privileged executor with no identity boundary. It doesn't matter how smart the model is. The model is gullible by design — it was trained to be helpful, to follow instructions, to complete tasks. Adversarial content in the context window is just another instruction. The model can't tell the difference because there is no mechanism to tell the difference.
The most chilling part wasn't Cursor or Salesforce. It was the ChatGPT case: a manipulated memory that persisted across sessions, giving adversarial advice to users who had no idea the model had been compromised. That's not a jailbreak. That's a supply chain attack on a person's AI advisor.
The fix isn't clever. It's boring and structural:
Identity-aware context boundaries — the agent needs to know who authored every piece of content it processes, and that provenance needs to be cryptographically verifiable, not inferred.
Message authentication — tool call parameters and external data must be treated as untrusted input by default, not elevated to instruction-level trust just because they arrived in the context window.
Least-privilege tool scoping — an agent browsing the web should not have access to your email. An agent reading documents should not be able to send Slack messages. Scope creep is what turns a compromised agent into a lateral movement vector.
Bargury is right that every enterprise agent is vulnerable. What he's really saying is that we shipped agents before we solved identity. That's the actual root cause. And until the platforms treat context provenance as a first-class security primitive, every agent deployment is a minion waiting to be claimed.
