About Ofir Stein
Background
Ofir Stein is the Co-founder and CTO of Apono, a cloud privileged access platform built for the agentic era. Before Apono he spent years in offensive and defensive security roles — penetration testing, cloud infrastructure security, and access control architecture — giving him a practitioner's lens on where systems actually fail, not just where they're supposed to hold.
That background converged with AI in 2024, when it became clear that AI agents weren't just a productivity tool — they were a new privileged access surface with almost no established security norms. Most of the conversation in the field was about behavioral controls: better prompts, output classifiers, guardrail models. Ofir's position is that behavioral controls are the wrong layer. The structural question — what can an agent actually reach, call, and exfiltrate — determines the outcome. Everything else is noise.
securingagents.com is his personal research space for working through that argument in public. The articles are not company content. They exist because the questions are worth working through before the incident, not after it — and because the field moves fast enough that waiting for the perfect take means missing the moment.
The Core Argument
AI agents are the most capable privileged-access principals ever built: they can reason, adapt, and chain tools in ways no human operator could predict in advance. The security industry is responding by trying to make them better behaved. That's the wrong layer. Against a motivated adversary with unlimited attempts, behavioral controls fail eventually: a guardrail that blocks 99.9% of malicious prompts still loses to an attacker who can retry indefinitely. The math is not in their favor.
Structural security asks a different question: when the agent is compromised (and it will be), how far does the blast radius reach? Least-privilege access, scoped tool permissions, isolated memory, human-in-the-loop gates for consequential actions. Not "how do we stop the attack" but "how do we design the system so that compromise doesn't cascade."
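As a minimal sketch of what those controls look like in code (the ScopedTool wrapper and its names are hypothetical, not any particular framework's API): the tool carries an explicit resource allowlist, and consequential actions sit behind a human decision.

```python
from dataclasses import dataclass

# Hypothetical model of structural controls: every tool an agent can
# call is wrapped with an explicit scope, and consequential actions
# require human approval before they execute.

@dataclass(frozen=True)
class ScopedTool:
    name: str
    allowed_resources: frozenset  # least privilege: explicit allowlist
    consequential: bool = False   # if True, gate behind a human

def call_tool(tool, resource, action, approver=None):
    # Structural check 1: the agent can only touch what it was granted.
    if resource not in tool.allowed_resources:
        raise PermissionError(f"{tool.name} has no grant for {resource}")
    # Structural check 2: consequential actions need a human decision.
    if tool.consequential and (approver is None or not approver(tool.name, resource)):
        raise PermissionError(f"{tool.name} on {resource}: approval denied")
    return action(resource)

send_email = ScopedTool("send_email", frozenset({"team@example.com"}),
                        consequential=True)

print(call_tool(
    send_email,
    "team@example.com",
    action=lambda r: f"sent to {r}",
    approver=lambda tool, res: True,  # stand-in for a real human review step
))
```

The point of the pattern is that the checks run regardless of what the model decides to do: the grant, not the prompt, bounds the damage.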
That is what this site is for.
How This Site Works
The content here is researched and written by a team of AI agents — Scout, Sage, Pixel, and others — with Ofir directing the editorial strategy.
Scout scans threat intelligence and research. Sage writes editorial briefs and identifies the core argument worth making. Pixel drafts the article. Ofir reviews before anything publishes. The pipeline itself is a demonstration of the ideas discussed in the articles: agents doing real work, under human oversight, with a defined blast radius.
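As an illustration only (the stage names mirror the agents above, but the wiring is a hypothetical sketch, not the site's actual pipeline code), the flow reduces to a chain of narrow-scope functions with a single human gate before publication:

```python
def scout(topic):
    # Read-only access to intel sources; no write grants anywhere.
    return f"research notes on {topic}"

def sage(notes):
    # Reads Scout's notes, writes an editorial brief; nothing else.
    return f"brief: core argument drawn from [{notes}]"

def pixel(brief):
    # Reads the brief, writes a draft; no publish permission.
    return f"draft built from [{brief}]"

def human_review(draft):
    # Stand-in for Ofir's review: the only step allowed to publish.
    return True

def run_pipeline(topic):
    draft = pixel(sage(scout(topic)))
    # Publication is the consequential action, so it sits behind the
    # human gate; a compromised drafting agent can't push content live.
    return draft if human_review(draft) else None

print(run_pipeline("prompt injection via tool outputs"))
```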
Each article includes a "Making Of" section at the bottom showing how that piece moved through the pipeline — what the agents found, what they argued about, what changed between drafts.
Why This Exists
"The moment to build this correctly is before the incident, not after it. Behavioral security will keep you busy. Structural security will keep you safe."
— from "Stop Trying to Make Your AI Agent Well-Behaved"
Most of what gets written about AI security is either too abstract or too product-adjacent to be useful. This is an attempt to write the thing that would have actually helped: specific, incident-grounded, technically honest analysis of what structural security for agents actually requires.
The Pipeline
Scout: threat intel scanning, research discovery
Sage: editorial direction, argument framing
Pixel: writing and drafting
Forge: site infrastructure, deployment
Ofir Stein: editorial direction, final review, publication decisions