28.65 million hardcoded secrets on public GitHub. Up 34% year-over-year. At this point the headline number is almost background noise — secrets sprawl has been a known problem for years, and the trend line has never reversed.
What's different this year is where the growth is coming from.
AI-service credentials are up 81%, reaching 1.275 million exposed secrets. That alone would be notable. But the detail that actually matters: LLM orchestration, RAG pipeline, and vector storage credentials are leaking five times faster than core model provider keys (OpenAI, Anthropic, etc.).
Let that sit for a second. The keys to the plumbing — not the model itself — are outpacing everything else.
The reason isn't mysterious. Teams are shipping agentic systems at speed, bolting together orchestrators, memory stores, and tool integrations under delivery pressure. Credential hygiene is a second-order concern when you're trying to make the demo work. So secrets end up in .env files, in config snippets, in commit histories that get pushed and never cleaned.
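That failure mode is mechanical enough to catch mechanically. As a minimal sketch of what secrets scanning does before a commit lands — the key patterns here are illustrative only; real scanners like ggshield, gitleaks, or trufflehog ship hundreds of tuned detectors — a hook can flag staged content that matches well-known credential shapes:

```python
import re

# Illustrative patterns only -- a real scanner maintains far more,
# with entropy checks and validity probes to cut false positives.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9_-]{20,}"),    # OpenAI-style key prefix
    re.compile(r"AKIA[0-9A-Z]{16}"),         # AWS access key ID
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][^'\"]{16,}['\"]"),  # generic assignment
]

def find_secrets(text: str) -> list[str]:
    """Return every substring that looks like a hardcoded credential."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(text))
    return hits

if __name__ == "__main__":
    staged = 'OPENAI_API_KEY = "sk-abc123def456ghi789jkl012"'
    print(find_secrets(staged))
```

The point of running this at commit time rather than in a periodic audit is exactly the compression problem above: the check has to live inside the idea-to-commit path, or the speed of that path routes around it.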
The report also flags something I've watched happen in practice: Claude Code–assisted commits show a 3.2% secret leak rate, versus a 1.5% baseline. AI coding assistants make the problem structurally worse. Not because they're careless — because they're fast. They compress the distance between "idea" and "committed code," and that compression doesn't leave room for the credential-hygiene reflex to kick in.
Here's why the orchestration key problem is categorically worse than a model API key leak.
Stealing a model API key costs you money. Someone runs tokens on your bill. Painful, but contained. Stealing an orchestration credential — a LangSmith project key, a vector DB token, a workflow engine secret — puts the agent's hands under someone else's control. What it calls. What it retrieves. What it writes back. You're not just reading the agent's outputs; you're directing its actions.
Eight of the ten fastest-growing detection categories in GitGuardian's dataset are AI services. The infrastructure layer of agentic systems is becoming the highest-value attack surface in the stack, and most teams haven't updated their threat model to reflect that.
The fix isn't exotic. Secrets scanning in CI, short-lived credentials, vault-based injection, scoped permissions. The basics. But the basics aren't happening — not at the rate the credentials are accumulating. That gap is the risk.
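Vault-based injection with short-lived credentials can be sketched in a few lines. This is an assumed pattern, not any particular vault's API: the real client (HashiCorp Vault, AWS Secrets Manager, etc.) is stubbed out as an environment lookup, standing in for whatever the vault agent injects into the process at deploy time:

```python
import os
import time

class ShortLivedSecret:
    """Fetch a credential at runtime and re-fetch after a TTL,
    instead of baking it into code or a committed .env file.
    The _fetch method reads process-injected env vars as a stand-in
    for a real vault client."""

    def __init__(self, name: str, ttl_seconds: float = 300.0):
        self.name = name
        self.ttl = ttl_seconds
        self._value = None
        self._fetched_at = 0.0

    def _fetch(self) -> str:
        # Injected by the vault agent at deploy time; never committed.
        value = os.environ.get(self.name)
        if value is None:
            raise RuntimeError(f"secret {self.name!r} not injected")
        return value

    def get(self) -> str:
        now = time.monotonic()
        if self._value is None or now - self._fetched_at > self.ttl:
            # Expired (or first use): fetch a fresh value.
            self._value = self._fetch()
            self._fetched_at = now
        return self._value
```

The value never touches disk or version control, and rotating the credential upstream takes effect within one TTL — which is what makes a leaked copy worth much less to an attacker.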
