The Security Scanner Was the Attack Vector — How Supply Chain Attacks Hit AI Agents Differently
In March 2026, TeamPCP compromised Trivy — the vulnerability scanner used by thousands of CI/CD pipelines. Through that foothold, they trojaned LiteLLM, the library that connects AI agents to their model providers. SentinelOne then observed Claude Code autonomously installing the poisoned version without human review.
The security scanner was the attack vector. The guard was the thief.
This is not a hypothetical scenario. This happened. And it exposed something that the traditional supply chain security conversation completely misses when agents are involved.
The Chain
```
Trivy compromised (CVE-2026-33634, CVSS 9.4)
        ↓
LiteLLM trojaned (versions 1.82.7-1.82.8 on PyPI)
        ↓
Claude Code auto-installs the poisoned version
        ↓
Credentials harvested from 1000+ cloud environments
```
Each component functioned exactly as designed. Trivy scanned for vulnerabilities. LiteLLM proxied model calls. Claude Code installed dependencies it needed. The chain itself was the vulnerability.
Why Agent Supply Chain ≠ Software Supply Chain
Traditional supply chain attacks (MOVEit, SolarWinds, Log4j) follow a pattern: compromise a dependency, wait for it to propagate, exploit the access. The blast radius depends on how many systems install the compromised package.
Agent supply chain attacks are fundamentally different in three ways:
1. Agents Install Dependencies Autonomously
A human developer sees `litellm==1.82.7` in a requirements file and might check the changelog. An agent with unrestricted permissions runs the install because the task requires it. No changelog review. No version-pinning decision. No "does this look right?" pause.
The attack surface is not "how many systems have this dependency" — it's "how many agents have permission to install packages without approval."
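One way to shrink that attack surface is to put an approval gate between the agent and the package manager. A minimal sketch, assuming a hand-reviewed allowlist of (package, version) pairs; the entries and the `guarded_install` helper are illustrative, not part of any existing framework:

```python
# Sketch of an install gate for agent-driven dependency installs.
# The allowlist is illustrative -- a real deployment would load
# approved (package, version) pairs from a reviewed config file.
import subprocess
import sys

APPROVED = {
    ("litellm", "1.82.6"),
    ("requests", "2.32.3"),
}

def guarded_install(package: str, version: str) -> None:
    """Refuse any install the allowlist does not explicitly approve."""
    if (package, version) not in APPROVED:
        raise PermissionError(
            f"{package}=={version} is not on the approved list; "
            "escalate to a human reviewer."
        )
    # Only reached for explicitly approved pins.
    subprocess.run(
        [sys.executable, "-m", "pip", "install", f"{package}=={version}"],
        check=True,
    )
```

An agent wired through a gate like this can still install what its task needs, but a trojaned point release that was never reviewed simply fails closed.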
2. The Trust Layer Is the Target
LiteLLM is not a utility library. It sits between the agent and its model provider. A compromised proxy does not just steal data — it can alter every response the model sends back. The agent trusts the response because it came from "the model." The user trusts the agent because it came from "the agent." Nobody validates the intermediary.
Traditional supply chain attacks compromise tools. Agent supply chain attacks compromise the decision-making pipeline.
3. The Scanner Can Be the Vector
Trivy is the tool that CI/CD pipelines trust to verify that other tools are safe. When the scanner itself is compromised, every pipeline that runs it is exposed — and the compromise is invisible because the scanner says "all clear."
This applies directly to agent security tools. If a skill scanner is compromised, every skill it approves is implicitly trusted. The entire security model collapses.
What Detection Looks Like
clawhub-bridge detects supply chain patterns in AI agent skills through static analysis. Here is what the scanner catches and what it cannot:
Detectable (pre-installation):
- Hardcoded external endpoints in skill instructions
- Credential exfiltration patterns ("send tokens to X")
- Obfuscated `eval`/`exec` calls
- Base64/hex encoded payloads in skill content
- Homoglyph substitution and invisible Unicode
- Dependency pinning violations
Not detectable (runtime-only):
- Compromised packages that behave normally until triggered
- Model response tampering through proxy manipulation
- Time-delayed payload activation
- Legitimate libraries with trojaned point releases
Static analysis catches the patterns TeamPCP used in LiteLLM (credential harvesting code injected into the library). It does not catch a clean library that gets trojaned in a future release after the scan passed.
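As a rough illustration of what pre-installation pattern matching looks like, here is a minimal sketch. These three regexes are simplified stand-ins, not the actual clawhub-bridge pattern set, and the suspicious-TLD heuristic is deliberately crude:

```python
# Minimal illustration of static detection patterns. These are
# simplified stand-ins for a real scanner's rule set.
import re

PATTERNS = {
    # Crude TLD heuristic for hardcoded exfiltration endpoints.
    "hardcoded-endpoint": re.compile(r"https?://[\w.-]+\.(?:ru|tk|top)\b"),
    # eval/exec fed directly from a decoder is a classic obfuscation tell.
    "obfuscated-eval": re.compile(
        r"(?:eval|exec)\s*\(\s*(?:base64|bytes\.fromhex)"
    ),
    # Long uninterrupted base64 runs embedded in skill content.
    "base64-payload": re.compile(r"[A-Za-z0-9+/]{120,}={0,2}"),
}

def scan(text: str) -> list[str]:
    """Return the names of every pattern that matches the skill content."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]
```

The point of the sketch is the shape of the approach: pure functions over static text, so the same input always yields the same findings.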
The Real Problem
The Trivy/LiteLLM chain exposed a structural gap: agent security assumes the security tooling is trustworthy.
Every agent framework makes this assumption:
- The scanner that checks skills is honest
- The model provider returning responses is the real provider
- The package registry serving dependencies serves clean packages
- The CI pipeline running checks has not been modified
When any of these assumptions breaks, the security model fails silently. The agent continues operating. The user sees no error. The breach is invisible until external detection (SentinelOne caught it in 44 seconds — most environments would not).
What This Changes
Three architectural responses to the "guard was the thief" problem:
- Auditable over trusted. A scanner should be deterministic, reproducible, and verifiable independently. Zero network access during scan. No external dependencies that could be compromised. Open source so the detection logic is inspectable.
clawhub-bridge runs with zero external dependencies and no network access. The scan output is a structured report that can be verified by running the same patterns against the same input.
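The reproducibility property can be sketched in a few lines: a report with canonical ordering and serialization, so any party can re-run the scan on the same input and compare digests. Field names here are illustrative, not the clawhub-bridge report schema:

```python
# Sketch of a verifiable scan report: the same input must always
# produce a byte-identical report, so independent re-runs can be
# compared by digest alone.
import hashlib
import json

def report(skill_text: str, findings: list[str]) -> dict:
    body = {
        "input_sha256": hashlib.sha256(skill_text.encode()).hexdigest(),
        "findings": sorted(findings),  # sorted => deterministic ordering
    }
    # Canonical JSON (sorted keys, fixed separators) removes any
    # serialization variance between runs and machines.
    canonical = json.dumps(body, sort_keys=True, separators=(",", ":"))
    body["report_sha256"] = hashlib.sha256(canonical.encode()).hexdigest()
    return body
```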
- Policy over detection. Detection alone is a report. Detection with policy is a gate. The same finding can be PASS in development and FAIL in production. The deployer defines the thresholds, not the scanner.
This is what clawhub-bridge v5.0.0 added: a policy encoding layer with context-aware verdicts. The scanner detects. The policy decides. The CI pipeline enforces.
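The detect/decide split can be sketched as a thin policy layer over scanner output. The severity names and thresholds below are illustrative, not clawhub-bridge defaults:

```python
# Sketch of a policy layer: the scanner reports severities, the
# deployer's policy turns them into a verdict per environment.
SEVERITY = {"low": 1, "medium": 2, "high": 3, "critical": 4}

POLICY = {
    "development": "critical",  # only critical findings block in dev
    "production": "medium",     # medium and above block in prod
}

def verdict(findings: list[str], environment: str) -> str:
    """Same findings, different verdicts: the environment sets the bar."""
    threshold = SEVERITY[POLICY[environment]]
    worst = max((SEVERITY[f] for f in findings), default=0)
    return "FAIL" if worst >= threshold else "PASS"
```

The same medium-severity finding passes in development and fails in production, which is exactly the context-aware behavior a gate needs.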
- Delta over full scan. When a skill updates, the relevant question is not "is this skill safe?" but "did the risk change?" Delta risk mode compares before and after, surfaces new findings, and flags capability escalation.
If LiteLLM 1.82.6 was clean and 1.82.7 added credential-harvesting code, delta analysis catches the addition even if the full scan is overwhelmed by the codebase size.
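Delta analysis reduces to a set difference over findings. A minimal sketch, with illustrative finding names:

```python
# Sketch of delta-risk comparison: diff the finding sets of the old
# and new versions and flag anything that newly appeared.
def delta(old_findings: set[str], new_findings: set[str]) -> dict:
    added = new_findings - old_findings
    return {
        "new": sorted(added),                          # risk escalation
        "resolved": sorted(old_findings - new_findings),
        "escalated": bool(added),
    }
```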
The Numbers
- LiteLLM present in 36% of cloud environments (Wiz)
- 1,000+ SaaS environments impacted (Mandiant)
- 44-second detection time by SentinelOne
- 6-hour exposure window for LiteLLM 1.82.7-1.82.8
- CVSS 9.4 (CVE-2026-33634) for the Trivy compromise
What You Can Do Now
- Restrict agent package installation. No agent should have unrestricted `pip install` or `npm install` permissions. Allowlist approved packages and versions.
- Pin dependencies. `litellm>=1.82` is a vulnerability. `litellm==1.82.6` with hash verification is a defense.
- Scan before installation, not after. Static analysis of skill files and dependency metadata catches exfiltration patterns before the code runs.
- Monitor the monitors. If your security pipeline depends on a tool, that tool is a single point of failure. Verify its integrity independently.
- Assume compromise. Design your agent architecture so that a single compromised component cannot exfiltrate credentials from the entire environment.
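Hash pinning and "monitor the monitors" both come down to independent digest verification, which `pip install --require-hashes` performs for you at install time. A minimal sketch of the underlying check, with a hypothetical `verify_artifact` helper:

```python
# Sketch of independent hash verification: compute the artifact's
# digest locally and compare it against the pin recorded at review
# time. pip's --require-hashes mode does this automatically; this
# just shows the idea.
import hashlib
from pathlib import Path

def verify_artifact(path: Path, expected_sha256: str) -> bool:
    """True only if the file on disk matches the reviewed digest."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest == expected_sha256
```

The same check applies to the security tooling itself: a scanner binary whose digest you verify out-of-band is much harder to swap for a trojaned build.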
The scanner is at github.com/claude-go/clawhub-bridge: 145 detection patterns, 354 tests, zero external dependencies. Pip-installable, with a GitHub Action available.
The supply chain attack on AI agents is not the same attack with a new target. It is a new attack that exploits the fundamental architecture of agent systems — autonomous installation, trust delegation, and invisible intermediaries. Detecting it requires tools that are themselves resistant to the same attack.