April 15, 2026 · 5 min read · By Ayush Chaturvedi, Independent Entrepreneur

The LiteLLM Breach: What 40,000 AI Builders Just Learned About Supply Chain Security

A 40-minute PyPI window stole cloud keys, K8s secrets, and every LLM API token in scope. Here’s the attack, the damage, and a checklist for indie hackers.

Key Takeaways

  • On March 24, 2026, attackers pushed two backdoored LiteLLM versions (1.82.7 and 1.82.8) to PyPI. They were live for about 40 minutes and downloaded 40,000+ times, from a package that normally sees 95 million downloads a month.
  • The malware harvested SSH keys, AWS/GCP/Azure IAM credentials, Kubernetes secrets, LLM API tokens, HashiCorp Vault tokens, and crypto wallets — then moved laterally through K8s and installed a persistent backdoor
  • TeamPCP reached LiteLLM by first poisoning Trivy (a security scanner) to steal its PyPI publishing token — meaning a defensive tool became the attack vector. Supply-chain attacks on registries surged 73% year-over-year.
  • For indie hackers, the fix isn’t enterprise AppSec — it’s three things: pin versions with hashes, rotate every key today, and block outbound connections to models.litellm.cloud and checkmarx.zone.

On March 24, 2026, at 10:39 UTC, a threat actor pushed a backdoored version of LiteLLM — a 95-million-download-per-month AI proxy package — to PyPI. Thirteen minutes later, a second poisoned version went live. The window was roughly 40 minutes before PyPI admins quarantined the package. More than 40,000 developers downloaded something that harvested every credential, Kubernetes secret, and LLM API key in reach. If you're an indie hacker shipping anything on top of an LLM, this breach is your wake-up call.

How the Attack Actually Worked

LiteLLM is the quiet plumbing of the AI boom. It routes OpenAI, Anthropic, Google, and dozens of other providers behind a single API, which is why it ends up in thousands of indie AI apps and internal tools. That reach made it the perfect target. The attack group — tracked by researchers as TeamPCP — didn't attack LiteLLM first. They poisoned a security tool.

Between March 19 and 23, TeamPCP compromised Trivy, a widely used open-source vulnerability scanner. LiteLLM's CI/CD pipeline ran Trivy. That scan quietly exfiltrated the project's PyPI publishing tokens. With those tokens in hand, attackers bypassed the normal release workflow and published directly to PyPI.

The 40-Minute Window — Timeline

  • Mar 19-23: TeamPCP compromises the Trivy security scanner; exfiltrates LiteLLM's PyPI publishing token via CI/CD
  • Mar 24, 10:39 UTC: Attacker publishes litellm 1.82.7 with a malicious litellm_init.pth
  • 10:52 UTC: Second poisoned release, litellm 1.82.8, goes live
  • 11:48 UTC: Researcher Callum McMahon files a GitHub issue after a fork bomb crashes his dev machine
  • 12:44 UTC: Attacker uses remaining maintainer access to close the issue and flood it with bot comments
  • 13:38 UTC: PyPI admins quarantine LiteLLM. Damage: ~40 minutes of exposure, 40,000+ downloads
  • Mar 27: Same campaign hits Telnyx (4.87.1, 4.87.2). TeamPCP is not slowing down.

By the Numbers

  • 95M monthly PyPI downloads for LiteLLM — the blast radius was huge
  • 40,000+ downloads of the compromised versions in the exposure window
  • 73% year-over-year surge in supply-chain attacks on package registries in 2025-26
  • 45% of AI-generated code contains OWASP Top 10 flaws (Veracode 2025) — amplifying the blast radius

Why This Hits Indie Hackers Harder Than Enterprises

Big companies have incident response teams, SBOMs, and network segmentation. You have a laptop with pip install in your history and a .env file holding your OpenAI key, your Stripe key, and your Supabase service role key. When a trusted package ships a credential stealer, a solo founder loses everything at once.

The uncomfortable second-order effect: many of us generated our AI-stack code with the help of Copilot, Cursor, or Claude. Veracode's 2025 report found that 45% of AI-generated code contains security flaws, and that number hasn't improved as models have gotten better. A lot of indie AI apps run auto-piloted dependency updates, wildcard version ranges, and installed-at-runtime packages — exactly the patterns supply-chain attackers exploit.

The attack window was 40 minutes. The credentials it stole are valid for months or years. If you pulled litellm during that window — or anywhere you run pip install -U without pinning — you need to assume compromise, not hope for innocence.

Anatomy of the Three-Stage Payload

What makes this breach a template — and why every indie hacker running AI tooling should understand it — is the sophistication of the payload. This wasn't a single-step credential scraper. It was a staged attack designed to survive a reboot.

Stage 1: The Credential Vacuum

A malicious .pth file auto-executes on every Python process startup. Within seconds, it harvests SSH keys, AWS/GCP/Azure IAM credentials, Kubernetes configs, Docker registry tokens, npm auth tokens, HashiCorp Vault tokens, WireGuard private keys, cryptocurrency wallets, shell history, database credentials, CI/CD secret files, and every LLM API key in the environment (OpenAI, Anthropic, Google, xAI, you name it). All of it is exfiltrated to attacker-controlled domains.
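To see why a single .pth file is such an effective trigger, here's a benign sketch of the mechanism (not the actual payload): Python's site module executes any .pth line that starts with `import`, normally for every interpreter process at startup.

```python
import os
import site
import tempfile

# Benign demo of the .pth auto-execution mechanism the payload abused.
# The site module exec()s any .pth line that begins with "import " --
# normally when site-packages is processed at interpreter startup.
# Here we force it with site.addsitedir() on a throwaway directory.
demo_dir = tempfile.mkdtemp()
with open(os.path.join(demo_dir, "demo.pth"), "w") as f:
    f.write("import os; os.environ['PTH_DEMO'] = 'executed'\n")

site.addsitedir(demo_dir)  # processes demo.pth and runs the import line
print(os.environ["PTH_DEMO"])  # prints: executed
```

The point: no function in your code ever has to call the malicious module. Merely starting Python is enough, which is why a poisoned install compromises every script, server, and notebook on the machine.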

Stage 2: Lateral Movement via Kubernetes

If the compromised host was a pod inside a K8s cluster, the payload read the service account token from the default mount path, called the Kubernetes API, and enumerated every secret across every namespace. For indie founders running hobby clusters or a single production K8s, this is effectively a full blast-radius dump. Attackers also deployed node-setup-* pods in kube-system for persistence.
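A quick defensive check you can drop into a pod, assuming the standard in-cluster mount path mentioned above (the helper name is mine):

```python
import os

# Default in-cluster mount path for a pod's service-account token --
# the file the stage-2 payload read before calling the Kubernetes API.
SA_TOKEN_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/token"

def sa_token_exposed(path: str = SA_TOKEN_PATH) -> bool:
    """Return True if a readable service-account token is mounted here.

    A True result means any code running in this pod can talk to the
    Kubernetes API with whatever RBAC the service account carries.
    """
    return os.path.isfile(path) and os.access(path, os.R_OK)

if sa_token_exposed():
    print("Service-account token is mounted and readable; audit its RBAC.")
```

If your workload doesn't need the API, set automountServiceAccountToken: false on the pod spec so there's nothing for a payload like this to read.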

Stage 3: Persistence and C2

The final payload installs a persistent backdoor and phones home to models.litellm.cloud and checkmarx.zone — domains chosen to look legitimate on a cursory egress review. If you installed a compromised version and your logs show outbound to either, you're not dealing with "might have been hit." You were hit.

The Meta-Lesson: Defensive Tools Are Now the Attack Vector

TeamPCP didn't brute-force LiteLLM. They compromised Trivy, a security scanner, and let it do the work. The same pattern hit Telnyx three days later. Snyk called this "the poisoned security scanner" attack pattern, and it's likely a template for 2026. Every dependency you install — including the ones you install to make yourself safer — is a new trust boundary.


The Indie Hacker's AI Stack Security Checklist

You don't need an AppSec budget. You need about two hours, a coffee, and the discipline to do these five things this week.

1. Check and pin your LiteLLM version right now

Run pip show litellm in every project and CI environment. If you see 1.82.7 or 1.82.8, assume breach. Pin to v1.82.6 or the latest post-patch release (1.82.9+). While you're there, drop wildcard version ranges anywhere you find them. litellm in your requirements should look like litellm==1.82.9 — never litellm or litellm>=1.82.
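If you want to script that check across many projects, a minimal sketch using the standard library (the function name and COMPROMISED set are mine, built from the versions above):

```python
from importlib import metadata

# Known-bad releases from the March 24 incident.
COMPROMISED = {"1.82.7", "1.82.8"}

def litellm_status() -> str:
    """Classify the litellm install in the current environment."""
    try:
        version = metadata.version("litellm")
    except metadata.PackageNotFoundError:
        return "not installed"
    if version in COMPROMISED:
        return f"{version}: COMPROMISED -- assume breach, rotate keys"
    return f"{version}: not a known-bad release (still pin it)"

print(litellm_status())
```

Run it with each project's virtualenv active; a single laptop often has several environments, and any one of them can be the compromised one.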

2. Rotate every key. Not some. Every.

LLM API keys (OpenAI, Anthropic, Google, xAI, Replicate, every provider you use). AWS/GCP/Azure IAM keys. Stripe, Supabase, Postgres, Redis. SSH keys. If your laptop ever had the compromised package installed — even briefly — every secret on disk or in your environment is considered exfiltrated. This is a painful two hours. It's an existential week if you skip it.
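One way to make the rotation list concrete: enumerate the variable names in your .env so nothing gets skipped. A minimal sketch; the default filename and the regex are assumptions, so adjust for your layout.

```python
import os
import re
from typing import List

def secret_names(env_path: str = ".env") -> List[str]:
    """List the UPPER_SNAKE_CASE variable names defined in a .env file.

    Use the output as a rotation checklist: every name listed should map
    to a key you regenerate at the provider, not just locally.
    """
    if not os.path.exists(env_path):
        return []
    names = []
    with open(env_path) as f:
        for line in f:
            match = re.match(r"\s*([A-Z][A-Z0-9_]*)\s*=", line)
            if match:
                names.append(match.group(1))
    return names

print(secret_names())  # e.g. ['OPENAI_API_KEY', 'STRIPE_SECRET_KEY']
```

Note it deliberately prints only the names, never the values, so the checklist itself doesn't become another secret to leak.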

3. Block the C2 domains and audit your egress

Block outbound traffic to models.litellm.cloud and checkmarx.zone at your firewall, load balancer, or DNS. Grep your server and app logs for either domain. If you find a hit, you have confirmed exfiltration — not "maybe." Treat K8s audit logs the same way: look for unusual secret reads and node-setup-* pod names.
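The log sweep can be scripted too. A sketch that flags any line mentioning either indicator domain (the function name is mine; the domains are from this incident):

```python
from typing import Iterable, List

# The two C2 domains reported for this campaign.
IOC_DOMAINS = ("models.litellm.cloud", "checkmarx.zone")

def find_ioc_hits(lines: Iterable[str]) -> List[str]:
    """Return every log line that mentions a known C2 domain."""
    return [line for line in lines if any(d in line for d in IOC_DOMAINS)]

# Usage against a real log:
#   with open("/var/log/nginx/access.log") as f:
#       hits = find_ioc_hits(f)
sample = [
    "GET https://api.openai.com/v1/chat 200",
    "POST https://models.litellm.cloud/upload 200",
]
print(find_ioc_hits(sample))
```

Any hit is confirmed exfiltration; archive the matching lines before rotating infrastructure so you keep the evidence.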

4. Adopt hash pinning (this is the real fix)

Version pinning stops accidental upgrades. Hash pinning stops an attacker who publishes a malicious package with the same version number. Use pip-tools: pip-compile --generate-hashes resolves every transitive dependency and writes SHA-256 hashes. Then pip install --require-hashes -r requirements.txt refuses to install anything that doesn't match. Ten minutes to set up, years of mitigation.
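For reference, a hash-pinned entry looks like this; the digest below is a placeholder that pip-compile fills in with the real SHA-256:

```
# requirements.in
litellm==1.82.9

# pip-compile --generate-hashes requirements.in  ->  requirements.txt
litellm==1.82.9 \
    --hash=sha256:...   # placeholder; pip-compile writes the real digest

# install then refuses anything whose hash doesn't match:
#   pip install --require-hashes -r requirements.txt
```

With this in place, an attacker who republishes a poisoned 1.82.9 fails the hash check at install time instead of landing on your machine.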

5. Harden GitHub Actions and maintainer accounts

Pin your GitHub Actions by commit SHA, not by tag (tags are mutable). Avoid pull_request_target. Put hardware-key MFA on every account that can publish a package or merge to main. If you're publishing to PyPI or npm, enable PyPI Trusted Publishing and remove long-lived API tokens where possible. TeamPCP's entry point was a CI/CD token — don't leave one lying around.
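For example, SHA-pinning in a workflow file looks like this (the SHA shown is a placeholder; resolve the real one with `git ls-remote` against the release you trust):

```yaml
# .github/workflows/ci.yml (fragment)
steps:
  # Pin to a full commit SHA, not a mutable tag like @v4.
  # <full-40-char-commit-sha> is a placeholder, not a real digest.
  - uses: actions/checkout@<full-40-char-commit-sha>  # was: actions/checkout@v4
```

A tag can be moved to point at malicious code after you've reviewed it; a full commit SHA cannot.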


Looking Ahead: The AI Supply Chain Is the New Perimeter

LiteLLM isn't going to be the last compromise. Telnyx fell three days later to the same crew. The economics favor the attackers: a single poisoned package with 95M monthly downloads converts to thousands of credential dumps, each worth thousands on resale. Expect three things to accelerate in 2026.

  • AI proxies and gateways become priority targets. Anything that touches multiple LLM API keys at once is a credential jackpot. Expect more compromises across the AI tooling layer, not fewer.
  • Hash pinning becomes table stakes. What was a nice-to-have for enterprise security teams is now a minimum viable practice for anyone shipping AI products. PyPI Trusted Publishing and sigstore-style verification will move from optional to assumed.
  • Security posture becomes a trust moat. In a post-Medvi, post-LiteLLM world, customers and platforms reward verifiable security. Indie hackers who publish their supply-chain practices (pinned versions, rotated keys, audited deps) will close deals faster than competitors who don't.

Related reading: The Medvi Meltdown — how trust and verification are becoming the new competitive moat for AI-powered solo founders.

The Bottom Line

  • The LiteLLM breach was 40 minutes of exposure — months or years of fallout. Credentials leaked during a supply-chain attack have long tails. Assume compromise, rotate everything, and treat the cleanup as urgent.
  • Defensive tools are now the attack vector. TeamPCP poisoned Trivy to reach LiteLLM. Every dependency — especially security ones — is a trust boundary. Hash pinning and egress monitoring are the minimum viable defense.
  • Security posture is becoming a competitive moat. In a market where 45% of AI-generated code has security flaws and supply-chain attacks are up 73%, the indie hackers who invest in boring dependency hygiene now will look more credible than their competitors by Q3.

