When you’ve been doing something long enough, you tend to forget about the things that scared you when you first started. It’s not really apathy or carelessness; it’s settling into a comfort zone as you solve increasingly large problems. You build habits to protect yourself: you check your side mirrors before merging, ensure your knives are sharp before your shift, and double-check your .gitignore file. Eventually these things become muscle memory and you don’t have to think about looking before you merge.
Supply-chain attacks are nothing new; it could even be argued that the Internet itself is a response to one. During the Vietnam War, the US Army realized that point-to-point radio communications could fail, stranding soldiers and putting assets at risk. TCP/IP (the core protocol driving the Internet) was originally designed in the 1970s by DARPA scientists Vinton Cerf and Robert Kahn, who shared a paper in 1974, A Protocol for Packet Network Intercommunication. The protocol was intended to enable redundant communication across multiple points, avoiding failure across satellite, radio, and ARPANET should one or more nodes be removed.
A Recent History of Supply-Chain Attacks
In March 2024, the security community witnessed the slow-motion horror show of the xz-utils backdoor. A threat actor spent years cultivating trust within an open source project, building a reputation and taking on maintainer responsibilities, then slipped a backdoor into a compression library embedded in half the Linux distributions on the planet. A Microsoft engineer noticed SSH logins were half a second slower than they should have been and uncovered the whole thing.
Fast forward to September 2025, when the npm ecosystem got hit by a worm dubbed Shai-Hulud (after the giant sandworms of Dune). If you’re unaware, npm is the Node Package Manager: a commonly used tool for managing Node packages. These are JavaScript libraries that provide commonly needed website functionality (data manipulation, form validation, and the like), up through entire frameworks like React and Vue. The point being, npm is everywhere, and the libraries build upon one another, so you’ll often have dependencies three or four levels deep (if not more). The Shai-Hulud worm started with a phishing attack on a single package maintainer. Once it had that foothold, it identified every other package that developer maintained, injected malicious code, and published compromised versions. As you might have guessed, this spread exponentially without any further intervention from the attacker. Over 500 packages were compromised, CISA issued an advisory, and two months later Shai-Hulud 2.0 arrived with a worse payload: it compromised tens of thousands of GitHub repositories and carried a dead man’s switch that would destroy your home directory if it got cut off.
About six weeks ago, one of my clients (who primarily uses self-hosted WordPress for their websites) was the victim of a ClickFix attack. Their websites were serving a fake CAPTCHA that asked users to prove they were human by pressing a series of keys; those keystrokes opened the Windows “Run” prompt, pasted the contents of the clipboard, and executed malware on the user’s system. After nearly four days (and the entirety of my weekend) cleaning up servers, restoring from backups, resetting everyone’s passwords (annoying customers in the process), and analyzing the logs, it became clear that the systems were compromised via an upstream attack through ManageWP. An attacker used this trusted system to push a malicious plugin to multiple websites.
Two days ago (March 31, 2026) axios was compromised. Axios has approximately 100 million weekly downloads and over 174,000 dependent packages. Attackers gained control of the lead maintainer’s account, published two backdoored versions that installed a cross-platform Remote Access Trojan (RAT) on macOS, Windows, and Linux, and did it inside a 39-minute window. The compromised versions deleted themselves after deployment, erasing evidence of the infection from node_modules. By the time most developers had their morning coffee, the damage was done.
Before you begin to think “this is just a JavaScript and PHP problem, why should I care?” consider the LiteLLM supply chain attack on March 24, 2026. LiteLLM is a Python library that does one specific thing: it sits between your application and every LLM provider you use (OpenAI, Anthropic, Bedrock, whatever) and routes requests through a unified interface. LiteLLM is infrastructure for AI development. It has, by design, access to every API key in your environment. A threat group called TeamPCP published two backdoored versions to PyPI. The malicious code executed on import, harvested every credential it could find, moved laterally through Kubernetes clusters, installed persistent backdoors, and exfiltrated everything via encrypted archive to attacker-controlled infrastructure. The compromised versions were publicly available for about five hours.
Five hours. How many automated pipelines ran pip install litellm in five hours?
The entry point wasn’t LiteLLM itself. The attack started five days earlier with Trivy, a security scanner whose GitHub Action was rewritten to point to a malicious release. The security tooling in LiteLLM’s own CI/CD pipeline handed over the PyPI publishing credentials. One unpinned dependency in a vulnerability scanner became the key to backdooring a library with tens of millions of monthly downloads that sits directly in front of your AI infrastructure. Researchers also documented this as one of the first cases of an AI agent used operationally in a supply-chain attack. The attackers aren’t just targeting AI tools. They’re using AI to do the work.
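The standard mitigation for a hijacked Action release is to pin CI dependencies to immutable references instead of mutable tags. A minimal sketch of what that looks like in a GitHub Actions workflow (the commit SHA below is illustrative, not a real release of the action):

```yaml
# Hypothetical workflow step: reference third-party Actions by full commit SHA
# rather than a tag or branch, so a hijacked release can't silently swap the code.
steps:
  - name: Run vulnerability scan
    # Mutable (risky):  uses: aquasecurity/trivy-action@master
    # Pinned (safer) — SHA shown here is a placeholder, not a real commit:
    uses: aquasecurity/trivy-action@1f0aa582c8c8f5f7639610d6d38baddfea4fdcee
```

A tag like `@v1` or `@master` can be re-pointed by whoever controls the repository; a full commit SHA cannot, which is exactly the gap the Trivy attackers walked through.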
This isn’t a threat model or a theoretical situation. This is the world we live in right now. If you don’t care about this right now, bookmark this page so you can hire me when it’s time to care.
The Vibe Coding Problem
From a user’s perspective, the timing could not be worse…but from a malicious actor’s, it could not be better. AI-assisted development tools have made it possible for someone with no formal (or informal) training to build a functional web application in an afternoon. The barrier to entry has all but vanished; it’s changing the industry whether we like it or not.
Buried in that productivity gain is another cost: clichéd as it is, we don’t know what we don’t know. Developers who have been at this for a while have internalized protective practices: pinning dependencies, reviewing changelogs, using external tools (like GitHub’s Dependabot) to audit their repositories, and understanding postinstall hooks and why they matter. Best practices, like OSHA regulations, are written in blood (generally less literally when it comes to software development, but not always). Those of us with experience have all seen (or been) the one who caused a system failure, pushed to production on a Friday, or, in my case, sent out 50,000 emails to community colleges because I foolishly assumed a staging server wouldn’t be connected to a real mail system…without checking first. An AI can’t teach you that, because no one has trained it on those lessons.
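To make one of those internalized practices concrete: postinstall hooks are a favorite delivery mechanism for npm worms like Shai-Hulud, because the script runs automatically the moment you install. A defensive habit is to turn lifecycle scripts off by default in your project’s `.npmrc`:

```ini
# .npmrc — disable npm lifecycle scripts (preinstall, postinstall, etc.) by default.
# Malicious packages frequently use postinstall as their execution trigger;
# with this set, installing a compromised version no longer runs its payload.
ignore-scripts=true
```

The trade-off is friction: packages that legitimately need a build step stop working until you review them and run their scripts deliberately (for example via `npm rebuild <package>`). That friction is the point.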
Those who exclusively vibe code don’t have any of that experience or context. They have something far more dangerous: a tool that sounds confident, generates plausible-looking code at speed, and has no real-time awareness of which packages were compromised in the last 48 hours. When an AI coding assistant suggests npm install axios, it doesn’t (and can’t) know that yesterday’s release was delivering malware to every machine that ran the installation.
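There’s no complete fix for that 48-hour window, but the underlying check is simple enough to sketch. A hypothetical Python example (package names and version numbers are invented for illustration) that compares your pinned dependencies against a feed of known-compromised releases:

```python
# Hypothetical sketch: flag pinned dependencies that match known-compromised
# releases. In practice the known-bad list would come from an advisory feed
# (e.g. a CISA or registry advisory); the entries below are made up.
KNOWN_COMPROMISED = {
    "example-http-lib": {"1.7.9", "1.8.0"},
    "example-logger": {"4.2.1"},
}

def flag_compromised(pinned):
    """Given {package_name: pinned_version}, return the known-bad subset."""
    return {
        name: version
        for name, version in pinned.items()
        if version in KNOWN_COMPROMISED.get(name, set())
    }

pins = {"example-http-lib": "1.8.0", "example-logger": "4.1.0", "left-pad": "1.3.0"}
print(flag_compromised(pins))  # → {'example-http-lib': '1.8.0'}
```

The logic is trivial; the hard part is keeping the known-bad list current, which is precisely what an AI assistant with a stale training cutoff cannot do for you.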
At its most naive, vibe coding is cargo-cult development: performing rituals without understanding what they do. “npm install” is just a spell you chant to make things work. When you don’t understand what the spell is doing, you have no idea that you’ve even been cursed.
If you haven’t seen it, this scene from Army of Darkness captures the risk beautifully:
The Force Multiplier Works Both Ways
AI is a force multiplier. That’s the point. A sharp developer with good AI tooling can produce in a day what used to take a week. That’s useful leverage and can get you 80-90% of the way to the finish line. But a force multiplier doesn’t pick sides. It amplifies your good judgment and your bad judgment equally. It accelerates your best practices and your sloppiest ones. When your workflow has a blind spot, AI doesn’t fill it; it paves over the pothole with confident-sounding output and moves on. If you don’t know that you shouldn’t move on from that, or that you need to solve it first, you’ll fall behind (but you won’t notice it until it’s too late).
The vibe coding workflow is optimized for speed and minimal friction. That’s the value proposition, but security requires the opposite. You need to pause before adding a dependency and ask whether you need it, where it comes from, who maintains it, and when it was last looked at by someone who cares. Those are friction-generating habits, and they’re exactly what a tool tuned for fast, frictionless output won’t encourage. This is where you come in and take charge.
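On the Python side, one concrete friction-generating habit is hash-pinned requirements: every dependency locked to an exact version and an exact artifact digest, so pip refuses anything that doesn’t match, even if the version number is identical. A sketch of what the lockfile looks like (the version and digest below are placeholders, not real values):

```text
# requirements.txt — every dependency pinned to an exact version AND hash.
# Install with:  pip install --require-hashes -r requirements.txt
# A backdoored re-upload under the same version number fails the digest check
# and aborts the install instead of running. (Placeholder version and digest.)
litellm==1.0.0 \
    --hash=sha256:0000000000000000000000000000000000000000000000000000000000000000
```

Tools like pip-tools can generate these lockfiles for you (`pip-compile --generate-hashes`), which keeps the friction at lock time rather than at every install.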
That’s not a flaw in the AI. It’s what happens when you treat AI as a substitute for engineering judgment rather than a tool to augment it.
Pay Now or Pay Later…It’s Your Choice
If you’re building software today and you don’t have a working mental model of the risks you face, you’re carrying a liability (probably many) you haven’t factored into your cost. Like most unpriced liabilities, it will show up at the worst possible moment.
The incident response bill for a compromised production environment (credential rotation, forensic analysis, customer notification, regulatory exposure if you’re in a governed industry) makes whatever it would have cost to build secure dependency management into your workflow look like a rounding error. The developers who got hit by the axios compromise two days ago aren’t shipping features this week. They’re figuring out what was exfiltrated, rotating credentials, and explaining to their stakeholders why a package with 100 million weekly downloads turned out to be a RAT delivery vehicle.
The practices that reduce your exposure aren’t exotic or expensive. The tooling exists and keeps getting better. But none of it works if you don’t understand why it’s necessary or if you’re just casting spells and hoping they don’t backfire.
AI didn’t cause any of this; the attackers did. The same thing happens with human developers; AI just helps you build—and break—faster.
The sandworms are real. Walk like the Fremen: they know Arrakis and survive it.