# Mini Shai-Hulud: the SAP npm worm that runs before `npm install` finishes
Three weeks ago I shipped `npm-postinstall-audit` because every supply-chain attack I'd watched in 2024 and 2025 used a lifecycle script, and `npm audit` was useless against any of them. Today, May 4, the *exact* attack the tool was built for landed on the SAP namespace. Four packages on the `@cap-js/*` org and the standalone `mbt` tool got poisoned with a worm that fires on `preinstall`, meaning the malicious code runs before `npm install` even finishes resolving the dependency tree, before the package shows up in `node_modules`, and before any human gets a chance to click on a suspicious file.
The worm exfiltrates the dev machine's cloud credentials, AI coding tool tokens, GitHub auth, and every `.env` it can find, then writes itself into the local npm cache so future installs of unrelated packages re-trigger the payload. CybersecurityNews is calling it "Mini Shai-Hulud," after the original Shai-Hulud worm from September 2024, which ran the same credential-theft play across a much larger package set.
I built the tool. The attack is here. Run the tool. Then read the rest of this for the why.
## The packages
Per CybersecurityNews and the npm registry's own removed-package log, the four confirmed poisoned releases are:
- `mbt`: the SAP Multitarget Application Build Tool, used by every Cloud Application Programming model project to package and deploy. ~80K weekly downloads.
- `@cap-js/sqlite`: the SQLite adapter for SAP's CAP framework. ~120K weekly downloads.
- `@cap-js/postgres`: the Postgres adapter for SAP's CAP framework. ~95K weekly downloads.
- A fourth package that the SAP advisory has declined to name pending coordinated disclosure. Multiple researchers have publicly speculated it is `@cap-js/asyncapi` or `@cap-js/cds-types`. Treat both as suspect until SAP confirms.
The compromise window per npm registry timestamps is approximately 14 hours: from a publish event around 03:00 UTC May 4 to the npm Trust & Safety takedown around 17:00 UTC. If you ran `npm install`, `pnpm install`, `yarn install`, or `bun install` against any project that pulls these packages (even transitively, even as an indirect dependency of a different SAP-adjacent library) within that window, your dev machine should be considered compromised.
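Before reading further, check whether you're in scope. A quick pass, assuming an npm lockfile; SAP hasn't published the exact poisoned version numbers, so treat any presence during the window as a hit:

```bash
# Does the installed tree contain any affected package, even transitively?
npm ls mbt @cap-js/sqlite @cap-js/postgres 2>/dev/null

# Lockfile check that works even with node_modules deleted:
grep -nE '@cap-js/(sqlite|postgres)|node_modules/mbt' package-lock.json
```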
## What the worm does
The malicious payload is small (~3KB minified) and lives in a `preinstall` lifecycle script. The flow:

1. Pre-flight check. Detect the platform; the worm has separate code paths for macOS, Linux, and Windows. Skip if the environment looks like CI (`process.env.CI`, `GITHUB_ACTIONS`, presence of `/proc/self/cgroup` containing `docker`). Importantly, though, the CI check is naive enough that a developer running locally inside a container is *not* protected. The worm fires on dev laptops first.
2. Credential harvest. Read every file in this list that exists:
   - `~/.aws/credentials` and `~/.aws/config`
   - `~/.config/gcloud/credentials.db` and `~/.config/gcloud/application_default_credentials.json`
   - `~/.azure/` (the entire directory)
   - `~/.config/cursor/User/globalStorage/cursor.cursorAuth/` (Cursor's credential store)
   - `~/.config/Claude/` and `~/.claude.json` (Claude Desktop and Claude Code auth)
   - `~/.cursor/auth` and `~/.cursor/mcp.json`
   - `~/.kube/config`
   - `~/.docker/config.json`
   - `~/.npmrc` (npm publish tokens)
   - `~/.gitconfig` plus `~/.config/gh/hosts.yml` (GitHub CLI auth)
   - `~/.ssh/id_rsa`, `~/.ssh/id_ed25519`, `~/.ssh/id_ecdsa`
3. Working-directory env scrape. Recursively walk the current cwd, find every `.env`, `.env.local`, `.env.development`, `.env.production`, and any file matching `*.env*`. Read up to 64KB of each.
4. GitHub token discovery. Parse the global npm config for any `_authToken` and the `gh` CLI's host config for the personal access token. Try a `gh auth status` shellout to capture the active token if neither config has it.
5. Exfiltration. Pack everything into a single base64 blob and POST it to one of three Cloudflare Workers endpoints (the worm rotates between them). The endpoints had `*.workers.dev` subdomains chosen to look like legitimate npm telemetry, with names like `npm-stats-collector` and `package-analytics-v2`. All three were in Cloudflare's takedown queue by 18:00 UTC May 4, but a sample of the harvested data had already been observed for sale on a Russian-language credential-broker forum 90 minutes earlier. Whatever was caught is already someone's inventory.
6. Self-propagation. Write the same payload as a `preinstall` hook into every package in the local npm cache (`~/.npm/_cacache/`). The next `npm install` of *any* package that pulls a tarball from cache re-fires the worm. This is the "mini Shai-Hulud" hallmark: the original Shai-Hulud worm propagated by re-publishing to npm with stolen creds, but Mini gives up on registry propagation and just persists locally on the dev machine.
The persistence step is what makes this nasty. You can `npm uninstall mbt`, you can purge `node_modules`, you can `rm -rf` the project, and the next time you install anything from a different project, the cached payload runs again. Cleanup requires nuking the npm cache directory entirely.
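Concretely, the cleanup is three commands, assuming the default cache location. The `npm cache verify` at the end rebuilds a clean index so future installs don't wedge:

```bash
# Remove the poisoned cache entirely, then let npm rebuild a clean one.
npm cache clean --force
rm -rf ~/.npm/_cacache
npm cache verify
```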
## Why `preinstall` is the cancer of the npm threat model
Every npm-style ecosystem (pnpm, yarn, bun) inherits the same mistake: when you install a package, the package runs code on your machine. `preinstall`, `install`, `postinstall`, `prepublish`, `prepare`: five distinct hook points where a malicious package can execute before you've even seen its file tree. The decade-old justification is "but native modules need to compile" (think `node-gyp`, `bcrypt`, `sharp`). That justification covers maybe 200 legitimate packages out of the 3 million on npm.

The other 2,999,800 don't need lifecycle scripts. Most of them have one because the maintainer copy-pasted a package.json from a tutorial that included an `npm run build` postinstall. The attack surface is enormous and the legitimate use is microscopic.
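You can measure that ratio on your own tree. A crude survey, assuming `node_modules` is present (my throwaway one-liner, not official tooling):

```bash
# List every installed dependency that declares an install-time hook.
grep -lE '"(pre|post)?install" *:' \
  node_modules/*/package.json node_modules/@*/*/package.json 2>/dev/null
```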
The fix is well-known and easy:
```bash
npm config set ignore-scripts true
```
Add that to your global `~/.npmrc`. After that, `npm install` becomes a pure file-extract operation; no hooks fire. When you genuinely need a script (rare), re-enable it explicitly for that one install:

```bash
npm install <package> --ignore-scripts=false --foreground-scripts
```
Or, for one-off allowlists, use a per-project `.npmrc` to disable scripts project-wide and re-run the hook just for the one dependency you trust:

```bash
echo "ignore-scripts=true" >> .npmrc
npm install
# then, if one dependency genuinely needs its build hook:
npm rebuild <package> --ignore-scripts=false --foreground-scripts
```
For pnpm, recent major versions block dependency lifecycle scripts by default and make you allowlist the few that genuinely need them. For yarn (Berry), it's `enableScripts: false` in `.yarnrc.yml`. For bun, it's the `--ignore-scripts` flag; Bun's default stance is "we run them, but only for an allowlisted `trustedDependencies` set," which I trust about as far as I can throw their CEO.
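For quick reference, the switches side by side, as I understand the current releases. Defaults move between versions, so verify against your manager's docs:

```bash
# npm: global kill switch
npm config set ignore-scripts true

# pnpm: honors the same key in .npmrc (recent majors already block dep scripts)
echo 'ignore-scripts=true' >> .npmrc

# yarn berry: per-project setting
echo 'enableScripts: false' >> .yarnrc.yml

# bun: per-invocation flag
bun install --ignore-scripts
```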
## Detection: did you run a poisoned install in the last 14 hours?
If you `npm install`-ed any of the affected packages directly or transitively (e.g. you ran `npm install` on a SAP CAP project, which pulled `@cap-js/sqlite` automatically), assume compromise and run all four checks below.
### 1. Look for the worm's exfil endpoints in your network history
```bash
# macOS:
log show --predicate 'process == "node"' --info --last 1d | grep -E 'workers\.dev'

# Linux (if you have auditd):
ausearch -k network_dns | grep workers.dev

# Universal: Pi-hole or your router's DNS log
grep -E '(npm-stats-collector|package-analytics-v2)\.workers\.dev' /var/log/dnsmasq.log
```
Any hit is dispositive. Your dev machine made a request to a known C2.
### 2. Check the npm cache for the payload
```bash
find ~/.npm/_cacache -name 'package.json' -newer ~/.npmrc 2>/dev/null \
  | xargs grep -l 'preinstall' 2>/dev/null \
  | head
```
Any cache entry with a `preinstall` hook you didn't expect is the persistence. Inspect each one. The malicious version will have a script that base64-decodes a blob and evals it.
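If you've never seen one, this is the shape you're hunting. A defanged illustration of the idiom, not the actual Mini Shai-Hulud payload (the real blob is ~3KB of minified JS):

```bash
# Roughly what a poisoned cache entry's package.json looks like:
cat <<'EOF'
{
  "name": "innocent-package",
  "version": "1.2.3",
  "scripts": {
    "preinstall": "node -e \"eval(Buffer.from('PAYLOAD_B64','base64').toString())\""
  }
}
EOF
```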
### 3. Run my `npm-postinstall-audit` tool against your project
I shipped this in April specifically to detect this attack class. The full walkthrough is in the original blog post; the short version:
```bash
npx npm-postinstall-audit ./
```
It parses `package-lock.json` (or `pnpm-lock.yaml` or `yarn.lock`), pulls every package's `package.json` from the npm registry, and flags any with lifecycle scripts that match ten attack patterns (network calls in `preinstall`, base64-eval, `child_process` spawns, env scraping, etc.). Mini Shai-Hulud's `preinstall` matches three of the ten patterns. The tool fires hard.
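If you can't run the tool (offline machine, policy lockdown), a grep is a crude approximation of its pattern checks. It catches lazy payloads like this one; it will not catch anything obfuscated past one layer:

```bash
# Flag dependencies whose preinstall touches decode/eval/network primitives.
grep -RlE '"preinstall".*(base64|eval|child_process|curl|wget|https?:)' \
  --include=package.json node_modules 2>/dev/null
```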
### 4. Audit credentials known to be in the worm's harvest list
Even if detection above comes back clean, the cost of a false negative is "everything you've ever logged into is leaked." The cheap response is rotation:
- AWS access keys (`aws iam create-access-key`, then delete the old one; sketch after this list)
- Google Cloud (`gcloud auth login` again, revoke the old refresh token)
- Azure (`az login`, then `az ad sp credential reset`)
- GitHub PAT (revoke at github.com/settings/tokens, regenerate)
- Cursor / Claude Code OAuth (sign out and sign back in)
- npm publish tokens (`npm token revoke`, then `npm token create`)
- Every `.env` you had open in the last 24 hours (assume the contents are out)
- SSH keys (rotate `~/.ssh/id_*` and update `authorized_keys` on every box)
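The AWS rotation is the one people fumble, so here it is spelled out; a sketch assuming a single existing access key on the default profile:

```bash
# Capture the old key id, mint the replacement, then revoke the old one.
OLD_KEY=$(aws iam list-access-keys \
  --query 'AccessKeyMetadata[0].AccessKeyId' --output text)
aws iam create-access-key   # record the new secret before proceeding
aws configure               # plug the new key into your local profile
aws iam delete-access-key --access-key-id "$OLD_KEY"
```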
This is a 30-minute job if you have a password manager. Two hours if you don't. Zero hours if you're tired and tell yourself "probably nothing happened," in which case the cost of doing nothing shows up as a ransomware bill in eight months.
## What SAP and npm should do (but won't)
The fix at the registry level is mandatory 2FA on publish for every package over 10,000 weekly downloads, plus signed publishing with a hardware token for org-namespaced packages like `@cap-js/*`. npm has had `npm publish --otp` for years; it's optional. SAP, with a CAP framework that runs in production at thousands of enterprise customers, should be requiring it on every publish. They aren't.
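Nothing stops a maintainer from opting in today, though. In recent npm versions it's one command:

```bash
# Require a second factor for login AND every publish or ownership change.
npm profile enable-2fa auth-and-writes
```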
The fix at the consumer level is `ignore-scripts` everywhere. The npm team has flirted with making it the default for years and never pulled the trigger because of the 200 legitimate native-build packages and the migration pain. pnpm shipped it as default in late 2024. The contrast is the entire reason supply-chain security people will tell you to use pnpm if you have a choice.
The fix at the SAP customer level is read-only mirrors. Run an internal Verdaccio or JFrog Artifactory in front of npm, hold every package for 72 hours before promoting to your org's allowed list. By the time a poisoned package would propagate to your devs, it would have already been pulled from the public registry. Many enterprises do this for production but allow free-form npm install on dev machines. That's the gap Mini Shai-Hulud walked through.
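The dev-machine half of that fix is one config line once the mirror exists; point every laptop at it so free-form installs go through the quarantine too (the internal URL here is hypothetical):

```bash
# Route all npm traffic through the delay-promoted internal mirror.
npm config set registry https://npm.internal.example.com/
npm config get registry   # verify
```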
## Tre's call
This isn't the last `preinstall` worm. It's not even the second-to-last. Mini Shai-Hulud is the second major lifecycle-script worm of the past 18 months, and the third npm has seen: event-stream (2018), the original Shai-Hulud (September 2024), and now Mini. They keep working because the npm threat model still trusts package authors to not be malicious, and that trust has been earned exactly zero times.
Set `ignore-scripts=true` globally tonight. Run `npm-postinstall-audit` against every project before you `npm install` from now on. Migrate to pnpm if your team can. And rotate every credential that touched any machine that ran `npm install` against a SAP CAP project in the last day, because the cost of being wrong is a year of incident-response work and the cost of rotating preemptively is a coffee.
If you want a second set of eyes on a build pipeline you're worried got swept up — or if you maintain an org-namespaced npm package and want a publish-pipeline audit before the next attacker pivots to your namespace — Valtik runs both. Email phil@valtikstudios.com with "supply chain audit" in the subject. We'll have a scoped response in 24 hours.
Patch the registry mindset, not just the package.