# tanstack npm supply-chain compromise: 84 malicious package versions, a self-spreading worm, and a file-watcher wiper that triggers if you try to revoke your tokens
on may 11, 2026 at 19:20 utc the tanstack ecosystem on npm was compromised. 42 packages, 84 malicious versions, published in two waves over a six-minute window. published with valid sigstore provenance. signed with stolen npm publish tokens, minted from a github actions oidc token that the attacker extracted from a runner process by chaining three github actions vulnerabilities.
the payload is the worst part. it steals every credential it can find. exfiltrates over session messenger so dns sinkholes don't catch it. self-propagates by minting npm tokens for every other package the victim publishes, republishing them with forged provenance. and it installs a file watcher on the host that detects api-key revocation attempts and triggers a destructive payload — try to clean up from the compromised box and the box gets nuked.
this writeup: what packages, what versions, what the malware actually does, the iocs, and the correct order of operations for remediation. the order matters because the wrong order can lose you the machine.
ghsa: ghsa-g7cv-rxg3-hmpx
official tanstack postmortem: tanner_linsley + team published may 12 utc
campaign: "mini shai-hulud" — same crew that hit mistral, uipath, squawk, intercom, lightning ai, sap cap (169+ packages this wave)
the timeline
- may 11 19:20:39 utc — first malicious version published (`@tanstack/router-core` 1.169.5)
- 19:26:14 utc — last malicious version of wave two published (six minutes of carnage)
- 19:26 utc — socket.dev's automated scanner flags the publishes
- 19:40 utc — stepsecurity researcher publishes the public disclosure thread
- 20:30 utc — tanstack team begins deprecating malicious versions and publishing patched releases
- 23:00 utc — github security advisory ghsa-g7cv-rxg3-hmpx published
- may 12 (overnight) — patched versions live on npm, malicious versions deprecated but tarballs are still in caches everywhere
if you ran npm install, pnpm install, or yarn install against an ^ or ~ semver range for any tanstack router/start/devtools package during the six-minute window or after, you may have pulled poison. if you have docker images that ran their npm install during that window, the poison is baked into the image layer.
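if you build docker images, interrogate the built image rather than the dockerfile. a minimal sketch, assuming a shell-equipped base image and an /app workdir (both are assumptions; swap in your image tag, and for distroless images use `docker create` + `docker cp` instead):

# print the version field of every baked-in @tanstack package
docker run --rm --entrypoint sh your-app:latest -c \
  'grep -H "\"version\"" /app/node_modules/@tanstack/*/package.json' 2>/dev/null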
how the attack worked
the attacker didn't phish tanner linsley. they didn't steal his laptop. they exploited the project's own github actions workflows.
three primitives chained:
pull_request_target"pwn request" — the tanstack/router repo had a workflow triggered onpull_request_targetthat checked out the prfork's code and ran scripts from it.pull_request_targetruns with the base repo's secrets, including write tokens. the attacker opened a pr from a fork (github.com/zblgg/configuration) whose contents could execute in the privileged base-repo context.- github actions cache poisoning — they planted a poisoned build artifact in the github actions cache from the fork-context job that subsequent base-context jobs would consume. cache keys cross the fork/base trust boundary in ways most maintainers don't think about.
- runner-process oidc token extraction — once they had code running in the privileged base context, they read the github oidc token directly out of the runner process memory. that oidc token is what github uses to mint short-lived credentials for publishing to npm. they exchanged it for a valid
npm publishtoken. they published.
the published packages had valid sigstore provenance attestations because the attacker held the runner's signing material at the moment of publish. provenance verified. trust assumptions blown.
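you can reproduce that (unhelpfully green) verification yourself. `npm audit signatures` (npm 9.5+) checks registry signatures and provenance attestations for the installed tree, and it passes for the malicious versions too, because the attestations are genuine:

# verifies registry signatures and provenance attestations for node_modules;
# green output proves the build ran in github actions ci, not that the
# build itself was trustworthy
npm audit signatures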
spoofed commit identity on the malicious code: `claude` — impersonating the anthropic claude code bot to make code review pattern-match as "ai-generated, probably fine." it wasn't.
compromised accounts in the workflow chain: voicproducoes (id 269549300), zblgg (id 127806521).
what the malware does on your machine
dropped as `router_init.js` (2.3 mb, javascript-obfuscator: string-array rotation, hex identifier renaming, control-flow flattening). sometimes called `router_runtime.js` or `tanstack_runner.js` depending on stage.
deobfuscated capability matrix:
credential harvesting:
- aws — env keys, irsa, imdsv2 (`169.254.169.254` with `X-aws-ec2-metadata-token`), ecs task metadata (`$ECS_CONTAINER_METADATA_URI_V4`), secrets manager and ssm parameter store enumeration
- gcp — metadata server, attached service-account tokens
- kubernetes — `/var/run/secrets/kubernetes.io/serviceaccount/token`, kubeconfig harvesting
- hashicorp vault — env-discovery + intra-cluster probe at `vault.svc.cluster.local:8200`
- npm — `~/.npmrc`, `$NPM_TOKEN`, project-local `.npmrc`
- github — personal access tokens, `$GITHUB_TOKEN`, `gh` cli `auth.json`, `~/.git-credentials`, github oidc trust relationships
- ssh — private keys from `~/.ssh/`
- general — env vars matching `AWS_*`, `GH_*`, `TOKEN`, `KEY`, `SECRET`, `PASS`, `API_*`
exfil channel:
- session messenger over oxen (not http, not dns, not slack webhook — a privacy-focused chat protocol that most enterprise egress filtering never thought to block)
- endpoints: `filev2.getsession.org`, `seed1.getsession.org`, `seed2.getsession.org`, `seed3.getsession.org`
persistence:
- writes `router_runtime.js` + `setup.mjs` into the `.claude/` directory
- mutates `.claude/settings.json` to add a `hooks` entry that re-executes the payload on tool use
- drops `.vscode/tasks.json` with auto-run on workspace open
- daemonizes via `spawn()` with `detached: true` and a `__DAEMONIZED=1` env guard so reinvocation no-ops
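a read-only triage sweep over those persistence locations, as a sketch. run it against the mounted forensic image rather than the live host (the `SUSPECT_HOME` variable and mount path are placeholders):

# point SUSPECT_HOME at the suspect user's home on the mounted image
SUSPECT_HOME=/mnt/image/home/victim
grep -n "router_runtime" "$SUSPECT_HOME/.claude/settings.json" 2>/dev/null
ls -la "$SUSPECT_HOME/.claude/router_runtime.js" "$SUSPECT_HOME/.claude/setup.mjs" 2>/dev/null
grep -rn "router_runtime\|tanstack_runner" "$SUSPECT_HOME" --include=tasks.json 2>/dev/null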
self-propagation (the wormy part):
- enumerates the victim's other published npm packages via the registry api authenticated with the just-harvested npm token
- republishes each one with the same payload bundled in `postinstall`
- mints new sigstore attestations because it still has the runner's signing context (when the victim is on ci)
- waits 30–90 seconds between publishes to evade rate-limit triggers
the wiper trip-wire (this is the part most writeups miss):
- installs a file watcher on the local machine — watches the user's home for changes to credential files (`~/.aws/credentials`, `~/.ssh/`, `~/.npmrc`, `~/.config/gh/hosts.yml`, `~/.kube/config`, browser cookie stores)
- watches process tables for invocations of common revocation cli tools — `aws iam delete-access-key`, `aws iam update-access-key`, `gh auth refresh`, `gh auth logout`, `npm token revoke`, `kubectl create token --rotate`, `vault token revoke`, `gcloud auth revoke`
- watches for direct browser navigation to revocation pages — `iam.aws.amazon.com/.../security_credentials`, `github.com/settings/tokens`, `npmjs.com/settings/*/tokens`
- on trigger: deletes ssh keys, kubeconfig, aws credentials, and browser cookie stores, then runs `rm -rf ~/projects ~/code ~/repos ~/git ~/dev ~/src` (configurable target list), and finally writes random bytes to `/dev/sdX` (if it can find a writable block device) before forcing a reboot
the wiper is the reason the standard "rotate your tokens immediately" advice will brick your machine if you do it from the compromised host. it has to be done from somewhere else.
the affected packages and versions
42 packages, two malicious versions each, 84 total bad releases. patched version listed in the third column — pin to that or higher.
| package | bad versions | patched safe version |
|---|---|---|
| @tanstack/arktype-adapter | 1.166.12, 1.166.15 | 1.166.16 |
| @tanstack/eslint-plugin-router | 1.161.9, 1.161.12 | 1.161.13 |
| @tanstack/eslint-plugin-start | 0.0.4, 0.0.7 | 0.0.8 |
| @tanstack/history | 1.161.9, 1.161.12 | 1.161.13 |
| @tanstack/nitro-v2-vite-plugin | 1.154.12, 1.154.15 | 1.154.16 |
| @tanstack/react-router | 1.169.5, 1.169.8 | 1.169.9 |
| @tanstack/react-router-devtools | 1.166.16, 1.166.19 | 1.166.20 |
| @tanstack/react-router-ssr-query | 1.166.15, 1.166.18 | 1.166.19 |
| @tanstack/react-start | 1.167.68, 1.167.71 | 1.167.72 |
| @tanstack/react-start-client | 1.166.51, 1.166.54 | 1.166.55 |
| @tanstack/react-start-rsc | 0.0.47, 0.0.50 | 0.0.51 |
| @tanstack/react-start-server | 1.166.55, 1.166.58 | 1.166.59 |
| @tanstack/router-cli | 1.166.46, 1.166.49 | 1.166.50 |
| @tanstack/router-core | 1.169.5, 1.169.8 | 1.169.9 |
| @tanstack/router-devtools | 1.166.16, 1.166.19 | 1.166.20 |
| @tanstack/router-devtools-core | 1.167.6, 1.167.9 | 1.167.10 |
| @tanstack/router-generator | 1.166.45, 1.166.48 | 1.166.49 |
| @tanstack/router-plugin | 1.167.38, 1.167.41 | 1.167.42 |
| @tanstack/router-ssr-query-core | 1.168.3, 1.168.6 | 1.168.7 |
| @tanstack/router-utils | 1.161.11, 1.161.14 | 1.161.15 |
| @tanstack/router-vite-plugin | 1.166.53, 1.166.56 | 1.166.57 |
| @tanstack/solid-router | 1.169.5, 1.169.8 | 1.169.9 |
| @tanstack/solid-router-devtools | 1.166.16, 1.166.19 | 1.166.20 |
| @tanstack/solid-router-ssr-query | 1.166.15, 1.166.18 | 1.166.19 |
| @tanstack/solid-start | 1.167.65, 1.167.68 | 1.167.69 |
| @tanstack/solid-start-client | 1.166.50, 1.166.53 | 1.166.54 |
| @tanstack/solid-start-server | 1.166.54, 1.166.57 | 1.166.58 |
| @tanstack/start-client-core | 1.168.5, 1.168.8 | 1.168.9 |
| @tanstack/start-fn-stubs | 1.161.9, 1.161.12 | 1.161.13 |
| @tanstack/start-plugin-core | 1.169.23, 1.169.26 | 1.169.27 |
| @tanstack/start-server-core | 1.167.33, 1.167.36 | 1.167.37 |
| @tanstack/start-static-server-functions | 1.166.44, 1.166.47 | 1.166.48 |
| @tanstack/start-storage-context | 1.166.38, 1.166.41 | 1.166.42 |
| @tanstack/valibot-adapter | 1.166.12, 1.166.15 | 1.166.16 |
| @tanstack/virtual-file-routes | 1.161.10, 1.161.13 | 1.161.14 |
| @tanstack/vue-router | 1.169.5, 1.169.8 | 1.169.9 |
| @tanstack/vue-router-devtools | 1.166.16, 1.166.19 | 1.166.20 |
| @tanstack/vue-router-ssr-query | 1.166.15, 1.166.18 | 1.166.19 |
| @tanstack/vue-start | 1.167.61, 1.167.64 | 1.167.65 |
| @tanstack/vue-start-client | 1.166.46, 1.166.49 | 1.166.50 |
| @tanstack/vue-start-server | 1.166.50, 1.166.53 | 1.166.54 |
| @tanstack/zod-adapter | 1.166.12, 1.166.15 | 1.166.16 |
not affected (this wave): @tanstack/query, @tanstack/table, @tanstack/form, @tanstack/virtual, @tanstack/store and their react/vue/solid bindings. only the router / start / devtools families were hit.
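to check what you actually resolved against this table, trust the lockfile over package.json. a sketch for npm's lockfile v2/v3 format (pnpm and yarn users can ask per package with `pnpm why` / `yarn why`):

# dump every @tanstack/* entry and its resolved version from package-lock.json
jq -r '.packages | to_entries[]
       | select(.key | contains("node_modules/@tanstack/"))
       | "\(.key | sub(".*node_modules/"; "")) \(.value.version)"' \
   package-lock.json | sort -u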
iocs
files on disk:
- `router_init.js`, `router_runtime.js`, `tanstack_runner.js` anywhere under `node_modules/@tanstack/`
- `~/.claude/router_runtime.js` (persistence)
- `~/.claude/setup.mjs` (persistence)
- new `hooks` block in `~/.claude/settings.json` with paths referencing `router_runtime`
- new `.vscode/tasks.json` with auto-run task pointing at the runtime
hashes:
- `router_init.js` sha256: `ab4fcadaec49c03278063dd269ea5eef82d24f2124a8e15d7b90f2fa8601266c`
- `tanstack_runner.js` sha256: `2ec78d556d696e208927cc503d48e4b5eb56b31abc2870c2ed2e98d6be27fc96`
package.json smell:
"optionalDependencies": { "@tanstack/setup": "github:tanstack/router#79ac49eedf774dd4b0cfa308722bc463cfe5885c" }- presence of
postinstallscript in any@tanstack/*package (none of the legitimate packages use postinstall)
network:
- any outbound to `*.getsession.org`
- any outbound to oxen seed nodes
git history:
- commits authored by `claude` — anthropic's real claude code bot uses a different commit identity; that string is the imposter signature
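sweeping local clones for the spoofed identity is scriptable. a sketch (`--author` matches a substring/regex of the author header, so eyeball the output rather than trusting an empty result):

# list commits since the incident window whose author header contains "claude"
git log --all --since="2026-05-11" --author="claude" \
  --pretty="%h %ad %an <%ae> %s" --date=iso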
the correct remediation order (the wiper changes the playbook)
TL;DR for the panicking developer: if you installed during the 6-min window or after, do not touch your credentials from the affected box. air-gap the host first. revoke from a different machine (your phone, a colleague's laptop). image the dirty disk for forensics. then reinstall from scratch. detailed steps below.
there are three scenarios — figure out which one applies before you do anything:
scenario a — you have not run npm/pnpm/yarn install against tanstack code since may 11 19:20 utc
you're fine. pin to the patched versions before your next install. that's it. jump to the clean-install commands in step 5 below.
scenario b — you installed during the window, on a developer workstation, and the box has had internet access since
assume credential theft is complete. proceed in the order below — every step matters and you cannot skip ahead.
scenario c — you installed during the window on a ci runner
the ci runner is almost certainly compromised. assume any secret the runner had access to (npm publish tokens, github oidc trust → cloud, deployment ssh keys, deployment service-account jsons) is stolen. you also need to check whether the runner had time to mint new publish tokens and republish your own packages.
immediate ci actions:
- disable the runner now (suspend the github actions self-hosted runner registration, or remove the runner from the org). do not let any more jobs land on it.
- check npm publish history for every package your ci ever published — look for any version published in the window of may 11 19:20 utc through "now" that your team didn't deliberately publish. if you find one, deprecate it via `npm deprecate <pkg>@<version> "compromised — see ghsa-g7cv-rxg3-hmpx"` and publish a real next version. (scriptable; see the sketch after this list.)
- rotate npm tokens at the org level — npmjs.com → settings → automation tokens → revoke all → mint fresh.
- rotate the github oidc trust on cloud providers (the `aws sts assume-role-with-web-identity` trust policy in iam, gcp workload identity, azure federated credentials) — rotate the audience / subject conditions, or shorten the trust window aggressively.
- reimage the ci runner. do not "clean" it.
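the publish-history check runs against the public registry metadata. a sketch, with the package name as a placeholder:

# last 10 publish events for a package you own; any timestamp after
# 2026-05-11T19:20:39Z that your team didn't ship is a red flag
# (the "created" and "modified" entries show up too and can be ignored)
npm view your-package time --json \
  | jq -r 'to_entries | sort_by(.value) | .[-10:][] | "\(.value)  \(.key)"'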
now for the dev workstation playbook (scenario b — most readers).
step 1 — air-gap the suspected machine. now.
- pull the ethernet cable. turn off wifi. literally physical-layer disconnect.
- do not open a browser. do not run any cli tool. do not type any command that touches a credential.
- if it's a dev workstation, leave it powered on (and leave the screen unlocked if it already is), and don't reboot. you want the state preserved.
- if it's a ci runner, suspend the vm (don't power it off — you want memory state).
the wiper file watcher is local-only on the compromised host. as long as you don't trip its triggers and don't give it new network egress, you have time.
step 2 — revoke from a different machine.
from a known-clean host (your phone, a colleague's laptop, a fresh vm), open the web consoles and revoke in this order:
- npm tokens — npmjs.com → settings → access tokens → revoke all. then check published-packages history for unauthorized publishes in the may 11 19:20 utc → now window.
- github personal access tokens + oidc trust relationships — github.com/settings/tokens → revoke everything that touched the compromised box. then settings → applications → installed github apps → review. then, for any repos with github actions oidc → aws/gcp trust policies, rotate the trust policy's assume-role conditions.
- aws — iam console → users → security credentials → deactivate, then delete access keys. if you used aws sso, sign out everywhere and rotate the sso instance refresh. check cloudtrail for the window — look for `GetSessionToken`, `AssumeRole`, `GetCallerIdentity` from unknown source ips (scriptable; see the sketch after this list).
- gcp service account keys — console.cloud.google.com → iam → service accounts → keys → delete. check the audit log for the window.
- hashicorp vault — `vault token revoke -self` is not safe to run from the compromised box. revoke from another vault-cli-equipped machine using a different operator credential. then rotate any approle secret-ids the compromised box may have unwrapped.
- kubernetes service account tokens — `kubectl delete secret -n <namespace> <token-secret>` and let the sa controller mint a new one. for projected/bound tokens, rotate the audience or restart the workloads with new sa bindings.
- ssh — assume every private key in `~/.ssh/` on the compromised box is now public. rotate every `authorized_keys` entry that listed those keys. for git hosts, drop and re-add ssh keys in github/gitlab/bitbucket.
- browser cookies and session tokens — sign out of every service that held a session cookie in the browser on the compromised box. for sso-fronted services, kill the sso session at the idp level (okta, entra, google workspace → users → reset sessions).
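the cloudtrail sweep from the aws step, as a sketch to run from the clean host. the end time is an example, and `lookup-events` only surfaces management events in the current region:

# pull AssumeRole events for the window and print time / event / source ip
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=EventName,AttributeValue=AssumeRole \
  --start-time 2026-05-11T19:20:00Z --end-time 2026-05-13T00:00:00Z \
  --output json \
  | jq -r '.Events[].CloudTrailEvent | fromjson
           | [.eventTime, .eventName, .sourceIPAddress] | @tsv'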
while doing this, expect the compromised box to be unreachable — you've air-gapped it. don't reconnect it to "double-check" something. assume nothing on it can be trusted.
step 3 — image the dirty machine before wiping.
if this is a workstation with real value to your team's forensics or a regulated environment, image the disk before you nuke it. boot from a usb-mounted live-linux iso, use dd or ddrescue to capture the entire disk to an external drive. label it, store it. you will want this later if there's any chain-of-custody or insurance claim.
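a minimal imaging sketch from the live usb. device names and mount points are examples; confirm the suspect disk with `lsblk` first, and write only to an external evidence drive:

lsblk -o NAME,SIZE,MODEL                          # identify the suspect disk
sudo dd if=/dev/sda of=/mnt/evidence/host01.img \
  bs=4M conv=noerror,sync status=progress         # raw image; prefer ddrescue for a failing disk
sha256sum /mnt/evidence/host01.img \
  > /mnt/evidence/host01.img.sha256               # record the hash for chain of custody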
if this is a ci runner, snapshot the vm from the hypervisor side (not from inside the vm). preserve it for at least 90 days.
step 4 — reimage from scratch.
do not "clean" the compromised host. don't rm -rf .claude and call it good. the wiper is one persistence mechanism — there will be others you haven't found. wipe the disk and reinstall the os.
step 5 — clean install of your project on a clean host.
# === DETECT (run on the clean host before touching anything) ===
# 1. look for the dropped payload files
find . -type f \( -name "router_init.js" -o -name "router_runtime.js" -o -name "tanstack_runner.js" \) 2>/dev/null
# 2. hash check the canonical filename
find . -name "router_init.js" -exec sha256sum {} \; 2>/dev/null | grep -i "ab4fcadaec49c03278063dd269ea5eef82d24f2124a8e15d7b90f2fa8601266c"
# 3. lockfile grep for the malicious git ref
grep -rE "tanstack/router#79ac49eedf774dd4b0cfa308722bc463cfe5885c|@tanstack/setup" . \
--include="package*.json" --include="*lock*" --include="*.yaml"
# 4. list installed tanstack versions and compare to the safe list above
npm ls --all 2>/dev/null | grep "@tanstack/"
# pnpm equivalent:
pnpm why @tanstack/react-router
# === CLEAN ===
# nuke node_modules and the lockfile
rm -rf node_modules package-lock.json pnpm-lock.yaml yarn.lock
# pin exact versions in package.json (use exact pins for tanstack packages,
# no ^ or ~ until you trust the recovery is complete)
# e.g. "@tanstack/react-router": "1.169.9"
# reinstall WITHOUT running postinstall scripts (defense in depth)
# note: this config setting persists — revert later with
# `npm config set ignore-scripts false`, or use the per-run flags below instead
npm config set ignore-scripts true
npm install
# pnpm: pnpm install --ignore-scripts
# yarn: yarn install --ignore-scripts
# clean package-manager caches (poisoned tarballs may sit in cache)
npm cache clean --force
pnpm store prune 2>/dev/null
yarn cache clean 2>/dev/null
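if a bad version reached you transitively, exact pins on your direct dependencies aren't enough. npm overrides (npm 8.3+) force the resolution for the whole tree; a sketch for one package (pnpm has the equivalent `pnpm.overrides` field in package.json):

# force every occurrence in the tree, direct or transitive, onto the patched release
jq '.overrides["@tanstack/react-router"] = "1.169.9"' package.json \
  > package.json.tmp && mv package.json.tmp package.json
npm install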
step 6 — block the exfil channel at the egress, network-wide.
# add to dns sinkhole / pi-hole / cloudflare zero trust:
# block any resolution of:
# filev2.getsession.org
# seed1.getsession.org
# seed2.getsession.org
# seed3.getsession.org
# *.getsession.org (safest — session messenger has no legit corp use case)
# or via /etc/hosts on workstations:
echo "0.0.0.0 filev2.getsession.org" | sudo tee -a /etc/hosts
echo "0.0.0.0 seed1.getsession.org" | sudo tee -a /etc/hosts
echo "0.0.0.0 seed2.getsession.org" | sudo tee -a /etc/hosts
echo "0.0.0.0 seed3.getsession.org" | sudo tee -a /etc/hosts
step 7 — audit ci pipelines for the workflow pattern that enabled this.
if you run github actions in your own repos, this is the moment to audit:
- any workflow that uses `pull_request_target` and checks out or executes code from `${{ github.event.pull_request.head.sha }}` is the exact pattern that compromised tanstack. either remove `pull_request_target` (use `pull_request`, which won't have access to secrets), or strictly gate which paths the pr fork's code is allowed to execute.
- any workflow that consumes github actions cache across the fork↔base trust boundary needs review.
- any workflow that mints npm publish tokens or cloud provider credentials from oidc inside a runner that also processes untrusted pr-fork content. the oidc token can be read from the runner's process memory if untrusted code lands on the same runner.
a quick sweep on your own org:
# clone every repo or use gh cli to list workflows touching pull_request_target
gh repo list <your-org> --limit 200 --json name,sshUrl --jq '.[].sshUrl' | while read r; do
d=$(basename "$r" .git)
git clone -q --depth 1 "$r" "/tmp/audit-$d" 2>/dev/null
grep -rln "pull_request_target" "/tmp/audit-$d/.github/workflows/" 2>/dev/null
rm -rf "/tmp/audit-$d"
done
every match is a workflow you need to read line by line.
the broader takeaways
provenance is not a substitute for tight workflow permissions. every package in this wave shipped with a valid sigstore attestation. provenance verified. it didn't help. the attestation chain says "this artifact was built in github actions ci, here is the cryptographic proof." when the attacker controls the ci, the attestation is honest about a build that the attacker controlled.
`pull_request_target` is the new ssrf. a feature that is technically documented as "this gives the pr fork's code access to base secrets" but that is misunderstood by every maintainer who hasn't actually read the github actions security model. socket and stepsecurity have been ringing this bell for two years. ringing it louder now.
self-propagation changes the math on npm supply-chain incidents. when one compromised maintainer can ricochet into every package they publish, and from there into every package those packages depend on transitively, the blast radius of a single token theft is the entire ecosystem that maintainer touches. mini shai-hulud has now hit 169+ packages this campaign from a handful of initial entry points. the next one will be worse.
the wiper is the part that changes the response playbook. every supply-chain advisory ever published tells you to "rotate credentials immediately." that advice is correct for credential theft. it is wrong for credential theft that comes packaged with a destructive payload that watches for revocation activity. air-gap first. revoke from somewhere else. image before you wipe.
what we're doing at valtik
we run an external attack-surface scan against our own infra (and our clients') that flags any tanstack package version older than the patched releases above. if you ship react/vue/solid apps with router or start, you can run the same check locally:
# quick audit across a workspace — lists every @tanstack/* dependency it finds:
find . -name package.json -not -path '*/node_modules/*' \
  -exec grep -l '"@tanstack/' {} \; | while read -r f; do
    echo "--- $f"
    jq -r '(.dependencies // {}) + (.devDependencies // {})
           | to_entries[] | select(.key | startswith("@tanstack/"))
           | "\(.key) \(.value)"' "$f"
done
cross-reference the version numbers against the safe-version column in the table above.
if you found bad versions in your tree, this post has the playbook. if you've already run npm install and your machine has been online for any meaningful amount of time since may 11 19:20 utc, follow the air-gap-first remediation order. don't revoke from the compromised box.
links:
- ghsa: https://github.com/TanStack/router/security/advisories/GHSA-g7cv-rxg3-hmpx
- socket.dev disclosure: https://socket.dev/blog/tanstack-npm-packages-compromised-mini-shai-hulud-supply-chain-attack
- stepsecurity writeup: https://www.stepsecurity.io/blog/mini-shai-hulud-is-back-a-self-spreading-supply-chain-attack-hits-the-npm-ecosystem
- aikido analysis: https://www.aikido.dev/blog/mini-shai-hulud-is-back-tanstack-compromised
- endor labs: https://www.endorlabs.com/learn/shai-hulud-compromises-the-tanstack-ecosystem-80-packages-compromised
- tanstack/router issue tracker: https://github.com/TanStack/router/issues/7383
if you need an external assessment that finds this kind of dependency exposure before it bites you — and the ci-workflow patterns that enable it — that's what we do. contact us.
