Nx Supply Chain Attack Abuses AI Tools to Steal Developer Credentials

Supply Chain Attack Overview
The maintainers of the Nx build system have warned users about a supply chain attack that injected malicious code into several Nx packages and auxiliary plugins on npm. The compromised packages were capable of scanning file systems, collecting credentials, and exfiltrating data to attacker-controlled GitHub repositories.
Nx is an open-source, AI-first build platform with over 3.5 million weekly downloads, widely used in large codebases.
Compromised Packages and Versions
The attack, which occurred on August 26, 2025, affected the following packages:
Nx core packages:
- nx: 20.9.0–21.8.0
- @nx/devkit: 20.9.0, 21.5.0
- @nx/enterprise-cloud: 3.2.0
- @nx/eslint: 21.5.0
- @nx/js: 20.9.0, 21.5.0
- @nx/key: 3.2.0
- @nx/node: 20.9.0, 21.5.0
- @nx/workspace: 20.9.0, 21.5.0
The malicious versions have been removed from npm, but users who installed them must assume compromise.
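A quick way to see whether a repository has any of these packages installed is to read its lockfile directly. Below is a minimal sketch in Python, assuming an npm v7+ package-lock.json in the repository root; compare the printed versions against the advisory list above.

```python
# Minimal sketch: list installed versions of the Nx packages named in the
# advisory by reading package-lock.json. Paths and the package set are
# assumptions; adjust them for your repository and package manager.
import json
from pathlib import Path

AFFECTED = {
    "nx", "@nx/devkit", "@nx/enterprise-cloud", "@nx/eslint",
    "@nx/js", "@nx/key", "@nx/node", "@nx/workspace",
}

lockfile = Path("package-lock.json")  # run from the repository root
data = json.loads(lockfile.read_text())

# npm v7+ lockfiles list every installed package under "packages",
# keyed by its node_modules path (the root project is keyed by "").
for path, meta in data.get("packages", {}).items():
    name = meta.get("name") or path.rsplit("node_modules/", 1)[-1]
    if name in AFFECTED:
        print(f"{name} {meta.get('version')}  ({path})")
```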
How the Attack Worked
The attack leveraged a vulnerable GitHub Actions workflow added on August 21, 2025, which allowed command injection via pull request titles (the injection pattern is sketched after the list below).
Key points:
- The pull_request_target trigger ran workflows with elevated privileges, including a GITHUB_TOKEN with read/write access.
- Malicious PRs targeted outdated branches that still contained the vulnerable workflow.
- The attack triggered the publish.yml workflow, publishing malicious Nx versions and exfiltrating the npm token to an attacker-controlled webhook.
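The core flaw is a familiar one: untrusted input (the pull request title) was interpolated into a shell command inside the workflow. The following is a minimal Python analogue of that injection pattern, not the actual workflow code (the real vector was a GitHub Actions run step); the title string is invented for illustration.

```python
# Conceptual sketch of command injection via an attacker-controlled string.
# The real attack interpolated the pull request title into a shell command in
# a GitHub Actions workflow; this analogue shows why that pattern is unsafe.
import subprocess

pr_title = 'fix: build"; echo INJECTED-COMMAND-RAN; echo "'  # attacker-controlled

# Unsafe: the title is spliced into a shell string, so the quotes break out
# and the injected command executes.
subprocess.run(f'echo "PR title: {pr_title}"', shell=True)

# Safer: pass the value as an argument, never through the shell.
subprocess.run(["echo", f"PR title: {pr_title}"])
```

The fix is the same in either setting: never splice untrusted text into a shell string; pass it as data (an argument or an environment variable) instead.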
The rogue packages contained a postinstall script that:
- Scanned systems for text files, credentials, and .gitconfig files
- Sent Base64-encoded data to attacker-controlled GitHub repositories named s1ngularity-repository (a decoding sketch follows this list)
- Modified .zshrc and .bashrc to include sudo shutdown -h 0, prompting users for their system passwords
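Since the exfiltrated data was Base64-encoded, reviewing what actually leaked means decoding it. A minimal sketch, assuming a blob copied out of one of the attacker-created repositories into a local file (the file name is hypothetical), with repeated decoding in case the payload was encoded more than once:

```python
# Minimal sketch: repeatedly Base64-decode a blob copied from a
# "s1ngularity-repository" repo to inspect what was exfiltrated.
# The input file name is hypothetical; adjust it to what you actually find.
import base64
import binascii
from pathlib import Path

blob = Path("exfiltrated.b64").read_text().strip()

# Peel off Base64 layers until the content no longer decodes cleanly.
for layer in range(1, 6):
    compact = "".join(blob.split())  # drop newlines/whitespace between chunks
    try:
        decoded = base64.b64decode(compact, validate=True)
    except (binascii.Error, ValueError):
        break
    try:
        blob = decoded.decode("utf-8").strip()
    except UnicodeDecodeError:
        print(f"layer {layer}: binary content, stopping")
        break
    print(f"layer {layer} decoded to {len(blob)} characters")

print(blob[:500])  # review the recovered plaintext; it may contain live secrets
```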
AI Tools Abused in the Attack
Researchers observed that the malware leveraged AI CLI tools installed on developer systems, including:
- Claude Code, Google Gemini CLI, Amazon Q CLI
By using dangerous flags (--dangerously-skip-permissions, --yolo, --trust-all-tools), the attackers were able to steal filesystem contents and enumerate secrets through trusted AI tools, marking the first known case of such abuse.
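One quick local check is whether any of these CLIs are installed at all. A minimal Python sketch follows; the binary names are assumptions, while the flags are the ones reported above.

```python
# Minimal sketch: detect whether any of the AI CLIs reported as abused are
# installed locally. Binary names are assumptions; the permission-bypass flags
# are those named in the article. The real malware (a JavaScript postinstall
# script) invoked such tools with these flags to enumerate files and secrets.
import shutil

AI_CLIS = {
    "claude": "--dangerously-skip-permissions",  # Claude Code
    "gemini": "--yolo",                          # Google Gemini CLI
    "q": "--trust-all-tools",                    # Amazon Q CLI
}

for binary, bypass_flag in AI_CLIS.items():
    path = shutil.which(binary)
    if path:
        print(f"{binary} found at {path}; audit any scripted use of {bypass_flag}")
    else:
        print(f"{binary} not found")
```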
Scope of the Impact
- Over 1,346 repositories named s1ngularity-repository have been detected.
- 2,349 distinct secrets were exposed, mostly GitHub OAuth keys and personal access tokens, followed by cloud service credentials (Google AI, OpenAI, AWS, Anthropic Claude, PostgreSQL, Datadog).
- 33% of infected systems had at least one LLM client installed; 85% ran macOS.
A second wave, observed by Wiz on August 28, 2025, affected 190+ users and organizations and 3,000 repositories, with attackers turning private repositories public and renaming them to match the s1ngularity-repository-#5letters# pattern.
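A concrete exposure check is to search your own account or organization for repositories matching those names. A minimal sketch against the GitHub search API, using only the standard library (the environment variable names are assumptions):

```python
# Minimal sketch: search your own GitHub account or org for repositories
# created by the malware ("s1ngularity-repository", plus the suffixed
# second-wave variants). Environment variable names are assumptions.
import json
import os
import urllib.parse
import urllib.request

user = os.environ["GITHUB_USER"]    # your GitHub username or organization
token = os.environ["GITHUB_TOKEN"]  # a freshly rotated token with repo read access

query = urllib.parse.quote(f"s1ngularity-repository in:name user:{user}")
req = urllib.request.Request(
    f"https://api.github.com/search/repositories?q={query}",
    headers={
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.github+json",
    },
)
with urllib.request.urlopen(req) as resp:
    results = json.load(resp)

for repo in results.get("items", []):
    visibility = "private" if repo["private"] else "PUBLIC"
    print(repo["full_name"], repo["html_url"], visibility)
```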
Recommended Mitigation
- Immediately stop using compromised Nx packages
- Rotate GitHub and npm credentials and tokens
- Inspect .zshrc and .bashrc for unauthorized entries and remove them (a quick check is sketched after this list)
- Treat local AI coding assistants as privileged automation: restrict file and network access, review their output frequently, and avoid running them with dangerous flags
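For the shell startup files specifically, here is a minimal sketch that flags the shutdown line the malware is reported to append (the string and file paths are taken from the behavior described above):

```python
# Minimal sketch: check shell startup files for the shutdown command the
# compromised packages appended. Any hit should be removed, and the rest of
# the file reviewed by hand for other unauthorized entries.
from pathlib import Path

SUSPICIOUS = "sudo shutdown -h 0"

for rc in (Path.home() / ".zshrc", Path.home() / ".bashrc"):
    if not rc.exists():
        continue
    for lineno, line in enumerate(rc.read_text().splitlines(), start=1):
        if SUSPICIOUS in line:
            print(f"{rc}: line {lineno}: {line.strip()}")
```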
Nx maintainers have responded by rotating credentials, auditing repositories, and requiring 2FA for publishing access.
Source: thehackernews.com