Helm Chart Secrets: Why Kubernetes Secrets Aren't Secret (And What To Do)
Kubernetes Secrets are base64-encoded, stored as plaintext in etcd by default, readable by anyone with namespace read access, checked into git as part of Helm charts, and leaked to CI/CD pipeline logs. 'Secret' is a misleading name. A practical walkthrough of what's wrong, how attackers exploit it, and the production patterns that actually protect secrets in Kubernetes.
Founder of Valtik Studios. Pentester. Based in Connecticut, serving US mid-market.
The naming is the first lie
We see this pattern show up on almost every engagement.
Kubernetes has a resource type called Secret. The name implies protection. The reality:
- Secrets are base64-encoded, not encrypted. Decoding is trivial.
- Stored plaintext in etcd by default. Anyone with etcd access has all secrets.
- Anyone with namespace read access can read them. Over-broad RBAC is common.
- Frequently committed to git. As part of Helm charts, Kustomize overlays, manifests.
- Leaked to CI/CD pipeline logs. When applied or templated.
- Exposed via kubectl describe pod including environment variable values.
- Not rotated automatically. Static forever until manual rotation.
This is well known in the Kubernetes community. The official docs even say "don't treat Secrets as secure by default." The problem is that developers reading Kubernetes tutorials often don't read the warnings, or assume that "Secret" does what the name suggests.
This post walks through the specific ways secrets leak in Kubernetes deployments, the attack patterns that exploit them, and the production patterns that work: Sealed Secrets, External Secrets Operator, SOPS, cloud-native secret integration, and more.
How Kubernetes Secrets work
The data model
A Kubernetes Secret looks like:
apiVersion: v1
kind: Secret
metadata:
  name: database-credentials
  namespace: production
type: Opaque
data:
  password: cGFzc3dvcmQxMjM=   # "password123" base64-encoded
  username: YWRtaW4=           # "admin" base64-encoded
cGFzc3dvcmQxMjM= is password123 with base64 encoding. There's no encryption. echo cGFzc3dvcmQxMjM= | base64 -d returns the original. This is the entire "security" of Secret data serialization.
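To see that no secrecy is involved, the round trip can be done in any shell with no key material at all:

```shell
# base64 is a reversible encoding: neither direction involves a key
encoded=$(printf 'password123' | base64)
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "$encoded"   # cGFzc3dvcmQxMjM=
echo "$decoded"   # password123
```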
Storage in etcd
By default, Kubernetes stores Secret data in etcd in plaintext. An attacker who accesses etcd directly (via compromised control plane node, compromised etcd backup, or exploited etcd vulnerability) gets all Secrets in cleartext.
Mitigation: Kubernetes supports encryption-at-rest for etcd. It's configurable via EncryptionConfiguration but must be explicitly enabled:
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <32-byte key base64-encoded>
      - identity: {}
Key rotation is manual and requires re-encryption. Managed Kubernetes services (EKS, GKE, AKS) increasingly enable etcd encryption by default. But self-hosted clusters frequently don't.
RBAC permissions
Read access to Secrets is controlled by RBAC. A Role or ClusterRole with get, list, or watch on secrets can read Secret contents.
Common over-permissive patterns (which we covered in our Kubernetes RBAC post):
- cluster-admin ClusterRole bound to ServiceAccounts
- list secrets permission across a namespace with multiple application secrets
- get secrets with overly broad resourceNames
- Developers accidentally given permissions they don't need
Runtime exposure
Secrets can be exposed to pods via:
- Environment variables. Accessible via kubectl describe pod or /proc/<pid>/environ to anyone with shell access
- Volume mounts. Files readable by the pod's processes
- ServiceAccount tokens. Mounted at /var/run/secrets/kubernetes.io/serviceaccount/token
Environment variables are the most common and most leaky. Once a process has environment variables, they're accessible to:
- Any process that can read /proc/<pid>/environ (same user, typically root)
- Container escape scenarios (container logs, crash dumps, debug tooling)
- Kubernetes users with permission to get pods (what kubectl describe uses)
Attack pattern 1: Secrets committed to git
The most common source of Kubernetes secret exposure. Helm charts, Kustomize overlays, and raw manifest files get committed to version control with embedded secrets.
Vulnerable patterns:
Pattern 1a: Direct secrets in Helm values
# values.yaml
database:
username: admin
password: MyRealPassword123! # committed to git
ApiKeys:
stripe: sk_live_abc123
sendgrid: SG.xyz789
The values.yaml file gets committed so that deployments are reproducible. The secrets get committed with it.
Pattern 1b: Secrets in templated manifests
# templates/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: {{ include "app.fullname" . }}
type: Opaque
stringData:
  db-password: {{ .Values.db.password }}
Template looks fine, but when rendered with a values file containing real secrets, the output gets deployed. And the values file is in git.
Pattern 1c: Encrypted-looking but base64 isn't encryption
data:
  password: cGFzc3dvcmQxMjM=
Developers see base64-encoded data, assume it's encrypted, commit to git. Anyone can decode it.
Attack:
# Search GitHub for Kubernetes Secrets with common patterns
gh search code 'kind: Secret "stringData"'
gh search code 'sk_live_ apiKeys'
gh search code 'AWS_SECRET_ACCESS_KEY stringData'
# For any match, check if the secret is real (not sample/placeholder)
The fix:
- Never commit plaintext secrets or stringData in git
- Use .gitignore to exclude values files
- Commit only values.example.yaml with placeholder values
- Scan git history for committed secrets (trufflehog, gitleaks)
- Rotate any secrets that were ever committed
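A minimal sketch of the values.example.yaml pattern, assuming the real values file is listed in .gitignore and supplied at deploy time:

```yaml
# values.example.yaml -- safe to commit; every value is a placeholder
database:
  username: CHANGE_ME
  password: CHANGE_ME
apiKeys:
  stripe: CHANGE_ME
  sendgrid: CHANGE_ME
```

The real values file stays outside git (or comes from a secret manager) and is passed with helm install -f at deploy time.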
Attack pattern 2: CI/CD pipeline exposure
Kubernetes deployments typically run through CI/CD. Secrets flow through the pipeline:
- Environment variables in CI job
- kubectl apply -f commands that include rendered manifests
- helm install --values production.yaml with values containing secrets
Common leakage patterns:
Pattern 2a: CI logs dump secrets
# In CI:
helm template my-chart --values production.yaml # prints rendered YAML to logs
# Rendered YAML includes secret values in stringData
If CI logs are accessible (some CI systems default to public logs for open-source repos), secrets leak.
Pattern 2b: Artifacts retained
CI systems often retain build artifacts. Rendered Helm charts, kubectl diff outputs, deployment manifests. All potentially containing secrets.
Pattern 2c: Pipeline variables accessible
CI pipeline variables (GitHub Secrets, GitLab CI variables) are usually visible to anyone who can modify the pipeline configuration. In organizations with broad CI modification rights, this is a wider access circle than intended.
The fix:
- Never template full manifests in CI logs. Redact sensitive output
- Use helm secrets or similar tools that inject secrets at apply time without exposing them
- Tight control of CI pipeline modification. Who can change workflows that access secrets
- Artifact retention policies. Purge build artifacts quickly
- Log redaction. CI systems should redact patterns that look like secrets
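One way to implement the redaction step is a small filter between helm template and the CI log. This is a sketch of ours, not a Helm or CI feature; it blanks every value under data:/stringData: blocks in the rendered YAML:

```shell
# Blank out values inside data:/stringData: blocks before they reach CI logs
redact_secrets() {
  awk '
    /^(data|stringData):/    { inblock=1; print; next }
    inblock && /^[^ ]/       { inblock=0 }        # block ends at next top-level key
    inblock && /^[ ]+[^ ]+:/ { sub(/:.*/, ": [REDACTED]"); print; next }
    { print }
  '
}

# In CI this would be: helm template my-chart --values production.yaml | redact_secrets
rendered=$(redact_secrets <<'EOF'
apiVersion: v1
kind: Secret
stringData:
  db-password: MyRealPassword123!
type: Opaque
EOF
)
printf '%s\n' "$rendered"
```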
Attack pattern 3: Over-privileged ServiceAccount
A pod runs with a ServiceAccount. That ServiceAccount has permissions. Common over-permission:
- A ServiceAccount that only needs to read one specific Secret has list secrets on the whole namespace
- A ServiceAccount has cluster-wide secrets read (should be namespace-scoped)
- Multiple applications sharing the same ServiceAccount (shared permissions)
Pod compromise (via RCE in the application) means attacker has the ServiceAccount's full permissions, which means all Secrets that ServiceAccount can read.
Real finding: a startup's Node.js application had a SQL injection. Exploitation gave shell as the pod's user. The pod's ServiceAccount had get, list, watch on secrets in its namespace. Attacker read every secret in the namespace. Including cloud credentials, database passwords for adjacent databases, third-party API keys. Lateral movement from a single injection bug to cloud-wide compromise took 10 minutes.
The fix:
- Per-pod ServiceAccounts with minimum-necessary permissions
- Explicit Secret name restrictions via resourceNames:
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["my-app-db-password"]
    verbs: ["get"]
- Regular review of Secret read permissions
- Disable auto-mount of the ServiceAccount token on pods that don't need to call the Kubernetes API:
spec:
  automountServiceAccountToken: false
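The resourceNames restriction only takes effect once it is bound to the pod's ServiceAccount. A complete sketch, with illustrative names:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-db-password
  namespace: production
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["my-app-db-password"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: my-app-read-db-password
  namespace: production
subjects:
  - kind: ServiceAccount
    name: my-app
    namespace: production
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: read-db-password
```

Note that resourceNames does not constrain list or watch, which is one more reason to grant get only.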
Attack pattern 4: Secrets in environment variables
Environment variables leak to unexpected places:
- kubectl describe pod shows environment variable names (but not values for Secret-sourced vars by default, though values leak with kubectl get pod -o yaml for ConfigMap-sourced ones)
- Process table inspection (ps auxe) shows environment to all users on the host
- Debug tooling, APM (application performance monitoring), and error tracking services often capture environment variables
- Stack traces in logs often include environment details
Real finding: an application used Sentry for error tracking. Sentry's default configuration included environment variables in error reports. Database credentials in environment variables got posted to Sentry. Sentry's dashboard was accessible to the entire engineering team, including contractors. Sensitive credentials ended up visible to more people than the secrets management was intended for.
The fix:
- Prefer file-based secret mounts over environment variables where possible:
volumeMounts:
  - name: secrets
    mountPath: /etc/secrets
    readOnly: true
- Application reads secrets from files (standard pattern for databases, Vault-injected secrets)
- Configure APM/error tracking to exclude environment variables from reports
- Disable kubectl describe pod for sensitive namespaces (requires custom RBAC)
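Wired together, the file-based pattern looks roughly like this (pod, image, and secret names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  automountServiceAccountToken: false   # this pod never calls the Kubernetes API
  containers:
    - name: app
      image: my-app:1.0
      volumeMounts:
        - name: secrets
          mountPath: /etc/secrets
          readOnly: true                # app reads e.g. /etc/secrets/password
  volumes:
    - name: secrets
      secret:
        secretName: database-credentials
```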
Attack pattern 5: Secret types confusion
Kubernetes has different Secret types:
- Opaque. Generic key-value
- kubernetes.io/service-account-token. ServiceAccount tokens
- kubernetes.io/tls. TLS certificates and keys
- kubernetes.io/dockerconfigjson. Docker registry credentials
- kubernetes.io/basic-auth. Username/password
Each type has specific expected fields. Type confusion (using wrong type, or using Opaque for something that should be a typed Secret) creates issues:
- Docker registry credentials in Opaque type don't work with pods expecting kubernetes.io/dockerconfigjson
- TLS certificates in Opaque don't integrate with Ingress controllers
Usually this breaks things rather than creating security issues. But the breakage leads developers to create workaround configurations that are less secure than using the proper types.
Attack pattern 6: Vault / secret manager integrations done wrong
Organizations often deploy HashiCorp Vault, AWS Secrets Manager, or similar. The integration patterns vary:
Vault Injector (sidecar)
Pods annotated with vault.hashicorp.com/agent-inject: true get a Vault agent sidecar that fetches secrets from Vault. Application reads secrets from a mounted volume.
Security considerations:
- Vault token handling. The pod needs a way to authenticate to Vault. Usually Kubernetes ServiceAccount tokens via Vault's Kubernetes auth method.
- Role mapping. Vault roles determine what secrets the pod can access. Over-broad roles = too much access.
- Sidecar attack surface. The sidecar itself can be a vulnerability (mapped in our separate Vault sidecar post).
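The injector is driven entirely by pod annotations. A minimal set, with an illustrative Vault role and secret path:

```yaml
metadata:
  annotations:
    vault.hashicorp.com/agent-inject: "true"
    vault.hashicorp.com/role: "my-app"   # Vault role; keep its policy narrow
    vault.hashicorp.com/agent-inject-secret-db-creds: "database/creds/my-app"
```

The agent renders the secret into a shared volume (by default under /vault/secrets/) that the application container reads as a file.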
External Secrets Operator (ESO)
ESO pulls secrets from external stores (AWS Secrets Manager, GCP Secret Manager, Azure Key Vault, Vault, etc.) and creates Kubernetes Secrets from them. Cluster still has Kubernetes Secrets, but they're synchronized from external store.
Security considerations:
- Cluster Secret still exists. Same RBAC caveats apply
- ESO itself has permissions to read from external stores (need to secure ESO's credentials)
- Rotation: when external store updates, ESO re-syncs
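An ExternalSecret that syncs one key from AWS Secrets Manager into a Kubernetes Secret might look like this (store name and remote key are illustrative):

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials
  namespace: production
spec:
  refreshInterval: 1h            # re-sync cadence from the external store
  secretStoreRef:
    name: aws-secrets-manager    # a ClusterSecretStore configured separately
    kind: ClusterSecretStore
  target:
    name: database-credentials   # the Kubernetes Secret ESO creates
  data:
    - secretKey: password
      remoteRef:
        key: prod/db
        property: password
```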
SOPS (Secrets OPerationS)
Encrypts Kubernetes manifests using KMS (AWS KMS, GCP KMS, Azure Key Vault, PGP, age). Encrypted manifests can be safely committed to git. At apply time, SOPS decrypts.
Security considerations:
- Requires KMS access. If KMS is breached, SOPS is breached
- Decryption happens at apply time. Whoever applies has KMS access
- Good pattern for GitOps workflows with encrypted git history
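A .sops.yaml sketch that encrypts only the secret-bearing fields, keeping the rest of the manifest diffable (the KMS ARN is a placeholder):

```yaml
# .sops.yaml -- committed alongside the manifests it governs
creation_rules:
  - path_regex: .*\.secret\.yaml$
    encrypted_regex: ^(data|stringData)$   # encrypt only Secret payload fields
    kms: arn:aws:kms:us-east-1:123456789012:key/EXAMPLE
```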
Cloud-native alternatives
AWS EKS has Pod Identity and IRSA (IAM Roles for Service Accounts) that let pods authenticate directly to AWS services without storing credentials in Kubernetes Secrets at all. Similar for GKE Workload Identity and AKS Managed Identities.
Benefit: credentials never exist as Kubernetes Secrets. Short-lived, automatically rotated, cloud IAM handles everything.
Recommended pattern for cloud-native Kubernetes deployments.
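With IRSA, the only cluster-side artifact is an annotation on the ServiceAccount; the role ARN below is illustrative:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app
  namespace: production
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/my-app-role
```

Pods running under this ServiceAccount receive short-lived AWS credentials via a projected token; no access key ever exists as a Kubernetes Secret.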
Attack pattern 7: Secret rotation gaps
Secrets that should rotate but don't:
- Database master passwords from 2020 still in use
- API keys that should rotate on team member changes. Not rotated
- Kubernetes ServiceAccount tokens (before Kubernetes 1.24 auto-rotation) retained indefinitely
- Certificate rotations skipped
The fix:
- Automated rotation where supported (AWS Secrets Manager + RDS, Vault dynamic secrets)
- Rotation schedule for static secrets (quarterly minimum)
- Rotation on events. Staff changes, suspected compromise
- Monitoring for stale secrets. Age metrics on secret creation times
Attack pattern 8: Helm hooks and lifecycle secrets
Helm has hooks. Operations that run at specific lifecycle events (pre-install, post-install, pre-delete, etc.). Hooks often involve one-time secrets (initial admin password, database bootstrapping credentials).
Common issue: one-time secrets for bootstrap that aren't deleted after bootstrap. An initial admin password for a web application gets set via Helm hook, is intended to be changed by the admin on first login. But often isn't. And the original value is in the Helm chart forever.
The fix:
- Clean up bootstrap secrets after use (Helm pre-delete hooks, or manual cleanup procedures)
- Document lifecycle secrets clearly
- Automate post-bootstrap cleanup where possible
Attack pattern 9: Dumping secrets via kubectl
Anyone with get, list, watch secrets in a namespace can dump all secret values:
# Simple enumeration
kubectl get secrets -n production -o yaml
# Extract all secret data
kubectl get secrets -n production -o json \
  | jq -r '.items[] | .data // {} | to_entries[] | "\(.key): \(.value | @base64d)"'   # decode every value (jq 1.6+)
If secrets are on etcd in cleartext and RBAC allows broad read access, this is the main exfiltration path.
The fix:
- RBAC limits on secrets access
- Audit logging of secret accesses
- Alerting on anomalous bulk secret retrieval
Attack pattern 10: Kubernetes dashboard / web UIs exposing secrets
Kubernetes dashboards (official Kubernetes Dashboard, Lens, Headlamp, Rancher UI, OpenShift Console) let users view Secret contents. If access to these dashboards is broader than intended, secrets are broadly visible.
Real finding: an organization's Rancher UI was accessible via VPN to the entire engineering team. The UI showed Secret contents by default. Any engineer could browse to any namespace and see database passwords, API keys, etc. The access was intended as "operational visibility" but included far more data than intended.
The fix:
- Dashboard access restricted to operations team
- RBAC enforced through dashboard (doesn't always work well. Many dashboards have their own auth layers)
- Consider disabling secret value display in dashboards
The hardening stack
For a production Kubernetes deployment handling meaningful secrets, the recommended stack:
Tier 1: table-stakes
- etcd encryption at rest enabled
- RBAC with least-privilege access to secrets
- No secrets committed to git or leaked to CI logs
- Secret rotation on a documented schedule
Tier 2: significant improvement
- Workload identity (IRSA, GKE Workload Identity, AKS Managed Identity) instead of Kubernetes Secrets for cloud service access
- External Secrets Operator or Vault for secrets that can't use workload identity
- SOPS for secrets that must be in git (ConfigMaps with sensitive config, rare legitimate cases)
- Secret access audit logging
Tier 3: high-assurance
- Short-lived dynamic secrets via Vault (credentials generated per-request, expire quickly)
- HSM-backed encryption keys for etcd and Vault
- Dedicated secrets management team overseeing rotation and access
- Pen-testing that includes secret exfiltration attempts
Migration guide: getting from git-committed secrets to a proper solution
Step 1: audit current state
# Find all Secret manifests in git
grep -r "kind: Secret" --include="*.yaml" --include="*.yml" .
# Find hardcoded secrets in values files
grep -r "password:\|api_key:\|secret:\|token:" --include="*.yaml" .
# Scan git history for leaked secrets
gitleaks detect --source .
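During the audit it helps to decode what is actually inside any Secret manifests the grep turns up. A small helper sketch (the function name is ours, not a standard tool):

```shell
# Decode the data: entries of a Secret manifest so reviewers see the real values
decode_secret_data() {
  awk '/^data:/ { inblock=1; next }
       inblock && /^[^ ]/ { inblock=0 }
       inblock && /^[ ]+[^ ]+:/ { print $1, $2 }' "$1" |
  while read -r key val; do
    printf '%s %s\n' "$key" "$(printf '%s' "$val" | base64 -d)"
  done
}

cat > /tmp/audit-example.yaml <<'EOF'
apiVersion: v1
kind: Secret
data:
  password: cGFzc3dvcmQxMjM=
EOF

found=$(decode_secret_data /tmp/audit-example.yaml)
echo "$found"   # password: password123
```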
Step 2: choose strategy
Decision tree:
- Cloud services (AWS/GCP/Azure APIs): use workload identity (IRSA, etc.). No Kubernetes Secrets needed.
- External third-party services (Stripe, Twilio, etc.): use External Secrets Operator with cloud secret manager backend.
- Database credentials: use dynamic secrets via Vault or IAM database authentication where supported.
- Webhook signing secrets, JWT secrets: ESO with cloud secret manager.
- TLS certificates: cert-manager with proper issuer configuration, not manually-managed Secrets.
Step 3: migrate gradually
- One namespace or one application at a time
- Keep existing secrets during migration, remove after new system is proven
- Test thoroughly
- Document the new pattern
Step 4: clean up
- Remove secrets from git history (via BFG Repo-Cleaner or git filter-branch)
- Rotate all secrets that were ever committed (assume compromised)
- Purge old CI artifacts with secrets
Step 5: prevent regression
- Pre-commit hooks that reject commits with secret patterns
- CI checks for Secret manifests in git
- Documentation and training
- Regular audit
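A pre-commit configuration using the public gitleaks hook is one low-effort way to enforce this (pin rev to whatever release you have vetted):

```yaml
# .pre-commit-config.yaml
repos:
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.18.0
    hooks:
      - id: gitleaks   # blocks commits containing likely secret patterns
```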
For different organization sizes
Small teams / startups
- Cloud-native where possible (IRSA, GKE Workload Identity)
- GitHub Secrets / GitLab CI/CD Variables for CI-level secrets
- Start without Vault. Complexity overhead usually not justified until specific requirements emerge
- Avoid committing any real secrets to git from day one
Mid-size companies
- External Secrets Operator with cloud secret manager (AWS Secrets Manager, GCP Secret Manager)
- Consider Vault when you have multiple non-cloud secret types
- Centralized secrets-access team reviewing rotation and access
Large enterprises
- HashiCorp Vault enterprise with dynamic secrets
- HSM-backed key management
- Formal secrets lifecycle management
- Regular auditing and pen-testing
- Integration with enterprise IAM
For Valtik clients
Valtik's Kubernetes security audits include secrets management review:
- Inventory of all Secret resources across namespaces
- RBAC audit for secret access
- Git repository scanning for committed secrets
- CI/CD pipeline review for secret leakage
- External secret manager integration review
- Workload identity configuration review
- Rotation policy review
If you run Kubernetes in production and haven't explicitly audited your secrets handling against these patterns, reach out via https://valtikstudios.com.
The honest summary
Kubernetes Secrets are a useful primitive with misleading naming. They're not secure by default. The hardening requires explicit attention and increasingly requires abandoning Kubernetes Secrets for workloads where they're replaceable with cloud-native alternatives.
The patterns in this post appear in every Kubernetes deployment we've audited. The remediation is tractable but requires commitment. Treat secrets with the rigor their sensitivity warrants.
Want us to check your Kubernetes setup?
Our scanner detects this exact misconfiguration, plus dozens more across 38 platforms. Free website check available, no commitment required.
