Valtik Studios
Public Company · high · 2026-02-09 · 10 min read

The 10 Kubernetes RBAC Misconfigurations We Find on Every Cluster Audit

Kubernetes RBAC is the primary access-control mechanism for every production cluster, and it's misconfigured on every single cluster we've audited. Below: the ten patterns we find every time, the exploitation paths each enables, and the hardening rules that stop them.

Tre Trebucchi · Founder, Valtik Studios. Penetration Tester

Founder of Valtik Studios. Pentester. Based in Connecticut, serving US mid-market.

Why Kubernetes RBAC is always broken

Every Kubernetes cluster audit starts the same way. We get read-only access to the cluster, enumerate ClusterRoles, Roles, RoleBindings, and ClusterRoleBindings. And within the first hour we've found at least three ServiceAccounts that can trivially escalate to cluster-admin if the pod they're bound to gets compromised.

This isn't because Kubernetes administrators are bad at their jobs. It's because Kubernetes RBAC was designed with a different threat model than most production workloads face. The defaults are permissive. The tooling generates verb lists broader than what's needed. The examples in documentation encourage over-privileging. And the day-to-day operational pressure is always "give the service account what it needs to work", which becomes "give it everything" when debugging gets hard.

This post walks through the ten RBAC misconfigurations we find most frequently on production Kubernetes clusters, what each one means in practice, and the specific hardening patterns that eliminate them.

If you run Kubernetes in production, at least half of these apply to you. Most of our clients find eight or more on their first audit.

1. Wildcard verbs on core resources

The finding: a Role or ClusterRole grants * (all verbs) on pods, secrets, configmaps, deployments, or other core resources.

# BAD
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["*"]

Why it's dangerous: * includes verbs you almost never need, like deletecollection (delete everything at once), proxy (tunnel traffic through the API server), and patch (subtle in-place modifications that are easy to miss in review).

The fix: explicitly enumerate only the verbs needed. For most workloads:

# GOOD
rules:
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["my-app-specific-secret"]
  verbs: ["get"]

Note the resourceNames constraint. If a ServiceAccount only needs one specific secret, scope it to that secret rather than all secrets in the namespace. Keep in mind that resourceNames cannot restrict list or watch requests, so pair it with get.

How we audit: search for any ClusterRole or Role with verbs: ["*"] and list the ServiceAccounts bound to them. Every one is a finding.
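
That search is easy to script offline against a saved `kubectl get clusterroles -o json` dump. A minimal Python sketch (the function name and sample data are ours, not a standard tool):

```python
def find_wildcard_roles(role_list):
    """Names of Roles/ClusterRoles with a wildcard verb in any rule."""
    findings = []
    for item in role_list.get("items", []):
        for rule in item.get("rules") or []:
            if "*" in rule.get("verbs", []):
                findings.append(item["metadata"]["name"])
                break
    return findings

# Example shaped like the BAD/GOOD rules above
sample = {"items": [
    {"metadata": {"name": "too-broad"},
     "rules": [{"apiGroups": [""], "resources": ["secrets"],
                "verbs": ["*"]}]},
    {"metadata": {"name": "scoped"},
     "rules": [{"apiGroups": [""], "resources": ["secrets"],
                "verbs": ["get", "list"]}]},
]}
print(find_wildcard_roles(sample))  # ['too-broad']
```

Run it against the whole dump; every name it prints needs a bound-subject review.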

2. cluster-admin bound to ServiceAccounts

The finding: a ServiceAccount has the cluster-admin ClusterRole bound to it, usually via a ClusterRoleBinding.

# BAD. Seen in about 40% of audited clusters
kind: ClusterRoleBinding
metadata:
  name: my-app-deployer
roleRef:
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: my-app
  namespace: default

Why it's dangerous: any pod running under that ServiceAccount now has cluster-admin privileges. A single container escape or application-layer RCE becomes full cluster takeover.

This pattern typically originates from development. Someone got tired of debugging permission errors and gave the ServiceAccount cluster-admin "temporarily." Temporarily is forever in most shops.

The fix: revoke cluster-admin bindings to ServiceAccounts. Replace with purpose-specific Roles. Use RBAC audit logging to find what verbs the ServiceAccount uses in production and generate a minimal role from that activity.

How we audit:

kubectl get clusterrolebindings -o json \
  | jq '.items[] | select(.roleRef.name == "cluster-admin") | .subjects'

Every ServiceAccount in the output is a finding.
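
For offline review of a saved `kubectl get clusterrolebindings -o json` dump, the same query takes a few lines of Python (names are ours):

```python
def cluster_admin_service_accounts(crb_list):
    """ServiceAccount subjects bound to cluster-admin via ClusterRoleBindings."""
    findings = []
    for crb in crb_list.get("items", []):
        if crb.get("roleRef", {}).get("name") != "cluster-admin":
            continue
        for subj in crb.get("subjects") or []:
            if subj.get("kind") == "ServiceAccount":
                findings.append((subj.get("namespace"), subj["name"]))
    return findings

# Example shaped like the BAD binding above
sample = {"items": [{
    "metadata": {"name": "my-app-deployer"},
    "roleRef": {"kind": "ClusterRole", "name": "cluster-admin"},
    "subjects": [{"kind": "ServiceAccount", "name": "my-app",
                  "namespace": "default"}],
}]}
print(cluster_admin_service_accounts(sample))  # [('default', 'my-app')]
```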

3. system:masters group membership

The finding: a user certificate or service identity is bound into the system:masters group.

Why it's dangerous: system:masters is hardcoded in the Kubernetes API server to bypass RBAC entirely. If you can obtain a certificate with that group, you have full admin access that isn't revocable via normal RBAC cleanup. You have to rotate the cluster CA or explicitly block the specific certificate.

This typically appears when someone copies the cluster admin kubeconfig to a shared location or CI/CD runner without realizing it carries system:masters.

The fix: generate separate kubeconfig files for each use case with specific, limited Groups. Never use the built-in cluster admin kubeconfig for anything other than break-glass scenarios.

How we audit: look for certificates issued with O=system:masters in the subject. Check kubeconfig files on CI/CD runners and shared systems.

4. Overly broad namespace access via list/watch on secrets

The finding: a Role grants list or watch on secrets in a namespace, giving the ServiceAccount the ability to read every secret in the namespace.

Why it's dangerous: list on secrets returns the full secret contents, not just the names. A pod with list secrets in a shared namespace can read every other application's credentials in that namespace: database passwords, API keys, JWT signing secrets.

A particularly common finding in Helm-chart-deployed applications that request broad read access "because the chart might need to discover secrets."

The fix: grant only get with explicit resourceNames:

rules:
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["app-database-password", "app-jwt-secret"]
  verbs: ["get"]

If you truly need to discover secrets dynamically, isolate the application to its own namespace where only its secrets live.

How we audit: enumerate all Roles and ClusterRoles with list or watch on secrets. Cross-reference with ServiceAccounts to find which pods have that access. Each is a finding.
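
This cross-reference is scriptable too. A minimal Python sketch over a saved role dump (function name and sample data are ours); it flags list/watch (or wildcard) grants on secrets:

```python
def roles_listing_secrets(role_list):
    """Roles/ClusterRoles granting list or watch (or wildcard) on secrets."""
    findings = []
    for item in role_list.get("items", []):
        for rule in item.get("rules") or []:
            resources = rule.get("resources", [])
            verbs = set(rule.get("verbs", []))
            if (("secrets" in resources or "*" in resources)
                    and verbs & {"list", "watch", "*"}):
                findings.append(item["metadata"]["name"])
                break
    return findings

sample = {"items": [
    {"metadata": {"name": "secret-reader"},
     "rules": [{"apiGroups": [""], "resources": ["secrets"],
                "verbs": ["list", "watch"]}]},   # finding
    {"metadata": {"name": "scoped-get"},
     "rules": [{"apiGroups": [""], "resources": ["secrets"],
                "verbs": ["get"]}]},             # fine
]}
print(roles_listing_secrets(sample))  # ['secret-reader']
```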

5. escalate and bind verbs on RBAC resources

The finding: a ServiceAccount has escalate or bind verbs on roles or clusterroles.

Why it's dangerous: these verbs allow a ServiceAccount to create new Roles with privileges higher than its own, or bind existing Roles to other identities. It's the RBAC equivalent of "sudo without a password." A pod with these permissions can escalate to cluster-admin even if it doesn't start with admin privileges.

Common in CI/CD pipelines where the deploy service account needs to create RoleBindings for new applications. The over-permission is usually "we gave it escalate so it could handle any new role it needs to create."

The fix: remove escalate and bind unless absolutely required. If required, scope them with resourceNames to specific roles the service needs to manipulate.

How we audit:

kubectl get clusterroles,roles --all-namespaces -o json \
  | jq '.items[] | select(.rules[]? | .verbs[]? | test("^(escalate|bind)$")) | .metadata.name'

6. ServiceAccount token auto-mounted in pods that don't need it

The finding: pods run with the default ServiceAccount token auto-mounted, even though the pod doesn't call the Kubernetes API.

Why it's dangerous: an attacker who achieves RCE in the container can read /var/run/secrets/kubernetes.io/serviceaccount/token and use it to talk to the Kubernetes API. Even if the ServiceAccount has minimal permissions, the token is an authentication credential that can probe cluster state (kubectl auth can-i --list) and identify escalation paths.
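
To see why the mounted token matters: its payload is just base64-encoded JSON, and anyone holding it can read the identity it asserts. A sketch with a fabricated token (the claims are illustrative, not a real token):

```python
import base64
import json

def jwt_claims(token):
    """Decode the payload of a JWT without verifying its signature."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Fabricated token shaped like a projected ServiceAccount token (illustrative)
claims = {"sub": "system:serviceaccount:default:my-app",
          "kubernetes.io": {"namespace": "default"}}
fake_token = "e30." + base64.urlsafe_b64encode(
    json.dumps(claims).encode()).decode().rstrip("=") + ".sig"
print(jwt_claims(fake_token)["sub"])  # system:serviceaccount:default:my-app
```

The claims tell an attacker exactly which ServiceAccount and namespace they now speak for; the token itself is what they present to the API server.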

The fix: set automountServiceAccountToken: false on pods that don't need to call the Kubernetes API:

spec:
  automountServiceAccountToken: false

Or at the ServiceAccount level:

kind: ServiceAccount
metadata:
  name: my-app
automountServiceAccountToken: false

Most application pods don't need to call the Kubernetes API at all. The only common exceptions are sidecars (service mesh, logging agents) and controllers/operators.

How we audit: check every deployment spec for automountServiceAccountToken. Default is true, which is almost always wrong for application workloads.
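
That deployment-spec check can be scripted against `kubectl get deployments --all-namespaces -o json` output. A minimal sketch (names are ours):

```python
def automounting_deployments(deploy_list):
    """Deployments whose pod template leaves the SA token auto-mounted
    (the field defaults to true when absent)."""
    findings = []
    for dep in deploy_list.get("items", []):
        pod_spec = dep.get("spec", {}).get("template", {}).get("spec", {})
        if pod_spec.get("automountServiceAccountToken") is not False:
            meta = dep.get("metadata", {})
            findings.append((meta.get("namespace"), meta.get("name")))
    return findings

sample = {"items": [
    {"metadata": {"namespace": "default", "name": "web"},
     "spec": {"template": {"spec": {}}}},                 # defaults to true
    {"metadata": {"namespace": "default", "name": "batch"},
     "spec": {"template": {"spec":
              {"automountServiceAccountToken": False}}}},  # opted out
]}
print(automounting_deployments(sample))  # [('default', 'web')]
```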

7. PodSecurityPolicy / Pod Security Admission bypass via use verb

The finding: ServiceAccounts have use verb on podsecuritypolicies/privileged (legacy) or are labeled to use the privileged Pod Security Admission profile.

Why it's dangerous: privileged pods can:

  • Run as root
  • Mount the host filesystem
  • Access host devices
  • Use privileged capabilities (SYS_ADMIN, SYS_PTRACE)
  • Share the host network namespace
  • Bind to privileged ports

Any of these give a container escape a much easier path to the host. Host access from a container escape means full node compromise. And from there, in most cluster configurations, kubelet credential extraction → cluster compromise.

The fix: use Pod Security Admission's restricted profile by default for all namespaces. Reserve privileged for specific infrastructure namespaces (kube-system, logging, monitoring) with strict RBAC controls on who can deploy there.

How we audit: enumerate namespaces with pod-security.kubernetes.io/enforce: privileged or baseline. Every non-infra namespace with these should be upgraded to restricted.

8. impersonate verb granted broadly

The finding: a ServiceAccount has the impersonate verb on users, groups, or service accounts.

Why it's dangerous: impersonate allows authentication as another identity. An impersonate users permission lets the ServiceAccount act as any user, including admins. This is an RBAC backdoor.

Common in admission webhooks, audit tools, and monitoring systems where the developer needed the tool to query resources "as the user who triggered it."

The fix: if you need impersonation, scope it tightly with resourceNames and never grant it on the catch-all users/groups resources. Most use cases can be re-architected to avoid impersonation entirely.
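
How we audit: a scripted check in the same offline style as the earlier jq queries, run against a saved `kubectl get clusterroles,roles --all-namespaces -o json` dump (function name and sample data are ours). Grants where resourceNames is absent are unscoped, and those are the findings:

```python
def impersonation_grants(role_list):
    """(role name, resourceNames) pairs for every rule granting impersonate.
    A resourceNames of None means the grant is unscoped: a finding."""
    findings = []
    for item in role_list.get("items", []):
        for rule in item.get("rules") or []:
            if "impersonate" in rule.get("verbs", []):
                findings.append((item["metadata"]["name"],
                                 rule.get("resourceNames")))
    return findings

sample = {"items": [
    {"metadata": {"name": "support-tool"},
     "rules": [{"apiGroups": [""], "resources": ["users"],
                "verbs": ["impersonate"]}]},           # unscoped: bad
    {"metadata": {"name": "scoped-impersonator"},
     "rules": [{"apiGroups": [""], "resources": ["users"],
                "verbs": ["impersonate"],
                "resourceNames": ["readonly-bot"]}]},  # tightly scoped
]}
print(impersonation_grants(sample))
```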

9. Aggregated ClusterRoles silently expanding privileges

The finding: custom ClusterRoles use aggregationRule to inherit rules from other ClusterRoles via label matching.

Why it's dangerous: aggregated ClusterRoles automatically gain permissions when a new ClusterRole is added to the cluster with the matching aggregation label. A seemingly-scoped role today can inherit unexpected privileges tomorrow when someone installs a new operator or Helm chart.

Example from real audits: a custom developer role had aggregation labels that matched against every installed operator. Installing the Prometheus operator added full access to Prometheus resources. Installing cert-manager added cluster-wide certificate management. Nobody updated the developer role. It grew capabilities silently.

The fix: avoid aggregationRule for custom roles. Explicitly enumerate rules. If you must aggregate, use highly-specific label selectors that won't accidentally match third-party operator roles.

How we audit:

kubectl get clusterroles -o json \
  | jq '.items[] | select(.aggregationRule) | {name: .metadata.name, aggregationRule: .aggregationRule}'

Review each result. Any catch-all label selector is a finding.
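
To see how the silent absorption happens, here is a toy matcher in Python; the label key rbac.example.com/aggregate-to-developer and the role names are hypothetical:

```python
def selector_matches(selector, labels):
    """True if an aggregationRule clusterRoleSelector matches a role's labels.
    An empty matchLabels matches everything: the catch-all case."""
    return all(labels.get(k) == v
               for k, v in selector.get("matchLabels", {}).items())

def absorbed_by(aggregated_role, candidate_labels):
    """Would a new ClusterRole with these labels be absorbed?"""
    selectors = (aggregated_role.get("aggregationRule", {})
                 .get("clusterRoleSelectors", []))
    return any(selector_matches(s, candidate_labels) for s in selectors)

# Hypothetical developer role aggregating on a project-specific label
dev_role = {"aggregationRule": {"clusterRoleSelectors": [
    {"matchLabels": {"rbac.example.com/aggregate-to-developer": "true"}}]}}

# A newly installed operator ships a role carrying the same label...
operator_labels = {"rbac.example.com/aggregate-to-developer": "true",
                   "app.kubernetes.io/name": "prometheus-operator"}
print(absorbed_by(dev_role, operator_labels))  # True: silent expansion
```

The empty-matchLabels case is why any catch-all selector in the jq output above is an automatic finding.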

10. API server audit logging disabled or wrong

The finding: the API server isn't configured with comprehensive audit logging, or audit logs are written to local disk without being shipped off-node.

Why it's dangerous: without audit logs, you can't:

  • Detect privilege escalation attempts
  • Forensically reconstruct an incident
  • Demonstrate compliance (SOC 2, ISO 27001, PCI DSS)
  • Investigate suspicious ServiceAccount activity

This isn't strictly an RBAC misconfiguration, but it's the one that makes every other RBAC failure invisible.

The fix: configure API server audit policy with at least:

  • RequestResponse logging for sensitive resources (secrets, RBAC resources, CertificateSigningRequests)
  • Request logging for all other resources
  • Ship audit logs to a central SIEM (Splunk, Datadog, Elastic) with long retention
  • Alert on specific RBAC patterns: create on clusterrolebindings, any activity against system:masters, any escalate verb usage

How we audit: check API server startup flags for --audit-policy-file and --audit-log-path. Verify the policy covers sensitive resources. Confirm logs are shipped off the cluster.
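
The alert patterns above can be expressed as a small detector over audit-event JSON. A sketch assuming events follow the audit.k8s.io Event schema (verb, user.groups, objectRef.resource); note that escalate is an authorization-time verb rather than an audit verb, so we approximate it by flagging writes to RBAC role objects:

```python
RBAC_WRITE_VERBS = {"create", "update", "patch"}

def should_alert(event):
    """Flag audit events matching the RBAC alert patterns described above."""
    ref = event.get("objectRef") or {}
    user = event.get("user") or {}
    # create on clusterrolebindings
    if event.get("verb") == "create" and ref.get("resource") == "clusterrolebindings":
        return True
    # any activity under system:masters
    if "system:masters" in (user.get("groups") or []):
        return True
    # approximation of escalate usage: writes to Roles/ClusterRoles
    if (event.get("verb") in RBAC_WRITE_VERBS
            and ref.get("resource") in ("roles", "clusterroles")):
        return True
    return False

suspicious = {"verb": "create",
              "objectRef": {"resource": "clusterrolebindings"},
              "user": {"username": "system:serviceaccount:default:my-app"}}
print(should_alert(suspicious))  # True
```

In practice these rules live in your SIEM's query language; the point is that each alert condition is a one-line predicate over fields the audit log already records.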

Putting it together: a hardening checklist

If you're trying to audit your own cluster, work through this checklist in order:

RBAC resource enumeration (30 minutes)

# All ClusterRoleBindings to cluster-admin
kubectl get clusterrolebindings -o json \
  | jq '.items[] | select(.roleRef.name == "cluster-admin") | {name: .metadata.name, subjects: .subjects}'

# ServiceAccounts with cluster-admin
kubectl get clusterrolebindings -o json \
  | jq '.items[] | select(.roleRef.name == "cluster-admin") | .subjects[] | select(.kind == "ServiceAccount")'

# All ClusterRoles with wildcard verbs
kubectl get clusterroles -o json \
  | jq '.items[] | select(.rules[]?.verbs[]? == "*") | .metadata.name'

# All roles granting list/watch on secrets
kubectl get clusterroles,roles --all-namespaces -o json \
  | jq '.items[] | select(.rules[]? | (.resources[]? == "secrets") and (.verbs[]? | test("^(list|watch)$"))) | {name: .metadata.name, namespace: .metadata.namespace}'

# Roles with escalate/bind/impersonate
kubectl get clusterroles,roles --all-namespaces -o json \
  | jq '.items[] | select(.rules[]?.verbs[]? | test("^(escalate|bind|impersonate)$")) | {name: .metadata.name, namespace: .metadata.namespace}'

Pod security posture (15 minutes)

# Namespaces missing pod security admission labels

kubectl get namespaces -o json \

| jq '.items[] | select(.metadata.labels["pod-security.kubernetes.io/enforce"] == null) |.metadata.name'

# Privileged pods (excluding kube-system and known-infra namespaces)

kubectl get pods --all-namespaces -o json \

| jq '.items[] | select(.spec.containers[]?.securityContext?.privileged == true) | {ns:.metadata.namespace, name:.metadata.name}'

ServiceAccount token auto-mount (10 minutes)

# Deployments that auto-mount the SA token (default true)

kubectl get deployments --all-namespaces -o json \

| jq '.items[] | select(.spec.template.spec.automountServiceAccountToken!= false) | {ns:.metadata.namespace, name:.metadata.name}'

Compare findings to the top-10 list

Each finding from the enumeration above maps to one of the top 10. Prioritize fixes based on blast radius:

  • Tier 1 (fix immediately): cluster-admin on ServiceAccounts, system:masters groups, wildcard verbs on secrets, escalate/bind verbs
  • Tier 2 (fix this sprint): list/watch on secrets, privileged pods, impersonate verbs, aggregated ClusterRoles
  • Tier 3 (fix this quarter): auto-mounted tokens on non-API pods, audit logging

Tooling that helps

The manual enumeration above works but doesn't scale to large clusters or multi-cluster environments. Tools we recommend:

  • [rbac-lookup](https://github.com/FairwindsOps/rbac-lookup). CLI for answering "what does this user/SA have access to" questions
  • [kubectl-who-can](https://github.com/aquasecurity/kubectl-who-can). Inverse: "who can do X?"
  • [Polaris](https://github.com/FairwindsOps/polaris). Opinionated cluster policy checker including RBAC findings
  • [kube-bench](https://github.com/aquasecurity/kube-bench). CIS Kubernetes Benchmark checker
  • [KICS](https://kics.io/) / [Checkov](https://www.checkov.io/) / [tfsec](https://aquasecurity.github.io/tfsec/). IaC scanners that catch RBAC misconfigs in your manifests before they hit the cluster
  • [Pixie](https://px.dev/) or commercial runtime-security tools. Detect anomalous RBAC activity in production

What this means for Valtik clients

Every Kubernetes environment we've audited has multiple entries from this list. The question isn't whether your cluster has these misconfigurations. It's how many.

Valtik's Kubernetes security audits include the enumeration above plus container image scanning, network policy review, admission controller configuration, secrets management review, and kubelet/kube-proxy configuration. If you're running Kubernetes in production and haven't had an independent RBAC audit in the last six months, you have at least three of these ten, and probably more.

If you're responsible for platform security at an organization running Kubernetes at scale, reach out via https://valtikstudios.com. Audit pricing scales with cluster count and service mesh complexity, and we can usually produce a findings-and-recommendations report within two weeks of kickoff.

Sources

  1. [Kubernetes RBAC Documentation. Kubernetes.io](https://kubernetes.io/docs/reference/access-authn-authz/rbac/)
  2. [CIS Kubernetes Benchmark](https://www.cisecurity.org/benchmark/kubernetes)
  3. [Kubernetes Pod Security Admission](https://kubernetes.io/docs/concepts/security/pod-security-admission/)
  4. [OWASP Kubernetes Top 10](https://owasp.org/www-project-kubernetes-top-ten/)
  5. [Red Hat Kubernetes Security Best Practices](https://www.redhat.com/en/topics/containers/kubernetes-security)
  6. [NSA / CISA Kubernetes Hardening Guide](https://www.cisa.gov/news-events/cybersecurity-advisories/aa22-216a)
  7. [Rory McCune. Cloud Native RBAC Security](https://raesene.github.io/)
  8. [Aqua Security Kubernetes RBAC Analysis](https://www.aquasec.com/cloud-native-academy/kubernetes-in-production/kubernetes-rbac-best-practices/)
  9. [FairwindsOps rbac-lookup](https://github.com/FairwindsOps/rbac-lookup)
  10. [Kubernetes Audit Log Reference](https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/)
