Valtik Studios
Container Security · Updated 2026-04-17 · 30 min read

Container Security 2026: The Complete Guide from Image Build to Runtime

Container security has failure modes that don't exist in traditional infrastructure. This is the complete 2026 container security guide. Six failure mode categories. Image security from build to runtime. Registry security. Runtime protection (Falco, Tetragon, commercial). Integration with Kubernetes cluster security. Specific production attack patterns. 10 fastest wins.

Tre Trebucchi · Founder, Valtik Studios. Penetration tester based in Connecticut, serving the US mid-market.

Container security has specific failure modes that don't exist elsewhere

I've run security engagements on infrastructure that didn't use containers, infrastructure that used them sparingly, and infrastructure where containers are the only deployment mechanism. The failure modes are different in each. The traditional "did you patch the OS" question barely applies to a fleet of ephemeral containers. The "is the firewall blocking inbound" question is meaningless when thousands of containers communicate pod-to-pod through a CNI. New failure modes emerge that didn't exist in the VM era: privileged container escape, image supply chain poisoning, registry compromise, and shells spawned inside running containers with nothing watching for them.

This is the part of security that's evolving fastest in 2026. The threat model is different from both traditional infrastructure and from Kubernetes at the cluster level. Container security sits between them, and the defense stack has to address both.

This post is the complete container security guide. What a container actually is. How containers fail. Image security from build to runtime. Registry security. Runtime protection. How this differs from Kubernetes cluster security. And what engagements we run when clients ask "is our container stack secure?"

Who this is for

  • Platform engineering leads at companies with production container fleets
  • Security engineers tasked with containers as part of cloud workload protection
  • DevSecOps practitioners integrating container security into pipelines
  • Compliance officers where containers hold regulated data

Pairs with our Kubernetes Security Complete Hardening Guide. This post focuses specifically on containers and container images. The K8s post covers orchestration.

What a container actually is

Before the failure modes, the abstraction.

A container is a Linux process running in isolated namespaces (PID, network, mount, UTS, IPC, user) with resource limits (cgroups) and a restricted filesystem view (often using overlay filesystems). Not a VM. Not a process on bare Linux. Something in between.

The isolation boundary is specifically:

  • Separate process tree (can't see host processes)
  • Separate network stack (unless hostNetwork)
  • Separate filesystem view (unless hostPath mount)
  • Separate user IDs (with user namespaces)
  • Resource limits (CPU, memory, I/O)

Important: the kernel is shared. The container uses the host kernel, so a single kernel exploit can break the isolation boundary for every container on that host.

Also important: default Docker / Podman / containerd configurations aren't equivalent. Podman runs rootless by default. Docker historically didn't.

The failure modes

Six distinct classes.

1. Container escape via privileged configuration

A container configured with:

  • --privileged (full host capabilities)
  • Host namespace access (--pid=host, --net=host, --ipc=host)
  • Writable host mount (-v /:/host:rw)
  • Docker socket access (-v /var/run/docker.sock:/var/run/docker.sock)

...can break out to host-level access trivially.

Defense: reject these configurations at admission time. Pod Security Standards (Kubernetes) or equivalent policy enforcement.

2. Container escape via kernel exploit

The shared kernel is the attack surface. CVEs in container runtimes (runc, containerd) or Linux kernel features (cgroups, namespaces) have historically enabled container-to-host escape.

Examples:

  • CVE-2019-5736 (runc)
  • CVE-2022-0185 (Linux kernel, cgroups)
  • CVE-2022-0492 (Linux kernel, cgroups release_agent)
  • CVE-2024-0193 (Linux kernel, netfilter)

Defense: patch aggressively. Runtime monitoring for escape indicators. User namespace enforcement.

3. Image supply chain compromise

The image you pulled has been tampered with:

  • Malicious base image
  • Compromised dependency installed during build
  • Malicious typosquatted layer
  • Compromised official image (has happened)

Defense: image signing (Cosign / Sigstore). SBOM analysis. Base image allowlist. Build reproducibility.

4. Runtime malicious behavior

A legitimate application container starts behaving maliciously due to:

  • Compromised dependency that runs malicious code at runtime
  • RCE in the application
  • Insider deployment of malicious image

Defense: runtime security monitoring (Falco, Tetragon, commercial equivalents). Behavior baselines.

5. Secret exposure in images

Credentials baked into images:

  • Hardcoded in Dockerfile
  • Set via ENV (the value persists in the image config)
  • Left in intermediate layers during build
  • In .git directories that got copied

Defense: secrets scanning in CI. Runtime secret injection (see our secrets management guide).

6. Registry compromise

The registry that hosts your images is compromised:

  • Attacker replaces images
  • Attacker harvests pull credentials
  • Public registry exposed (see our Docker Registry Security post)

Defense: registry access controls. Pull credential rotation. Image digest pinning (not just tags).

Image security

Base image selection

The foundation of every container is a base image. Choices matter.

Distroless (Google): absolute minimum. No shell, no package manager, no curl.

  • Pros: minimum attack surface, smallest footprint
  • Cons: hard to debug, not all software works

Minimal OS (Alpine, Chainguard Wolfi): small footprint, has shell.

  • Pros: debuggable, small, often rebuilt
  • Cons: Alpine uses musl libc, which has compatibility edge cases

Standard Linux (Ubuntu, Debian, RHEL UBI): full distro.

  • Pros: familiar, broad software support
  • Cons: larger attack surface

Vendor images (Bitnami, official Docker images): opinionated.

  • Pros: preconfigured for specific software
  • Cons: trust the vendor's build process

Recommendation: distroless or Chainguard Wolfi for production. Standard Linux for build environments. Never :latest tag.

Dockerfile best practices

Non-negotiable:

  • Multi-stage builds (build with full toolchain, copy only needed artifacts to final image)
  • Specific base image versions (not :latest)
  • No secrets in layers (use build args + multi-stage)
  • USER directive to run as non-root
  • Minimal packages (only what's needed)
  • Remove package manager cache in same layer as install
  • Pin dependency versions
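These practices combine into a pattern like the following multi-stage sketch. A Go service is assumed; image names and tags are illustrative, not a pinned recommendation:

```dockerfile
# --- Build stage: full toolchain, specific base version (not :latest) ---
FROM golang:1.22-bookworm AS build
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY . .
# Static binary so the distroless final stage needs no libc
RUN CGO_ENABLED=0 go build -o /out/app ./cmd/app

# --- Final stage: distroless, non-root, only the built artifact ---
FROM gcr.io/distroless/static-debian12:nonroot
COPY --from=build /out/app /app
USER nonroot
ENTRYPOINT ["/app"]
```

The toolchain, source tree, and package caches never reach the final image, so there are no intermediate layers to leak secrets from and far less surface to scan.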

Image scanning

Every image scanned at multiple points:

  • In CI on build (block critical CVEs)
  • In registry continuously (new CVE discovered after image built)
  • Before deployment (admission-time check)
  • At runtime (Wiz, Lacework, Defender for Containers)

Tools:

  • Trivy (free, excellent)
  • Grype (free)
  • Snyk Container (commercial)
  • Docker Scout
  • Cloud-native (ECR scanning, ACR scanning, Artifact Registry scanning)
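As a sketch of the CI gate using Trivy's GitHub Action — the job layout and image name are assumptions, and the same idea ports to any CI system:

```yaml
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: docker build -t myapp:${{ github.sha }} .
      - name: Fail the build on critical CVEs
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: myapp:${{ github.sha }}
          severity: CRITICAL
          exit-code: "1"        # non-zero exit blocks the merge
          ignore-unfixed: true  # skip CVEs with no fix available yet
```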

Image signing

Not universally deployed but becoming standard.

  • Sigstore / Cosign: sign images at build time, verify at admission time
  • Notation (Notary Project, formerly Notary v2): similar model
  • Content Trust / DCT: legacy Docker approach

Why it matters: signed images prove the image came from your build system. An attacker who modified an image can't produce a valid signature without stealing your signing key.

Admission policy enforces signature presence.
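A sketch of the build-side half, assuming a key pair generated with cosign generate-key-pair and CI secrets named COSIGN_PRIVATE_KEY / COSIGN_PASSWORD (all illustrative):

```yaml
# Illustrative CI steps (GitHub Actions syntax); registry and names are assumptions
- name: Push image and capture its digest
  id: push
  run: |
    docker push myregistry.example.com/myapp:1.2.3
    echo "digest=$(docker inspect --format='{{index .RepoDigests 0}}' \
      myregistry.example.com/myapp:1.2.3)" >> "$GITHUB_OUTPUT"
- name: Sign the digest, not the mutable tag
  env:
    COSIGN_PRIVATE_KEY: ${{ secrets.COSIGN_PRIVATE_KEY }}
    COSIGN_PASSWORD: ${{ secrets.COSIGN_PASSWORD }}
  run: cosign sign --yes --key env://COSIGN_PRIVATE_KEY "${{ steps.push.outputs.digest }}"
```

Signing the digest rather than the tag means the signature stays valid only for that exact image content.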

SBOM (Software Bill of Materials)

For every image, the list of components inside.

Formats:

  • SPDX
  • CycloneDX

Generated at build time with tools like Syft. Stored alongside image.

Use cases:

  • Quick vulnerability impact analysis ("which of our images contain vulnerable log4j?")
  • License compliance
  • Regulatory (emerging requirement)
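A sketch of build-time generation with Syft, optionally attached to the image as a Cosign attestation. Image references, file names, and the digest placeholder are assumptions:

```yaml
# Illustrative CI steps; <digest> is a placeholder for the pushed image digest
- name: Generate SBOM with Syft (SPDX JSON)
  run: syft myregistry.example.com/myapp:1.2.3 -o spdx-json > sbom.spdx.json
- name: Attach the SBOM to the image as an attestation
  env:
    COSIGN_PRIVATE_KEY: ${{ secrets.COSIGN_PRIVATE_KEY }}
  run: |
    cosign attest --yes --key env://COSIGN_PRIVATE_KEY \
      --type spdxjson --predicate sbom.spdx.json \
      myregistry.example.com/myapp@sha256:<digest>
```

Attaching the SBOM to the image keeps the inventory queryable from the registry itself when the next log4j-style question arrives.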

Registry security

Your registry holds the images that define production. It's critical infrastructure.

Private vs. public registries

Never pull production images from Docker Hub directly:

  • Rate limits (since 2020)
  • Public image compromise (has happened)
  • Namespace squatting

Mirror images to your private registry. Pull from there.

Registry options

Cloud-native:

  • AWS ECR (IAM-integrated, private by default)
  • Azure Container Registry (AAD-integrated)
  • Google Artifact Registry (IAM-integrated)

Self-hosted:

  • Harbor (most featured, open source)
  • Docker Registry / Distribution (basic)
  • Nexus Repository (broader, includes non-container)
  • JFrog Artifactory (enterprise)

SaaS:

  • Docker Hub (free public, paid private)
  • GitHub Container Registry
  • GitLab Container Registry

Access control

  • Authentication required (no anonymous pull for private images)
  • Authorization per repository
  • Pull credentials rotated regularly
  • Service account identities for CI/CD pulls (not static keys)
  • Audit logging

Immutable tags

Tags should be immutable. Don't allow overwriting v1.2.3 with new content. Use digests for cross-environment reference:

docker pull myregistry/myapp@sha256:a3b8c9d2...

vs. the mutable tag reference:

docker pull myregistry/myapp:v1.2.3

Digest pinning prevents silent image substitution.

Runtime security

Detecting + preventing bad behavior at runtime.

Admission-time policy

Before a container starts, validate it:

  • Image comes from approved registry
  • Image is signed
  • Image is scanned (no critical CVEs)
  • Configuration doesn't request dangerous privileges
  • Resource limits set
  • Non-root user

Enforced via Kubernetes admission controllers (Pod Security Standards, Kyverno, Gatekeeper) or equivalent at the runtime layer.

Runtime behavioral detection

Once a container runs, detect suspicious activity:

  • Unexpected shell execution
  • File writes to system directories
  • Network connections to suspicious destinations
  • Kernel exploitation attempts
  • Privilege escalation attempts

Tools:

  • Falco (eBPF-based, rules-based, open source)
  • Tetragon (eBPF-based, Cilium project)
  • Wiz Runtime (commercial)
  • Lacework (commercial)
  • Sysdig Secure (commercial)
  • AWS GuardDuty Runtime Protection (cloud-native)
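As a sketch of the rules-based approach, a simplified Falco rule flagging interactive shells inside containers. It leans on the spawned_process and container macros from Falco's default ruleset; production deployments tune the shell list and add exceptions for legitimate debugging:

```yaml
- rule: Interactive shell in container
  desc: A shell with a TTY started inside a running container
  condition: >
    spawned_process and container
    and proc.name in (bash, sh, zsh, ash)
    and proc.tty != 0
  output: >
    Shell spawned in container (user=%user.name container=%container.name
    image=%container.image.repository command=%proc.cmdline)
  priority: WARNING
```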

Runtime isolation enhancement

Default container isolation is the shared-kernel baseline. For higher assurance:

  • gVisor — Google's user-space kernel that intercepts syscalls. Reduces kernel attack surface.
  • Kata Containers — lightweight VMs that look like containers. Hardware virtualization isolation.
  • Firecracker — AWS microVM for Lambda and Fargate. Even more isolated.

These trade performance for stronger isolation. For sensitive workloads (multi-tenant SaaS, high-value targets), worth the cost.

Drift detection

Containers are supposed to be immutable. If files change at runtime:

  • Binary replacement (malware)
  • Configuration tampering
  • Unexpected persistence mechanism

Runtime file integrity monitoring catches this. Falco + Tetragon rules. Commercial runtime tools.

Container + Kubernetes defense integration

Where container security overlaps + differs from Kubernetes cluster security.

Pod Security Standards enforce container configurations

Kubernetes Pod Security Standards (baseline / restricted profiles) prevent dangerous container configurations at admission:

  • No privileged: true
  • No hostPath mounts to sensitive paths
  • No hostNetwork, hostPID, hostIPC
  • Must run as non-root
  • Must disallow privilege escalation, drop all capabilities, and set a seccomp profile

This is container security enforced at the K8s layer.
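Enforcement is a namespace label, not a separate policy object. A sketch for an illustrative namespace:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: payments   # illustrative namespace
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted
```

Setting audit and warn alongside enforce gives a dry-run signal on workloads that would be blocked, which helps when rolling the restricted profile onto existing namespaces.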

Network policies limit container-to-container communication

NetworkPolicies (Kubernetes) isolate containers from each other. Default-deny + explicit allow limits lateral movement.
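A minimal default-deny covering both directions, with the namespace name illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: payments   # illustrative
spec:
  podSelector: {}       # matches every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
```

With this in place, each workload needs explicit allow rules for the traffic it legitimately uses.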

Image signing enforced at admission

Kyverno or Gatekeeper policies verify image signatures before allowing container creation.
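A sketch of the Kyverno variant, with the registry pattern and the public key placeholder as assumptions:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-image-signatures
spec:
  validationFailureAction: Enforce
  webhookTimeoutSeconds: 30
  rules:
    - name: verify-cosign-signature
      match:
        any:
          - resources:
              kinds:
                - Pod
      verifyImages:
        - imageReferences:
            - "myregistry.example.com/*"   # illustrative registry
          attestors:
            - entries:
                - keys:
                    publicKeys: |-
                      -----BEGIN PUBLIC KEY-----
                      <your cosign public key>
                      -----END PUBLIC KEY-----
```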

Runtime detection on the node

Falco / Tetragon / commercial runtime tools monitor container activity across the cluster. Alerts on suspicious patterns.

Specific production attack patterns

The cryptojacking container

Attacker deploys a container that mines cryptocurrency. Consumes resources. Bills accumulate. Discovery usually comes from cost anomaly, not security detection.

Defense:

  • Resource limits + quotas
  • Cost monitoring with anomaly detection
  • Runtime detection of mining processes
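Resource limits are per-container settings in the pod spec. A sketch, image name illustrative — limits cap how much a hijacked container can mine with:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: myregistry.example.com/myapp:1.2.3   # illustrative
      resources:
        requests:
          cpu: 250m        # scheduler reservation
          memory: 256Mi
        limits:
          cpu: "1"         # throttled above one core
          memory: 512Mi    # OOM-killed above this
```

Pair per-pod limits with a namespace ResourceQuota so an attacker can't simply create more pods.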

The reverse shell container

Attacker with ability to create containers deploys a container with a reverse shell back to their C2. Uses the container as a pivot into the cluster network.

Defense:

  • Egress network policies
  • Runtime detection of outbound connections
  • Image source allowlist
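An egress policy that allows only cluster DNS and one internal dependency — everything else, including a reverse shell to an external C2, is dropped. Label selectors and the namespace are assumptions:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-egress
  namespace: payments          # illustrative
spec:
  podSelector:
    matchLabels:
      app: myapp               # illustrative label
  policyTypes:
    - Egress
  egress:
    - to:                      # cluster DNS only
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
    - to:                      # one approved internal dependency
        - podSelector:
            matchLabels:
              app: payments-api
```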

The host-filesystem-mounting container

Attacker with RBAC that permits pod creation deploys a pod with hostPath: / mounted. Reads every file on the node including kubelet credentials.

Defense:

  • Pod Security Standards: restricted profile
  • Admission policy rejecting hostPath mounts
  • Minimize pod-create RBAC permissions

The credential-harvesting init container

Attacker deploys a pod with an innocuous init container that reads all mounted secrets and exfiltrates.

Defense:

  • Workload identity (no secrets mounted to pods)
  • Runtime exfiltration detection
  • Secrets policy limiting what pods can access

Regulatory compliance considerations

Containers and regulatory frameworks.

PCI DSS 4.0

Container environments handling cardholder data face:

  • Requirement 6.4.3 (script integrity) applies to containerized web apps
  • Requirement 6.5.x (secure development) includes container build pipelines
  • Requirement 11.4 (pentest) must cover container infrastructure

HIPAA

Containers processing ePHI:

  • Encryption at rest + in transit
  • Audit logging (complicated by ephemeral container logs)
  • Access controls via Kubernetes RBAC + runtime identity

SOC 2

Container security controls map to:

  • CC6.1 (logical access) — via Kubernetes RBAC + workload identity
  • CC6.6 (vulnerability management) — container image scanning
  • CC6.8 (detection) — runtime detection tools

FedRAMP

Container environments for federal workloads:

  • Hardened base images (typically RHEL UBI or similar)
  • FIPS-compliant cryptography
  • Boundary scanning + monitoring

Tools and licenses

Expected annual spending for mid-market container security:

  • Open source stack (Trivy + Syft + Falco + Cosign): $0 licensing, significant operational cost
  • Wiz Container Security: $100K-$500K/year depending on scale
  • Lacework: $80K-$400K/year
  • Palo Alto Prisma Cloud: $100K-$600K/year
  • Sysdig Secure: $80K-$350K/year
  • Snyk Container: $30K-$150K/year (less runtime-focused)
  • Aqua Security: $80K-$300K/year

Most mid-market organizations run an open source scanning + signing stack plus one commercial runtime tool.

The container security assessment

On a container security engagement, our checklist:

  • Image build pipeline review (Dockerfile, base images, multi-stage)
  • SBOM coverage
  • Image scanning integration in CI
  • Image signing implementation
  • Registry configuration
  • Admission controls in Kubernetes
  • Pod Security Standards enforcement
  • NetworkPolicy coverage
  • Workload identity vs. static credentials
  • Runtime detection deployment
  • Log aggregation + retention
  • Incident response integration

Output: gap analysis + prioritized remediation roadmap.

The 10 fastest wins

For a team starting from baseline:

  1. Replace :latest tags with specific versions everywhere
  2. Pod Security Standards set to restricted on every workload namespace
  3. NetworkPolicy default-deny in every namespace
  4. Trivy in CI blocking critical CVEs
  5. Base image migration to distroless or Chainguard
  6. Image signing with Cosign for production pipelines
  7. Runtime security (Falco at minimum) deployed
  8. SBOM generation for every image
  9. Private registry with authentication, immutable tags, digest pinning
  10. Workload identity replacing static cloud credentials

Working with us

We run container security engagements as part of cloud + Kubernetes work. Our typical assessment includes:

  • Container build pipeline security review
  • Registry configuration audit
  • Runtime security assessment
  • Integration with overall Kubernetes + cloud posture
  • Compliance alignment (PCI, HIPAA, SOC 2 as applicable)

Pairs with full Kubernetes audits for organizations running production on K8s.

Valtik Studios, valtikstudios.com.

Tags: container security, docker security, image scanning, image signing, runtime security, falco, sbom, complete guide

Want us to check your container security setup?

Our scanner detects these exact misconfigurations, plus dozens more across 38 platforms. A free website check is available, no commitment required.

Get new research in your inbox
No spam. No newsletter filler. Only new posts as they publish.