Cloud Security
Kubernetes
DevSecOps
Container Security

Top 10 Kubernetes Security Misconfigurations (With Fix Commands)

SecureCodeReviews Team
January 25, 2025
16 min read

The State of Kubernetes Security

Kubernetes powers over 60% of containerized workloads in production. Yet a 2024 Red Hat survey found that 67% of organizations delayed or slowed deployment due to security concerns, and 46% experienced a security incident related to containers or Kubernetes.

The root cause? Misconfiguration, not zero-days. The vast majority of Kubernetes security incidents stem from default configurations that are designed for convenience, not security.


#1: Running Containers as Root

By default, containers run as whatever user the image declares, and most images default to root (UID 0). If an attacker escapes such a container, they land on the node as root.

❌ Vulnerable Pod

apiVersion: v1
kind: Pod
metadata:
  name: web-app
spec:
  containers:
  - name: app
    image: myapp:latest
    # No securityContext = runs as root

✅ Fixed Pod

apiVersion: v1
kind: Pod
metadata:
  name: web-app
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
  containers:
  - name: app
    image: myapp:latest
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]

Check Command

# Find pods whose containers don't set runAsNonRoot
# (checks container-level settings only; also review spec.securityContext)
kubectl get pods --all-namespaces -o json | \
  jq -r '.items[] | select(.spec.containers[].securityContext.runAsNonRoot != true) | .metadata.name'

#2: No Network Policies (Flat Network)

By default, every pod can talk to every other pod in a Kubernetes cluster. If one pod is compromised, an attacker can reach databases, internal services, and the API server.

❌ Default: No Network Policies

# Check for network policies
kubectl get networkpolicies --all-namespaces
# If empty, every pod can communicate with every other pod

✅ Fixed: Default-Deny + Allow Rules

# Default deny all ingress/egress
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
---
# Allow only what's needed
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-to-api
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api-server
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: web-frontend
    ports:
    - port: 8080
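
Note that NetworkPolicy objects are only enforced when the cluster's CNI plugin supports them (Calico and Cilium do; classic flannel does not). One way to sanity-check enforcement, assuming a Service named api-server fronts the labeled pods and using a throwaway curl pod:

# From an unlabeled pod, the API service should now be unreachable
kubectl run netpol-test -n production --rm -it --restart=Never \
  --image=curlimages/curl -- curl -m 3 http://api-server.production.svc:8080
# Expect a timeout: the test pod matches neither the web-frontend
# label nor any egress allow rule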

#3: Secrets Stored in Plain Text

Kubernetes Secrets are merely base64-encoded, not encrypted: by default they sit in etcd as reversibly encoded text that anyone with etcd or API access can read.

❌ False Sense of Security

# "Secret" is just base64
kubectl get secret db-credentials -o jsonpath='{.data.password}' | base64 -d
# Output: MyPlainTextPassword123

✅ Fix: Enable Encryption at Rest

# /etc/kubernetes/encryption-config.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
  - secrets
  providers:
  - aescbc:
      keys:
      - name: key1
        secret: <base64-encoded-32-byte-key>
  - identity: {}
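
The config only takes effect once kube-apiserver is restarted with --encryption-provider-config pointing at the file, and secrets written before that stay unencrypted until rewritten. A verification sketch (paths and the etcdctl invocation vary by cluster):

# 1. Add to the kube-apiserver flags:
#    --encryption-provider-config=/etc/kubernetes/encryption-config.yaml

# 2. Rewrite existing secrets so they get encrypted
kubectl get secrets --all-namespaces -o json | kubectl replace -f -

# 3. Read one back straight from etcd; the stored value should
#    start with the k8s:enc:aescbc:v1: prefix
ETCDCTL_API=3 etcdctl get /registry/secrets/default/db-credentials | hexdump -C | head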

Better yet, use external secret managers like HashiCorp Vault, AWS Secrets Manager, or the External Secrets Operator.
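
As an illustration, an External Secrets Operator manifest might look like this (hypothetical names throughout; assumes the operator is installed and a SecretStore called vault-backend is already configured):

apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials
  namespace: production
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: vault-backend         # assumed SecretStore name
    kind: SecretStore
  target:
    name: db-credentials        # Kubernetes Secret to create
  data:
  - secretKey: password
    remoteRef:
      key: secret/data/db       # example path in Vault
      property: password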


#4: RBAC Too Permissive (cluster-admin Everywhere)

# Find overprivileged service accounts  
kubectl get clusterrolebindings -o json | \
  jq '.items[] | select(.roleRef.name=="cluster-admin") | .subjects[]'

If you see application service accounts bound to cluster-admin, that's a critical finding.

✅ Fix: Least-Privilege RBAC

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: production
  name: app-role
rules:
- apiGroups: [""]
  resources: ["pods", "services"]
  verbs: ["get", "list"]        # Read-only
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get"]                 # Specific configmaps only
  resourceNames: ["app-config"]  # Named resources
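
A Role grants nothing until it's bound. The matching RoleBinding ties it to the application's service account (the service account name here is an assumed example, not from the Role above):

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-role-binding
  namespace: production
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: app-role
subjects:
- kind: ServiceAccount
  name: app-service-account     # assumed name
  namespace: production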

#5: No Resource Limits (Resource Bomb DoS)

Without limits, a single pod can consume all CPU/memory on a node.

✅ Fix: Set Limits + LimitRange

apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: production
spec:
  limits:
  - default:
      memory: "512Mi"
      cpu: "500m"
    defaultRequest:
      memory: "256Mi"
      cpu: "100m"
    type: Container
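
A rough audit for containers missing limits (container-level only; it doesn't account for defaults injected by a namespace LimitRange):

Check Command

kubectl get pods --all-namespaces -o json | \
  jq -r '.items[] | select(any(.spec.containers[]; .resources.limits == null)) | "\(.metadata.namespace)/\(.metadata.name)"'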

#6: Using the :latest Image Tag

# ❌ Never do this in production
image: myapp:latest

# ✅ Pin to specific digest
image: myapp@sha256:abc123def456...
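
A rough cluster-wide check for explicit :latest tags (note that an image with no tag at all also defaults to latest and won't be caught by this grep):

kubectl get pods --all-namespaces \
  -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}' \
  | grep ':latest'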

#7: Dashboard Exposed Without Authentication

The Kubernetes Dashboard, if exposed, gives full cluster control.

# Check if dashboard is exposed
kubectl get svc -n kubernetes-dashboard
kubectl get ingress -n kubernetes-dashboard
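
✅ Fix: Remove Public Exposure

There is no safe way to run the dashboard without authentication in front of it. A minimal sketch, assuming it was exposed via a LoadBalancer Service:

# Make the dashboard Service cluster-internal again
kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard \
  -p '{"spec": {"type": "ClusterIP"}}'

# Reach it through an authenticated kubectl proxy session instead
kubectl proxy
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

Also delete any Ingress pointing at the dashboard, and require token or OIDC login for its users.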

#8: API Server Publicly Accessible

# Check API server access
kubectl cluster-info
# If the endpoint is a public IP, restrict it

✅ Fix: Restrict API Server Access

# EKS: Restrict to private endpoint
aws eks update-cluster-config \
  --name my-cluster \
  --resources-vpc-config endpointPublicAccess=false,endpointPrivateAccess=true
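
On GKE, a comparable restriction (flag names from current gcloud releases; verify against your version) limits control-plane access to known CIDRs:

# GKE: limit API server access to authorized networks
gcloud container clusters update my-cluster \
  --enable-master-authorized-networks \
  --master-authorized-networks 203.0.113.0/24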

#9: No Pod Security Standards

PodSecurityPolicy was deprecated in 1.21 and removed in 1.25. Use its replacement, Pod Security Admission, instead.

✅ Fix: Enforce Pod Security Standards

apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted
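
Before enforcing, a server-side dry run previews which existing pods would violate the restricted profile:

kubectl label --dry-run=server --overwrite ns production \
  pod-security.kubernetes.io/enforce=restricted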

#10: No Audit Logging

Without audit logs, you won't know when someone accesses sensitive resources.

✅ Fix: Enable Audit Policy

apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse
  resources:
  - group: ""
    resources: ["secrets"]
- level: Metadata
  resources:
  - group: ""
    resources: ["pods", "services"]
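
The policy file does nothing on its own; kube-apiserver must be started with flags pointing at it (paths here are examples):

# kube-apiserver flags
--audit-policy-file=/etc/kubernetes/audit-policy.yaml
--audit-log-path=/var/log/kubernetes/audit.log
--audit-log-maxage=30        # days of logs to retain
--audit-log-maxbackup=10     # rotated files to keep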

Quick Audit Checklist

#    Check                        Severity
1    Containers running as root   Critical
2    No network policies          Critical
3    Unencrypted secrets          High
4    Overprivileged RBAC          Critical
5    No resource limits           Medium
6    Latest image tags            Medium
7    Dashboard exposed            Critical
8    Public API server            Critical
9    No pod security standards    High
10   No audit logging             High

Need a Kubernetes Security Audit?

We review Kubernetes manifests, Helm charts, and cluster configurations. Request a free sample review →


Published by the SecureCodeReviews.com team — trusted by DevOps teams securing production Kubernetes clusters.
