Kubernetes control plane: What it is and how to secure it

Wiz Experts Team
Key takeaways
  • The Kubernetes control plane orchestrates cluster operations through core components that manage workloads, scheduling, and state consistency

  • Control plane components require specific security hardening to prevent unauthorized access and protect cluster integrity

  • High availability configurations with multiple control plane nodes eliminate single points of failure in production environments

  • Comprehensive visibility across control plane components enables proactive identification and remediation of security risks

What is the Kubernetes control plane?

The Kubernetes control plane is the cluster’s management layer that exposes the API, stores cluster state, and continuously reconciles desired configuration—scheduling, scaling, and replacing pods as needed—to keep applications healthy and consistent across nodes.

Kubernetes Security Best Practices [Cheat Sheet]

This six-page cheat sheet goes beyond the basics, covering security best practices for Kubernetes pods, components, and network security.

Core components of the Kubernetes control plane

The control plane isn't just one piece of software—it's made up of several components that work together. Each component has a specific job, and they all communicate with each other to keep your cluster running smoothly.

kube-apiserver

The API server is like the front desk of a hotel—everything goes through it. When you use kubectl commands or when other parts of Kubernetes need to do something, they all talk to the API server first. It checks if you're allowed to do what you're asking for, then either approves or rejects your request.

The API server also validates everything you send to make sure it makes sense. If you try to create a pod with invalid settings, the API server catches this before it causes problems in your cluster.
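
To see that authentication-then-authorization step in action, you can ask the API server directly whether your current identity may perform an action. A minimal sketch using the built-in SelfSubjectAccessReview API (the namespace here is hypothetical):

apiVersion: authorization.k8s.io/v1
kind: SelfSubjectAccessReview
spec:
  resourceAttributes:
    namespace: production   # hypothetical namespace
    verb: create
    resource: pods

Submitting this manifest (for example, kubectl create -f review.yaml -o yaml) returns a status.allowed field, exposing the same authorization decision the API server applies to every incoming request.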

etcd

etcd is your cluster's memory bank where all important information gets stored. It remembers every configuration you've made, all your secrets, and the current state of everything in your cluster. Think of it like a super-reliable filing cabinet that never loses anything.

etcd uses the Raft consensus algorithm to keep its replicated copies consistent. If one member fails or its data is damaged, the remaining members maintain quorum and keep serving the cluster without losing data.

kube-scheduler

The scheduler is like a smart assistant that decides where to place your applications. When you create a new pod, the scheduler looks at all your worker nodes and picks the best one based on available resources, special requirements, and other rules you've set up.

It considers factors like the following (illustrated in the pod spec sketch after this list):

  • How much CPU and memory each node has available

  • Whether your pod needs to run on specific types of nodes

  • If your pod should be close to certain data or away from other pods
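
Here is a minimal pod spec sketching how those constraints are expressed; the labels, image, and values are illustrative, not prescriptive:

apiVersion: v1
kind: Pod
metadata:
  name: web-app
  labels:
    app: web
spec:
  containers:
  - name: web
    image: nginx:1.27          # example image
    resources:
      requests:
        cpu: "500m"            # scheduler only picks nodes with this much free CPU
        memory: 256Mi
  nodeSelector:
    disktype: ssd              # restrict to nodes labeled disktype=ssd
  affinity:
    podAntiAffinity:           # spread away from other app=web pods
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: web
        topologyKey: kubernetes.io/hostname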

kube-controller-manager

The controller manager runs several smaller programs called controllers that each watch for specific problems and fix them. It's like having multiple specialized maintenance workers who each focus on different parts of your building.

These controllers include:

  • Node Controller: Watches for nodes that stop working and marks them as unhealthy

  • ReplicaSet Controller: Ensures the desired number of pod replicas are running at all times, typically managed through Deployments rather than directly (see the example after this list)

  • Endpoints Controller: Keeps track of which pods are available to receive traffic

  • Service Account Controller: Creates default accounts for new namespaces
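
As a concrete example, the ReplicaSet controller is what enforces the replicas field in the sketch below (names and image are illustrative). Delete one of the pods and the controller immediately creates a replacement:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                  # the ReplicaSet controller keeps exactly three pods running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.27      # example image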

cloud-controller-manager

When you run Kubernetes in the cloud, this component talks to your cloud provider's services. It handles cloud-specific tasks like creating load balancers, setting up storage volumes, and configuring network routes.

This separation keeps Kubernetes flexible—it can work with any cloud provider without needing to know the specific details of each one.
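
For example, a Service of type LoadBalancer is an ordinary Kubernetes object, but it is the cloud-controller-manager that calls the provider's API to provision the actual load balancer behind it (selector and ports here are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer           # cloud-controller-manager provisions a provider load balancer
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP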

Control plane vs data plane architecture

Kubernetes splits its work between two main areas: the control plane and the data plane. Understanding this split helps you see how Kubernetes organizes itself and where security matters most.

The control plane makes all the management decisions but doesn't run your actual applications. It's like the management office of a factory—it plans what should happen and gives instructions, but it doesn't operate the machines that make products.

The data plane consists of worker nodes that actually run your containers and applications. These nodes contain three main components:

  • Kubelet: Acts like a local manager on each node, making sure containers start and stay healthy

  • Container runtime: The software that actually runs your containers (such as containerd or CRI-O)

  • Kube-proxy: Implements Service routing on each node using iptables or IPVS rules (note: some modern CNIs like Cilium replace kube-proxy with eBPF-based data paths for better performance)

The control plane and data plane communicate through the API server. Worker nodes regularly check in to get new instructions and report back on what's happening. This design means existing workloads continue running if the control plane becomes temporarily unavailable, but new pods won't be scheduled and configuration changes won't apply until the control plane recovers.

Security implications of control plane components

Each part of the control plane creates different security risks that you need to understand and protect against. If attackers compromise any of these components, they can cause serious damage to your entire cluster.

API server vulnerabilities

The API server is your biggest security concern because it serves as the primary entry point to your cluster. If an attacker gains unauthorized access to the API server, they can control every resource in your Kubernetes environment—from reading secrets to deploying malicious workloads.

Common API server security problems include:

  • Anonymous access: Configurations that accept unauthenticated requests, giving anyone who can reach the endpoint a foothold

  • Unencrypted connections: Insecure configurations that let attackers intercept communications

  • Weak validation: Missing or misconfigured admission controllers that let risky requests through unchecked

  • Poor logging: Inadequate audit trails that make it hard to detect or investigate attacks

etcd security concerns

etcd stores everything important about your cluster, including all your secrets and passwords. If someone compromises etcd, they essentially own your entire Kubernetes environment and can access any data or system within it.

Key etcd security risks include:

  • Unencrypted storage: Sensitive data stored in plain text that anyone with file access can read

  • Insecure communication: Missing encryption between etcd servers that allows eavesdropping

  • Weak access controls: Insufficient restrictions on who can read or modify the database

  • No backups: Lack of recovery options if data gets corrupted or deleted by attackers

Scheduler and controller manager risks

While these components don't directly face the internet, compromising them still creates serious security problems. Attackers who gain access can manipulate how your cluster behaves in subtle but dangerous ways.

Potential attack scenarios include:

  • Malicious scheduling: Forcing pods to run on specific nodes to help with further attacks

  • Resource exhaustion: Creating too many replicas to overwhelm your cluster

  • Traffic redirection: Changing service endpoints to send data to attacker-controlled systems

  • Privilege escalation: Creating powerful service accounts for persistent access

Identity and access management

The control plane manages all the permissions and access controls for your cluster. Weaknesses in this system can give attackers more access than they should have.

Common identity-related vulnerabilities include:

  • Over-privileged accounts: Service accounts with more permissions than they need

  • Exposed tokens: Authentication credentials accidentally included in logs or container images

  • Missing RBAC: Lack of proper role-based access controls

  • Weak pod policies: Insufficient restrictions on what containers can do

The challenge isn't just finding control plane vulnerabilities—it's understanding which ones create real attack paths. Context-aware analysis that correlates API server exposure, RBAC privileges, Secrets access, and network paths helps teams prioritize the few issues that create exploitable attack chains. For example, an overprivileged service account matters more when it runs in an internet-exposed pod with access to production secrets than when it runs in an isolated namespace with no external connectivity.

Free 1-on-1 Kubernetes Risk Assessment

Move fast with containerized apps—safely. Assess your Kubernetes security posture and close gaps across build-time and runtime.

Best practices for securing the Kubernetes control plane

Securing your control plane requires multiple layers of protection. You can't rely on just one security measure—you need a comprehensive approach that addresses all the potential attack vectors.

Enable and enforce RBAC

Role-Based Access Control (RBAC) lets you control exactly what each user and service account can do in your cluster. Set up specific roles for different types of users and avoid giving anyone more permissions than they actually need.

Create separate roles for developers, operators, and automated systems, tying permissions to service ownership so teams manage access for their own workloads. Regularly review these permissions to identify dormant or over-privileged roles—service accounts that haven't been used in 90+ days or have permissions they never exercise. Remove any privileges that are no longer needed to reduce standing access and blast radius. The NSA/CISA Kubernetes Hardening Guidance specifically identifies overprivileged RBAC configurations as a primary attack vector.
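
As a minimal sketch, the Role and RoleBinding below grant a hypothetical developers group only the pod-level access it needs in a single namespace (the namespace and group names are assumptions):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-manager
  namespace: team-a            # hypothetical namespace
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list", "watch", "create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developers-pod-manager
  namespace: team-a
subjects:
- kind: Group
  name: developers             # hypothetical group from your identity provider
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-manager
  apiGroup: rbac.authorization.k8s.io

Because this is a Role rather than a ClusterRole, the binding cannot grant anything outside team-a, which keeps the blast radius of a leaked credential small.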

Secure etcd communications and storage

Protect your cluster's database by encrypting all data at rest and in transit. Set up mutual TLS between all etcd servers so they can verify each other's identity before sharing information.

Restrict network access to etcd using firewalls or network policies. Only the API server should be able to talk to etcd directly—no other components or users should have direct access.
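
On kubeadm-built clusters, these protections show up as flags on the etcd static pod. A sketch of the relevant flags, assuming kubeadm's default certificate paths (adjust for your environment):

apiVersion: v1
kind: Pod
metadata:
  name: etcd
  namespace: kube-system
spec:
  containers:
  - name: etcd
    image: registry.k8s.io/etcd:3.5.12-0                 # example version
    command:
    - etcd
    - --client-cert-auth=true                            # require client certificates
    - --cert-file=/etc/kubernetes/pki/etcd/server.crt    # TLS for client connections
    - --key-file=/etc/kubernetes/pki/etcd/server.key
    - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    - --peer-client-cert-auth=true                       # mutual TLS between etcd members
    - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
    - --peer-key-file=/etc/kubernetes/pki/etcd/peer.key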

Harden the API server

Strengthen your API server by disabling anonymous authentication and requiring proper credentials for all requests. Enable comprehensive audit logging so you can track who did what and when.
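
Audit logging is driven by a policy file passed to the API server via --audit-policy-file. A minimal sketch that records full request and response bodies for Secrets and ConfigMaps while logging only metadata for everything else:

apiVersion: audit.k8s.io/v1
kind: Policy
omitStages:
- RequestReceived              # skip the duplicate event written before processing
rules:
- level: RequestResponse       # capture bodies for sensitive resources
  resources:
  - group: ""
    resources: ["secrets", "configmaps"]
- level: Metadata              # who did what, when, for everything else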

Configure admission controllers to validate all requests before they're processed. Prevent configuration drift by enforcing policies in CI pipelines (scanning IaC templates before merge) and blocking non-compliant manifests at admission time (using ValidatingAdmissionWebhook with policy engines like OPA Gatekeeper or Kyverno). This shift-left approach catches misconfigurations during development instead of after deployment. Set up rate limiting with --max-requests-inflight and --max-mutating-requests-inflight flags to prevent API server overload.
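
If you use Kyverno as the policy engine, a validating policy looks roughly like this sketch, which rejects privileged containers cluster-wide (policy and rule names are illustrative):

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-privileged
spec:
  validationFailureAction: Enforce   # block, rather than merely warn
  rules:
  - name: no-privileged-containers
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "Privileged containers are not allowed."
      pattern:
        spec:
          containers:
          - =(securityContext):      # if securityContext is present...
              =(privileged): "false" # ...privileged must be false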

Control plane hardening checklist

Implement these specific configurations to harden your control plane:

API Server flags:

  • --anonymous-auth=false – Disable anonymous authentication

  • --authorization-mode=RBAC,Node – Enable RBAC and Node authorization

  • --audit-log-path=/var/log/audit.log – Enable comprehensive audit logging

  • --audit-log-maxage=30 – Retain audit logs for 30 days minimum

  • --enable-admission-plugins=NodeRestriction,PodSecurity – Enforce admission controls

  • --encryption-provider-config=/etc/kubernetes/encryption-config.yaml – Enable Secrets encryption at rest (example config after this list)

  • --tls-cert-file and --tls-private-key-file – Enforce TLS for all connections
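
The --encryption-provider-config flag above points at an EncryptionConfiguration file. A minimal sketch using a KMS v2 plugin with a local aescbc key as fallback (the plugin name and socket path are hypothetical, and the aescbc secret is a placeholder):

apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
  - secrets
  providers:
  - kms:
      apiVersion: v2
      name: my-kms-provider                      # hypothetical KMS plugin name
      endpoint: unix:///var/run/kms-plugin.sock  # hypothetical plugin socket
      timeout: 3s
  - aescbc:
      keys:
      - name: key1
        secret: <base64-encoded-32-byte-key>     # placeholder; never commit real keys
  - identity: {}                                 # allows reading data written before encryption was enabled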

Controller Manager and Scheduler:

  • --bind-address=127.0.0.1 – Bind to localhost only (not exposed externally)

  • --use-service-account-credentials=true – Use individual service accounts per controller

Network and access controls:

  • Configure API server CIDR allowlists to restrict source IPs

  • Enable private endpoints (EKS private endpoint mode, GKE private clusters, AKS private link)

  • Implement Pod Security Admission at baseline or restricted level (namespace labels shown after this list)

  • Deploy admission webhooks for policy enforcement (OPA Gatekeeper, Kyverno)

  • Require signed container images with admission controller validation
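
Pod Security Admission from the list above is enabled per namespace through labels. A sketch that enforces the restricted profile on a hypothetical production namespace, with audit and warn set to surface violations:

apiVersion: v1
kind: Namespace
metadata:
  name: production                                    # hypothetical namespace
  labels:
    pod-security.kubernetes.io/enforce: restricted    # reject non-compliant pods
    pod-security.kubernetes.io/audit: restricted      # record violations in audit logs
    pod-security.kubernetes.io/warn: restricted       # warn clients at admission time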

Network segmentation and policies

Isolate your control plane components on separate network segments to limit their exposure. Use network policies to control which pods can communicate with each other and implement a "deny by default" approach. The CIS Kubernetes Benchmark recommends this posture to prevent lateral movement between compromised workloads.

Example: Default deny all ingress traffic

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Ingress

Example: Allow specific namespace-to-namespace communication

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: production
spec:
  podSelector:
    matchLabels:
      tier: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: frontend
    - podSelector:
        matchLabels:
          tier: web
    ports:
    - protocol: TCP
      port: 8080

Example: Restrict control plane node access

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-control-plane-access
  namespace: kube-system
spec:
  podSelector:
    matchLabels:
      component: kube-apiserver
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 10.0.0.0/8   # Internal network only
    ports:
    - protocol: TCP
      port: 6443

Enable mutual TLS authentication for all control plane component communications (API server to etcd, controller-manager to API server, scheduler to API server). Configure private endpoints for API server access to keep the control plane off the public internet—AWS EKS offers private endpoint mode, GKE provides private clusters, and AKS supports private link.

Regular updates and patch management

Keep your Kubernetes version current and apply security patches quickly after testing them. Monitor security advisories from the Kubernetes project to stay informed about new threats and vulnerabilities.

Implement automated scanning for known vulnerabilities in your container images and configurations. This helps you catch problems before they can be exploited.

Secrets management

Enable encryption at rest for Kubernetes Secrets using a KMS provider for envelope encryption (AWS KMS for EKS, Cloud KMS for GKE, Azure Key Vault for AKS). Rotate credentials regularly using automated tools. For enhanced security, consider external secret management systems like HashiCorp Vault or AWS Secrets Manager that integrate via the Secrets Store CSI driver.
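
With the Secrets Store CSI driver, the external vault is described by a SecretProviderClass. A rough sketch for AWS Secrets Manager (the secret name and namespace are hypothetical, and the AWS provider must be installed alongside the driver):

apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: db-credentials
  namespace: production                  # hypothetical namespace
spec:
  provider: aws                          # requires the AWS provider for the CSI driver
  parameters:
    objects: |
      - objectName: "prod/db-password"   # hypothetical secret in AWS Secrets Manager
        objectType: "secretsmanager"

Pods then consume it through a csi volume whose volumeAttributes reference secretProviderClass: db-credentials, so the secret is mounted at runtime instead of being stored in the cluster.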

Always follow the principle of least privilege when granting access to secrets. Only give applications and users access to the specific secrets they actually need.

Compliance and framework alignment

Map your control plane security controls to established frameworks:

CIS Kubernetes Benchmark v1.8:

  • Section 1.2: API Server configuration (anonymous auth, authorization modes, admission plugins)

  • Section 1.3: Controller Manager configuration (service account credentials, bind address)

  • Section 1.4: Scheduler configuration (bind address, profiling)

  • Section 2.1: etcd configuration (client cert auth, peer cert auth, encryption)

NSA/CISA Kubernetes Hardening Guidance:

  • Scan containers and pods for vulnerabilities and misconfigurations

  • Run containers and pods with the least privileges possible

  • Use network separation and hardening to control traffic

  • Use firewalls to limit unneeded network connectivity

  • Use strong authentication and authorization (RBAC with least privilege)

SOC 2 and ISO 27001 controls:

  • Access control (RBAC, authentication, authorization)

  • Encryption in transit and at rest (TLS, KMS-based Secrets encryption)

  • Audit logging and monitoring (API audit logs, admission decisions)

  • Change management (GitOps for cluster configuration, admission webhooks)

  • Incident response (runtime detection, forensic capabilities)

PCI DSS requirements (for payment processing workloads):

  • Requirement 2: Change vendor defaults, disable unnecessary services

  • Requirement 8: Identify and authenticate access (RBAC, service accounts)

  • Requirement 10: Track and monitor all access (audit logging)

  • Requirement 11: Regularly test security systems (vulnerability scanning, penetration testing)

High availability and scaling considerations

Production Kubernetes clusters need high availability to keep running even when individual components fail. Planning for this from the beginning helps you avoid outages and maintain reliable service.

Multi-master configurations

High availability control planes use multiple nodes to eliminate single points of failure. Deploy an odd number of control plane nodes (usually three or five) across different availability zones to maintain quorum even if some nodes go down.

Use load balancers to distribute API server traffic across all control plane nodes. Configure etcd as a cluster with multiple members so your data stays available even if individual etcd nodes fail.
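
With kubeadm, the shared entry point is declared at cluster initialization. A sketch assuming a hypothetical load balancer DNS name in front of three control plane nodes:

apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.30.0                          # example version
controlPlaneEndpoint: "k8s-api.example.com:6443"    # hypothetical load balancer address
etcd:
  local:                                            # stacked etcd: one member per control plane node
    dataDir: /var/lib/etcd

Additional control plane nodes then join with kubeadm join --control-plane, so each runs its own API server and etcd member behind the shared endpoint.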

Scaling strategies for large clusters

As your cluster grows, you may need to scale the control plane to handle the increased load. Consider separating etcd from other control plane components so you can scale them independently based on their specific resource needs.

Monitor control plane performance metrics to identify bottlenecks before they cause problems. Use dedicated nodes for resource-intensive controllers and set appropriate resource limits to ensure stable performance.

Backup and disaster recovery

Implement automated etcd backups with point-in-time recovery capabilities. Document and regularly test your restoration procedures to make sure they work when you need them.
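
One common pattern is a CronJob pinned to a control plane node that snapshots etcd on a schedule. A rough sketch, assuming kubeadm-style certificate paths and a hypothetical hostPath backup location:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: etcd-backup
  namespace: kube-system
spec:
  schedule: "0 */6 * * *"                    # every six hours
  jobTemplate:
    spec:
      template:
        spec:
          nodeSelector:
            node-role.kubernetes.io/control-plane: ""
          tolerations:
          - key: node-role.kubernetes.io/control-plane
            effect: NoSchedule
          hostNetwork: true                  # reach etcd on 127.0.0.1:2379
          restartPolicy: OnFailure
          containers:
          - name: backup
            image: registry.k8s.io/etcd:3.5.12-0   # ships with etcdctl
            command:
            - /bin/sh
            - -c
            - >-
              etcdctl --endpoints=https://127.0.0.1:2379
              --cacert=/etc/kubernetes/pki/etcd/ca.crt
              --cert=/etc/kubernetes/pki/etcd/server.crt
              --key=/etc/kubernetes/pki/etcd/server.key
              snapshot save /backup/etcd-$(date +%Y%m%d%H%M).db
            volumeMounts:
            - name: etcd-certs
              mountPath: /etc/kubernetes/pki/etcd
              readOnly: true
            - name: backup
              mountPath: /backup
          volumes:
          - name: etcd-certs
            hostPath:
              path: /etc/kubernetes/pki/etcd
          - name: backup
            hostPath:
              path: /var/backups/etcd        # hypothetical backup location

Pair the snapshots with periodic restore drills (etcdctl snapshot restore) so you know the backups actually work before you need them.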

Maintain your cluster configuration as infrastructure as code so you can quickly rebuild if necessary. Automated inventory and relationship mapping accelerate rebuilds by documenting all dependencies—which services connect to which databases, which secrets each workload requires, which network policies govern traffic flow. This context speeds up DR testing and validation, ensuring restored clusters match production topology. Consider cross-region replication for critical clusters that need maximum availability.

Performance optimization

Monitor key metrics like API server response times and etcd performance to spot problems early. Tune component settings based on your cluster size and workload patterns.

Use SSD storage for etcd to improve performance, and implement caching for frequently accessed data. These optimizations help your control plane handle larger workloads more efficiently.

Managed service considerations

Cloud provider managed Kubernetes services handle many availability concerns automatically, including control plane availability, scaling, and upgrades. They provide zonal or regional redundancy with SLA guarantees (typically 99.95% for regional configurations). However, customers remain responsible for backing up application data and cluster configuration (RBAC, network policies, custom resources).

You still own your side of the shared responsibility model, though. This includes configuring RBAC properly, setting up network policies, implementing application-level disaster recovery, and monitoring for security events.

Security responsibility matrix

| Security Control | Managed Kubernetes (EKS/GKE/AKS) | Self-Hosted Kubernetes |
| --- | --- | --- |
| Control plane availability | Provider manages HA, upgrades, patching | Customer configures multi-master, manages updates |
| API server hardening | Provider sets base flags; customer configures RBAC, admission | Customer configures all API server flags and policies |
| etcd security | Provider manages encryption, backups, access | Customer configures encryption, backups, network isolation |
| Network policies | Customer configures all pod-to-pod rules | Customer configures all pod-to-pod rules |
| RBAC configuration | Customer configures all roles and bindings | Customer configures all roles and bindings |
| Secrets encryption | Customer enables KMS provider integration | Customer configures encryption provider and key management |
| Audit logging | Customer enables and ships logs to SIEM | Customer configures audit policy and log retention |
| Node security | Customer hardens node OS and container runtime | Customer hardens node OS and container runtime |
| Compliance validation | Shared: provider certifies infrastructure, customer validates workloads | Customer validates entire stack |

How Wiz provides comprehensive Kubernetes security visibility

Wiz delivers agentless, code-to-cloud visibility into your Kubernetes control plane with no workload impact. Here’s how it helps you find and fix risks faster:

  • Agentless posture assessment: Continuously evaluates API server exposure and configuration, RBAC permissions, admission controller policies, and network settings via cloud provider APIs and Kubernetes APIs.

  • Security Graph attack path mapping: Connects resources, identities, network exposure, and data access to reveal real attack paths and blast radius. For example, it highlights when an internet-exposed pod with an overprivileged service account can read Secrets with database credentials—prioritizing remediation on the highest-impact risks.

  • Automatic detection and correlation: Flags exposed APIs, insecure etcd configurations, and overprivileged service accounts, then correlates them with factors like network exposure and leaked secrets to identify the most dangerous combinations.

  • Real-time monitoring with Wiz Defend: Analyzes cloud audit logs (CloudTrail, Cloud Logging, Azure Activity Logs) and runtime telemetry to surface suspicious control plane activity—such as bulk secret reads, unauthorized namespace creation, role binding modifications, and failed authentication from unknown IPs.

  • Shift-left with Wiz Code: Scans Kubernetes manifests and infrastructure-as-code before deployment to prevent control plane misconfigurations from reaching production.

  • Unified visibility across managed services: Works seamlessly with EKS, GKE, and AKS to enforce consistent security policies across different Kubernetes distributions.

Ready to secure your Kubernetes control plane end to end? Get a demo to see how agentless, code-to-cloud visibility and graph-based attack path analysis protect your clusters—from API server exposure to runtime threats.