What is a Kubernetes node?
A Kubernetes node is a worker machine in your cluster that runs containerized applications. This means it's the actual computer (physical or virtual) that does the work of running your apps.
Each node contains the services necessary to run pods and is managed by the control plane components. Nodes can be either physical servers in on-premises data centers or virtual machines running in cloud environments like AWS EC2, Google Compute Engine, or Azure Virtual Machines.
Every node runs essential components including the kubelet (which ensures containers are running in pods), the container runtime (typically containerd or CRI-O as the Container Runtime Interface implementation), and kube-proxy (which maintains network rules, though eBPF-based CNI dataplanes like Cilium can replace it entirely). The node provides the actual compute resources—CPU, memory, storage, and networking—that pods consume when running applications.
What is a Kubernetes pod?
A pod is the smallest deployable unit in Kubernetes that you create or deploy. Rather than running containers directly, Kubernetes groups one or more containers into a pod, which serves as a wrapper that provides shared resources like storage volumes, network namespace, and specifications for how to run the containers.
Containers within the same pod share an IP address and port space, can communicate with each other using localhost, and have access to shared storage volumes. This tight coupling enables three standard multi-container patterns: sidecar (logging/monitoring agent alongside main app), ambassador (proxy that simplifies external service connections), and adapter (standardizes output format from heterogeneous containers). For example, a sidecar pattern might pair an nginx container with a Fluent Bit container that ships logs to Elasticsearch. However, most pods contain just a single container, following the principle of one primary application per pod.
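To illustrate the sidecar pattern mentioned above, here is a minimal two-container pod sketch; the names, image tags, and shared-volume layout are illustrative assumptions, and a real pipeline would also need Fluent Bit configuration pointing at Elasticsearch:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-logging            # illustrative name
spec:
  containers:
  - name: nginx
    image: nginx:1.27               # assumed tag
    volumeMounts:
    - name: logs                    # shared volume so the sidecar can read nginx logs
      mountPath: /var/log/nginx
  - name: log-shipper
    image: fluent/fluent-bit:3.1    # assumed tag; would ship logs to Elasticsearch
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
      readOnly: true
  volumes:
  - name: logs
    emptyDir: {}                    # pod-scoped scratch volume shared by both containers
```

Both containers share the pod's network namespace and the logs volume, which is what makes the sidecar pattern work without changing the main application.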
Kubernetes nodes vs pods: Core differences
The fundamental distinction between nodes and pods lies in their role within the Kubernetes hierarchy. Nodes are infrastructure-level components that provide the physical or virtual resources, while pods are application-level abstractions that consume those resources.
| Aspect | Node | Pod |
|---|---|---|
| Definition | Physical or virtual machine in the cluster | Group of one or more containers with shared resources |
| Resource role | Provides CPU, memory, storage, and networking | Consumes resources from the node it runs on |
| Lifecycle | Managed lifespan; replaced by autoscaling or upgrades | Ephemeral; created, destroyed, and recreated as needed |
| Scheduling | Provisioned by infrastructure or the cluster autoscaler | Scheduled onto nodes by the Kubernetes scheduler |
| Networking | Has its own IP address on the network | Shares one IP address among all containers in the pod |
| Failure handling | Node failure affects all pods running on it | Pod failure affects only that specific application instance |
Quick diagnostic commands:
```bash
# View all nodes with IP addresses and roles
kubectl get nodes -o wide

# Inspect node details, conditions, and capacity
kubectl describe node <node-name>

# List all pods across namespaces with node placement
kubectl get pods -A -o wide

# Examine pod events and configuration
kubectl describe pod <pod-name> -n <namespace>

# Check real-time resource usage
kubectl top nodes
kubectl top pods -A
```
These commands help you quickly assess node health, pod placement, and resource consumption across your cluster.
A node pool (or node group in EKS) represents a group of nodes sharing the same instance type, availability zone, and configuration—enabling different workload requirements like GPU nodes for ML or spot instances for batch jobs.
Node pools interact with autoscaling at two levels:
Horizontal Pod Autoscaler (HPA) scales pod replicas based on CPU/memory metrics
Cluster Autoscaler adds nodes to a pool when pods are Pending due to insufficient resources, and removes underutilized nodes after a scale-down delay
For example, if HPA scales a Deployment from 5 to 20 replicas but only 10 fit on existing nodes, the cluster autoscaler provisions additional nodes from the appropriate pool to accommodate the remaining 10 pods.
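To make the interaction concrete, here is a minimal HorizontalPodAutoscaler sketch for that scenario; the Deployment name and CPU threshold are assumptions for illustration:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa                    # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                      # assumed Deployment name
  minReplicas: 5
  maxReplicas: 20
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70     # scale out when average CPU exceeds 70% of requests
```

If the new replicas exceed existing node capacity, they sit in Pending until the cluster autoscaler adds nodes from the matching pool.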
How nodes and pods work together in Kubernetes architecture
The Kubernetes cluster orchestrates the relationship between nodes and pods through the control plane. When you deploy an application, the Kubernetes scheduler evaluates nodes based on pod resource requests, node affinity/anti-affinity rules, pod topology spread constraints, taints and tolerations, and custom scheduler policies—then binds each pod to a single node that satisfies all requirements. The kubelet on each node then takes responsibility for ensuring the assigned pods are running and healthy.
This architecture enables key Kubernetes features:
Automatic pod placement: The scheduler intelligently distributes pods across nodes based on resource availability and constraints
Self-healing: If a pod fails, Kubernetes automatically creates a replacement; if a node fails, pods are rescheduled to healthy nodes
Scaling: Horizontal pod autoscaling creates or removes pod replicas based on metrics, while cluster autoscaling adds or removes nodes based on pod resource demands
Load distribution: Multiple pod replicas can be spread across different nodes for high availability
The relationship between nodes and pods extends to Kubernetes controllers, which manage pod lifecycle:
Deployment: Manages stateless pod replicas with rolling updates (web apps, APIs)
StatefulSet: Manages stateful pods with stable network identities and persistent storage (databases, message queues)
DaemonSet: Ensures one pod per node for cluster-wide services (log collectors, monitoring agents)
Job/CronJob: Runs pods to completion for batch processing or scheduled tasks
Deployments manage the desired state of pod replicas across nodes, creating new pods before terminating old ones during updates.
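As a hedged illustration, here is a minimal Deployment manifest with an explicit rolling-update strategy; the name, labels, and image are assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api                        # illustrative name
spec:
  replicas: 3                      # desired pod count maintained across nodes
  selector:
    matchLabels:
      app: api
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1                  # create one extra pod before removing an old one
      maxUnavailable: 0            # never drop below the desired replica count
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: registry.example.com/api:1.2.3   # assumed image
        ports:
        - containerPort: 8080
```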
Security implications of nodes and pods
Security considerations differ significantly between nodes and pods due to their distinct roles in the cluster. Organizations typically map these controls to compliance frameworks:
CIS Kubernetes Benchmark provides 100+ controls spanning node hardening (4.1: restrict kubelet permissions) and pod security (5.2: minimize privileged containers)
NIST 800-190 addresses container security across image lifecycle, runtime, and orchestration
ISO 27001 and SOC 2 require documented access controls and audit logging at both infrastructure and application layers
Policy-as-code tools like OPA Gatekeeper, Kyverno, or Wiz Policy Engine enforce these standards automatically—blocking non-compliant pod deployments and flagging misconfigured nodes before they reach production.
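As one hedged example of policy-as-code enforcement, a Kyverno ClusterPolicy along these lines blocks privileged containers at admission time; the policy name and message are illustrative:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-privileged-containers   # illustrative name
spec:
  validationFailureAction: Enforce        # reject non-compliant pods instead of only auditing
  rules:
  - name: deny-privileged
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "Privileged containers are not allowed."
      pattern:
        spec:
          containers:
          - =(securityContext):
              =(privileged): "false"      # if securityContext.privileged is set, it must be false
```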
Node security focuses on protecting the underlying infrastructure, including the operating system, kubelet, and container runtime. This involves hardening the host OS, managing SSH access, keeping system packages updated, and implementing network segmentation at the infrastructure level. Agentless, unified visibility across nodes, pods, and identities complements these controls by enabling least-privilege enforcement, detecting container escape attempts, and preventing lateral movement through correlated infrastructure and application context.
Pod security operates at the application layer and includes several key components:
Security contexts: Define privilege and access control settings for pods and containers, such as running as a non-root user, dropping Linux capabilities, and enforcing a read-only root filesystem (see the sketch after this list)
Pod security standards/admission: Enforce Pod Security Standards (PSS) using Pod Security Admission (PSA) controllers—PodSecurityPolicy was removed in Kubernetes v1.25 and replaced by the built-in PSA admission plugin with Restricted, Baseline, and Privileged policy levels
Network policies: Control traffic flow between pods at the application level
Service accounts: Manage pod identities and RBAC permissions within the cluster; implement least privilege by scoping roles narrowly and set automountServiceAccountToken: false for pods that don't need API access
Secrets management: Use Kubernetes Secrets with etcd encryption at rest enabled; integrate external secret stores like HashiCorp Vault, AWS Secrets Manager, or Azure Key Vault via the Secrets Store CSI Driver for automatic rotation and reduced cluster exposure
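Pulling several of these items together, here is a minimal least-privilege pod sketch aligned with the Restricted Pod Security Standard; the names and image are assumptions, not a prescribed baseline:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restricted-app                    # illustrative name
spec:
  automountServiceAccountToken: false     # pod does not need Kubernetes API access
  securityContext:
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: app
    image: registry.example.com/app:1.0   # assumed image
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]                     # drop all Linux capabilities
```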
Container escape vulnerabilities represent a critical risk where an attacker breaking out of a container could compromise the entire node and potentially enable lateral movement to other pods. This highlights why defense-in-depth strategies must address both node-level and pod-level security controls.
Resource management and allocation between nodes and pods
Resource management in Kubernetes operates through a hierarchical system where nodes provide resources and pods consume them through requests and limits. Each node reports allocatable resources (CPU, memory, and storage), which is the capacity left over for pods after the kubelet sets aside resources for system daemons and Kubernetes components.
Pod resource behavior is governed by three related concepts:
Resource requests: The minimum amount of CPU and memory guaranteed to the pod
Resource limits: The maximum amount of resources the pod can consume
Quality of Service (QoS) classes: Kubernetes assigns Guaranteed, Burstable, or BestEffort QoS based on requests and limits
The scheduler uses resource requests to determine node placement, ensuring nodes have sufficient available capacity. If a pod exceeds its memory limit, the kernel OOMKills the container immediately; CPU limits throttle usage through CFS quotas without terminating the pod. Node pressure conditions occur when nodes run low on resources, triggering pod evictions based on QoS class and resource usage.
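As a short illustration, requests and limits are set per container; the values below are placeholders that should come from observed usage:

```yaml
resources:
  requests:
    cpu: "250m"        # guaranteed share; used by the scheduler for placement
    memory: "256Mi"
  limits:
    cpu: "500m"        # CPU above this is throttled via CFS quota
    memory: "512Mi"    # exceeding this gets the container OOMKilled
```

Because requests and limits differ here, the container lands in the Burstable QoS class; setting them equal would make it Guaranteed.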
Effective resource management starts with monitoring node capacity and utilization across the cluster to identify bottlenecks. Context-aware risk and capacity insights help right-size pod requests and limits based on actual application needs, preventing OOMKilled containers and CPU throttling before they degrade application performance or trigger cascading failures in production. For scale context, Kubernetes officially supports clusters with up to 5,000 nodes and 150,000 total pods, according to the scalability thresholds documented by the Kubernetes project. Placement controls matter too: node affinity and anti-affinity steer pods based on node labels, while taints and tolerations prevent or allow pods on specific nodes. For example, to dedicate GPU nodes to ML workloads:
```bash
# Taint GPU nodes so untolerated workloads are repelled
kubectl taint nodes gpu-node-1 workload=ml:NoSchedule
```

```yaml
# Pod tolerates the taint
tolerations:
- key: workload
  operator: Equal
  value: ml
  effect: NoSchedule

# Require a GPU node via affinity
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: accelerator
          operator: In
          values:
          - gpu
```
This ensures only ML pods with the matching toleration can schedule on expensive GPU nodes.
Monitoring and troubleshooting nodes vs pods
Troubleshooting Kubernetes issues requires different approaches for nodes versus pods. Code-to-cloud traceability shortens mean time to resolution (MTTR) by linking failing pods to their source manifests, container images, and responsible teams—then correlating runtime detections with actual exposure paths through the security graph. Node-level problems typically manifest as systemic issues affecting multiple pods, while pod-level problems are often application-specific or related to container configuration.
For node troubleshooting, focus on these areas:
Node status and conditions: Check Ready status and condition types including MemoryPressure, DiskPressure, PIDPressure, and NetworkUnavailable—each signals specific resource exhaustion or connectivity issues
System resources: Monitor CPU, memory, disk usage, and network connectivity at the OS level
Kubelet logs: Review kubelet logs for errors related to pod lifecycle management or communication with the API server
Container runtime: Verify the container runtime is functioning and can pull images
Pod troubleshooting requires diagnosing specific failure modes:
Pending pods:
Check kubectl describe pod <pod-name> events for "Insufficient CPU/memory" → scale nodes or reduce requests
Look for "0/3 nodes available" messages citing untolerated taints → add tolerations or remove the taints
Verify ImagePullBackOff errors → check the image name, registry credentials, and network access
CrashLoopBackOff:
Review container logs: kubectl logs <pod> --previous
Check liveness/readiness probe configuration—probes may be too aggressive
Verify application dependencies (database connections, config files)
NodeNotReady:
SSH to the node and check systemctl status kubelet
Verify the CNI plugin is healthy: kubectl get pods -n kube-system | grep cni
Check node conditions: kubectl describe node <node-name> for disk, memory, or PID pressure
This systematic approach isolates whether issues stem from scheduling constraints, application errors, or infrastructure failures.
Common troubleshooting commands help diagnose issues at both levels, with kubectl describe providing detailed information about resource status, events, and conditions for both nodes and pods.
How Wiz secures Kubernetes nodes and pods
Wiz provides agentless visibility across your entire Kubernetes infrastructure, scanning both nodes and pods without performance impact or deployment complexity. The platform creates a comprehensive inventory of all Kubernetes resources, from the underlying node infrastructure to individual containers running within pods, enabling security teams to understand their complete attack surface.
The Wiz Security Graph maps relationships between nodes, pods, containers, identities, sensitive data, and cloud services to reveal real attack paths across infrastructure layers—then prioritizes remediation based on exploitability (active exploits, weaponized CVEs), exposure (internet-facing, excessive permissions), and blast radius (access to crown jewel data). This contextual understanding shows how a compromised pod could escalate to node-level access or how a misconfigured node could expose multiple pods to risk. By correlating vulnerabilities, misconfigurations, network exposure, and identity permissions across both nodes and pods, Wiz identifies toxic combinations that create real security risks.
A lightweight eBPF Runtime Sensor adds real-time detections across nodes and pods—identifying container escapes, privilege escalation attempts, crypto-mining processes, reverse shells, and suspicious outbound connections—with precise, process-level response context and zero agent overhead. The sensor monitors both node-level system calls and pod-level application behavior to detect anomalous activities, unauthorized access attempts, and potential breaches as they occur.
Attack path analysis extends this view beyond the cluster, showing how a compromised pod can escalate to node-level access and then to broader cloud resources through toxic risk combinations, so remediation is prioritized by actual exploitability rather than theoretical severity scores.
Wiz Code shifts security left—scanning IaC templates, Helm charts, and container images in CI pipelines—then blocks risky deployments using the same policy engine that governs runtime. This unified approach prevents policy drift and ensures the pod security standards enforced in production are validated before code merges, so issues are fixed before they ever reach production nodes and pods.
Comprehensive workload protection spans from container image vulnerabilities in pods to OS-level risks on worker nodes with unified risk prioritization. This unified approach eliminates the need for separate tools for node and pod security while providing consistent policies across your entire Kubernetes estate.
Ready to see agentless, code-to-cloud visibility across nodes and pods—with risk-prioritized, graph-based insights that show real attack paths instead of endless vulnerability lists? Get a demo and see how Wiz turns Kubernetes complexity into action.