Kubernetes and Docker explained: Container orchestration basics

Wiz Experts Team

What is Docker and how does it work?

Docker standardizes software packaging through containers: A Docker image bundles your application code with the runtime and all necessary system libraries and dependencies. Run the image on any host with the Docker Engine, and you get a portable, reproducible container.

Docker uses a client-server architecture under the hood:

  • The Docker Engine (daemon) manages containers, images, networks, and volumes through a REST API. Internally, it uses containerd as its high-level runtime and runc as its low-level OCI runtime to execute containers. This layered architecture—Docker Engine → containerd → runc—provides a consistent interface for tools, orchestrators, and automation systems while delegating actual container execution to OCI-compliant components.

  • The Docker CLI is a command-line client that communicates with the daemon over the API, allowing you to build images, run containers, and inspect resources.

  • A Docker Registry is a repository for storing and distributing images. Docker Hub serves as the main public registry, but many organizations rely on their own private registries for better control and security.

For multi-container applications, Docker provides Docker Compose. Compose lets you describe your application’s services, networking, and volumes in a YAML file and then bring everything up with one command. It’s excellent for local development and testing, where simplicity and fast iteration are priorities, while production environments typically benefit from orchestration platforms that add autoscaling and fault-tolerance capabilities.
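
For example, a minimal compose.yaml along these lines (service names and images are illustrative) defines a web app and a Redis cache that start together:

services:
  web:
    build: .                 # build the app image from the local Dockerfile
    ports:
      - "8080:8080"          # publish the app on the host
    depends_on:
      - cache                # start the cache before the web service
  cache:
    image: redis:7-alpine
    volumes:
      - cache-data:/data     # persist cache data across restarts

volumes:
  cache-data:

A single docker compose up then builds and starts both services on a shared default network.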

For production orchestration across multiple hosts, Kubernetes and other platforms extend container workflows with scheduling, scaling, and resilience features. Many teams begin with a simple Docker Compose vs. Kubernetes comparison when deciding how to move from local setups into production-ready orchestration.


What is Kubernetes?

As containerized workloads scale from one machine to many, Kubernetes coordinates their deployment, scaling, and operations across a cluster. In a Kubernetes cluster, the control plane watches the desired state you declare in YAML manifests and continually works to make the actual state match it.

For engineers asking how Docker-built images run at scale, Kubernetes is the orchestration platform that schedules containers across multiple nodes, manages their lifecycle, and handles service discovery. It takes OCI-compliant images (built by Docker, Buildah, or other tools) and runs them as pods across a cluster of machines.

Key Kubernetes capabilities include:

  • Scheduling and scaling: Kubernetes handles scheduling by assigning pods (the basic units that run one or more containers) to worker nodes based on available resources. It can also scale applications automatically by increasing or decreasing the number of pod replicas in response to real-time metrics.

  • Service discovery and load balancing: Kubernetes Services (ClusterIP, NodePort, and LoadBalancer) provide stable network identities and abstract access to pods; see the Service sketch after this list.

  • Self-healing: Controllers monitor pod health and automatically re-create failed containers or reschedule them on healthy nodes, ensuring high availability.

  • Declarative configuration: In Kubernetes, you describe how your applications and resources should look, and the system constantly checks and updates the cluster so that the real state matches what you declared. This model naturally supports GitOps workflows.
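
To make the service discovery point concrete, here is a minimal Service manifest (names and ports are illustrative) that gives a set of pods a stable virtual IP and DNS name:

apiVersion: v1
kind: Service
metadata:
  name: secure-app           # resolvable in-cluster as secure-app.<namespace>.svc
spec:
  type: ClusterIP
  selector:
    app: secure-app          # route traffic to pods carrying this label
  ports:
    - port: 80               # port the Service exposes
      targetPort: 8080       # port the container listens on

Requests to the Service are load-balanced across all ready pods that match the selector, so individual pod restarts don't change the address clients use.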

Why does container orchestration matter?

Orchestration is crucial: Modern applications are distributed systems built from microservices. Without a platform coordinating resource allocation, discovery, scaling, and recovery, teams end up hand-crafting fragile plumbing. 

Orchestration also improves security and visibility by standardizing APIs, providing central policy enforcement, and attaching metadata about the relationships among workloads, identities, and network exposure. This metadata lets you correlate vulnerabilities and misconfigurations into real attack paths.

Docker vs. Kubernetes: Understanding the key differences

Docker is designed for building and running containers on a single machine, while Kubernetes coordinates those containers across multiple machines. Here’s a closer look at their key differences:

Aspect | Docker | Kubernetes
Primary function | Provides container runtime, image building, and lifecycle management | Orchestrates containers across clusters with scheduling, autoscaling, rollbacks, and self-healing
Complexity | Simple installation on a single host; basic resource constraints via docker run flags | Comprises multiple control-plane components, networking plugins, and declarative objects
Scalability | Manual scaling through Docker CLI or Swarm; suited to small deployments | Built-in autoscalers for pods and clusters; ideal for multi-node, multi-cloud, and large-scale deployments
Networking | Basic container networking drivers | Uses CNI plugins and DNS-based service discovery for robust networking

These tools complement rather than replace each other. In most cases, you’ll create images in Docker or another OCI-compliant builder, and then Kubernetes will use a container runtime to run those images in pods. 
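
As a minimal illustration (the image reference is hypothetical), the manifest below is all Kubernetes needs to pull an OCI image and run it in a pod:

apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
    - name: app
      image: myregistry/demo:1.0   # any OCI-compliant image, regardless of the tool that built it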

When teams compare Kubernetes vs. Docker vs. OpenShift, it’s helpful to remember that Docker typically focuses on building images, Kubernetes focuses on orchestration, and OpenShift builds extra tooling on top of Kubernetes.

Kubernetes is the most widely adopted container orchestrator, and it coexists with other tools that offer different orchestration models and trade-offs. Tools like Docker Swarm offer alternative scheduling models and abstractions.

Security implications of Docker and Kubernetes in production

Docker risks and best practices

Docker’s ease of use makes it straightforward for teams to adopt security best practices, such as scanning for vulnerabilities and choosing minimal base images, early in the development lifecycle. As with any software supply chain, using verified and well-maintained images is important: Public registries offer convenience and breadth, but teams should complement them with image verification and provenance controls to ensure trusted builds.

To mitigate risks:

  • Use minimal base images so your production containers have only what they need and nothing more. Removing unnecessary packages dramatically reduces your attack surface.

  • Use Docker multi-stage builds to keep build-time tools out of your production images. 

  • Leverage the USER directive in your Dockerfile to switch to a non-root user. 

  • Sign images with Docker Content Trust (Notary) or Sigstore Cosign to verify image provenance. Enforce signature policies at deployment time using Kubernetes admission controllers (like Kyverno or OPA Gatekeeper) that reject unsigned or unverified images, ensuring only approved images run in production clusters; see the policy sketch after this list.

  • Integrate vulnerability scanning into your pipeline to catch known issues in images and dependencies before they reach production.

  • Avoid running containers in privileged mode and drop Linux capabilities you don’t need.
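
As a sketch of such admission control, the Kyverno ClusterPolicy below (registry pattern and public key are placeholders) rejects pods whose images lack a valid Cosign signature:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-signed-images
spec:
  validationFailureAction: Enforce    # block non-compliant pods instead of only auditing
  rules:
    - name: verify-cosign-signature
      match:
        any:
          - resources:
              kinds:
                - Pod
      verifyImages:
        - imageReferences:
            - "myregistry/*"          # placeholder: restrict to your registry
          attestors:
            - entries:
                - keys:
                    publicKeys: |-
                      -----BEGIN PUBLIC KEY-----
                      <your Cosign public key>
                      -----END PUBLIC KEY-----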

Kubernetes risks and best practices

Kubernetes’ distributed architecture enables scalability and resilience, but its many moving parts require thoughtful configuration and policy enforcement to maintain a strong security posture. Key risks include:

  • Misconfigured RBAC: Granting broad privileges to users or service accounts can lead to unauthorized actions or lateral movement.

  • Supply chain attacks: Unverified images and compromised dependencies can inject malicious code into workloads. 

  • Exposed API server: The API server is the heart of the control plane; leaving it reachable from the internet or weakly authenticated gives attackers a direct path to cluster control. Protect it with strong authentication, authorization, and rate limiting.

  • Insecure workload configurations: Pods running with excessive privileges (like containers running as root, pods with unrestricted capabilities, or pods with a writable file system) are easier to compromise. Addressing these configurations early helps reduce the risk of container escapes and strengthens overall workload isolation.

Follow these best practices to shrink your attack surface:

  • Store Kubernetes manifests and infrastructure code in version control: Use pull requests and pre-commit hooks to catch misconfigurations and hard-coded secrets and enforce separation of duties.

  • Implement health checks using three probe types: Leverage readiness probes (gate traffic until the pod is ready), liveness probes (restart unhealthy pods), and startup probes (allow slow-starting applications to initialize without premature restarts). For example, a Java application might need a 60-second startup probe window to complete initialization before liveness checks begin; see the probe sketch after this list.

  • Collect and analyze logs: Enable Kubernetes audit logging, and regularly review logs for suspicious API calls.

  • Keep workloads lean and isolated: Minimize container images, and separate environments into namespaces with resource quotas and access controls.
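
As a sketch, the container snippet below (paths and port are illustrative) gives a slow-starting app up to 60 seconds to initialize before liveness checks take over:

    containers:
      - name: app
        image: myregistry/java-app:1.0
        startupProbe:
          httpGet:
            path: /healthz
            port: 8080
          periodSeconds: 5
          failureThreshold: 12     # 12 attempts x 5s = up to 60s to start
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
          periodSeconds: 10        # restart the container if this check fails
        readinessProbe:
          httpGet:
            path: /ready
            port: 8080
          periodSeconds: 5         # hold Service traffic until this check passes

Liveness and readiness probes only begin once the startup probe succeeds, which prevents premature restarts during initialization.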

Practical security examples

The examples in this section demonstrate how teams can enforce safe defaults in both Kubernetes- and Docker-based environments.

The Kubernetes example below shows how to minimize privileges inside a pod by enforcing a non-root user, dropping all Linux capabilities, disabling privilege escalation, and using a read-only root filesystem, while still defining sensible resource requests and limits.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: secure-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: secure-app
  template:
    metadata:
      labels:
        app: secure-app
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000
      containers:
        - name: app
          image: myregistry/secure-app:1.0
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            capabilities:
              drop: ["ALL"]
          resources:
            requests:
              cpu: "100m"
              memory: "128Mi"
            limits:
              cpu: "300m"
              memory: "256Mi"

The Dockerfile below uses a multi-stage build to keep the production image small, removes unnecessary build tooling from the final image, and drops privileges by switching to a dedicated non-root user.

# Build stage: compile the application with the full Go toolchain
FROM golang:1.25-alpine AS builder

WORKDIR /src
COPY . .
RUN go build -o app

# Production stage: minimal runtime image with a dedicated non-root user
FROM alpine:3.20
RUN adduser -D appuser
USER appuser

WORKDIR /app
COPY --from=builder /src/app .

# Run the binary as the non-root user; pair with a read-only root filesystem at runtime (e.g., docker run --read-only)
CMD ["./app"]

Common challenges and best practices for containerized environments

Complexity and learning curve

Kubernetes’ declarative model and rich set of components provide flexibility and power, but they come with a steep learning curve: Teams need to understand pods, controllers, networking, and RBAC before they can fully benefit from the platform.

Solution: Investing in training and using managed services can reduce the burden. 

Observability and monitoring

Containers are short-lived and dynamic. Without robust logging, metrics, and tracing, it’s hard to detect failures or performance issues. 

Solution: Use Prometheus, Grafana, and distributed tracing tools to gain end-to-end visibility into your workloads.

Resource management and cost control

Containers can scale rapidly across nodes. Without clear resource requests and limits, you risk noisy neighbors, CPU and memory contention, and oversized clusters that drive up costs.

Solution: Define resource requests and limits for each workload, enable autoscaling, right-size node pools, and monitor utilization to keep performance and spend in balance. 
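
For the autoscaling piece, a minimal HorizontalPodAutoscaler like the sketch below (targeting the secure-app Deployment shown earlier) scales replicas with CPU utilization:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: secure-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: secure-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas when average CPU passes 70% of requests

Because utilization is measured against each pod's CPU request, setting accurate requests is a prerequisite for sensible autoscaling.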

When to use Docker, Kubernetes, or both together

Whether you use Docker, Kubernetes, or both largely depends on the size and complexity of your workloads.

Use Docker alone for local development, CI integration, quick functional tests, and small single-host services or short-lived batch jobs. Docker Compose keeps multi-service dev environments simple.

Use Kubernetes for distributed systems that need high availability, resilience, automated scaling, zero-downtime rollouts, and strong multi-tenant isolation. Managed offerings (GKE/EKS/AKS) reduce operational overhead, but you’ll still need to design manifests, policies, and upgrade plans. 

In practice, most teams use both—building images with Docker, then running them in Kubernetes using containerd or CRI-O. This way, developers get fast build-run cycles, and operations gets robust orchestration and predictable operations.

How Wiz secures containerized workloads across Docker and Kubernetes

Securing modern containerized environments requires visibility across the entire lifecycle—from the first line of code to running workloads. Enter Wiz.

Wiz provides a complete code-to-cloud and cloud-to-code approach that covers both Docker images and Kubernetes clusters. Here are just some of Wiz’s industry-leading security features:

  • Agentless scanning for images and clusters: Wiz connects to container registries and cloud environments and scans Docker images and Kubernetes configurations for vulnerabilities, misconfigurations, and secrets—no agents required. For runtime threat detection, an optional lightweight eBPF-based sensor monitors system calls and network activity without impacting performance.

  • Security Graph correlation: Wiz’s Security Graph correlates findings across layers, linking vulnerabilities in a Docker image to misconfigurations in Kubernetes manifests and public cloud exposures. Armed with this graph-based context, teams can identify toxic combinations at a glance. 

  • Shift-left scanning with Wiz Code: Wiz Code integrates into CI/CD pipelines to analyze Dockerfiles and Kubernetes manifests before deployment, flagging insecure patterns—such as missing USER directives, the use of privileged containers, and exposed secrets—so that developers can remediate issues early. 

Figure 1: Wiz traces workloads in the cloud to source code repositories and development teams

  • WizOS hardened base images: Wiz provides a library of near-zero-CVE base images built with glibc. These images ship with signed releases, software bills of materials (SBOMs), and remediation SLAs—significantly reducing the base image patching burden (typically 80–90% fewer CVEs than standard base images) while maintaining compatibility with Docker and Kubernetes runtimes.

Ready to see firsthand how Wiz can keep your containers safe? Request a demo today.

