Kubernetes API limitations in finding non-standard pods and containers
Gain a deeper understanding of why it's essential to monitor non-standard pods and containers, including static pods, mirror pods, init containers, pause containers, and ephemeral containers within your Kubernetes environment.
The adoption rate of Kubernetes is a testament to the benefits it provides to CI/CD pipelines, such as faster build and deployment times. According to the Cloud Native Computing Foundation, 96% of organizations are either using or considering Kubernetes. The latest Wiz State of the Cloud report also highlights that major cloud providers’ revenue in Q3 2022 increased by 20%, suggesting that continuous growth in Kubernetes adoption is expected.
As pods and containers continue to gain popularity within organizations' ecosystems, it is crucial for security teams and developers to recognize that the Kubernetes API has certain limitations when it comes to monitoring and listing specific pod and container types. These limitations may arise because of design considerations, control plane configuration settings, or simply a lack of familiarity with relevant nuances. Moreover, it is essential to recognize that attackers may exploit these API limitations to gain unauthorized access, escalate privileges, and execute malicious activities within the containerized environment with greater chances of evading detection.
Standard pods can be monitored and tracked by querying the Kubernetes API for existing workload resources such as Pods and Deployments, or by directly querying pods via the kubectl get and kubectl describe commands. Standard containers can usually be identified in the spec.containers field when querying the Kubernetes API. But is that enough to maintain visibility into all pods and containers?
In this blog post, we will try to shed some light on different kinds of pods and containers in Kubernetes clusters, including static pods, mirror pods, init containers, pod infra (pause) containers, and ephemeral containers. We will argue that depending solely on the Kubernetes API to ascertain the components running within your K8s environment is not comprehensive enough, given that its design and capabilities prevent it from providing a full picture.
Finally, we will further investigate these non-standard pods and containers by examining how they are created and how to find them in your environments using the Kubernetes API where applicable. Understanding the purpose and usage of these specific pods and containers will enable you to identify anomalous activity, such as when attackers mimic these Kubernetes objects to blend into their environment.
Different kinds of pods
Pods are the logical units that encapsulate one or more containers in Kubernetes environments and are the smallest objects that can be directly managed by Kubernetes. In Kubernetes, the pod object can be defined directly or be embedded in broader deployment models like Deployments, DaemonSets, and CronJobs. Containers in pods are defined from a base image. Even though two pods may include the same containers with the same images, other attributes assigned to them create differences, such as system resources, mounted filesystems, and permission-related flags like privileged (in a container's securityContext) and the pod-level hostPID.
Since these pods are created and defined through the Kubernetes API, it is possible to query the API for them. This is often done via the kubectl get pods command, but each relevant Kubernetes object can also be queried with kubectl describe <object-type> <instance-name>, which provides information about the embedded pod.
While this holds true for standard pods that are managed by the Kubernetes API, there are pods that are not managed by the API and therefore are unfamiliar to it without some external help. Let’s examine such a case and determine how the pod could be reported via the API.
Static pods
Static pods are pods that are managed by the kubelet, the primary “node agent” that runs on each node in a cluster, rather than by the Kubernetes API like standard pods. Static pods are often used to bootstrap the Kubernetes control plane itself and its internal services; kubeadm, for example, runs the API server and other control plane components as static pods. Because they are managed by the kubelet, they cannot refer to other Kubernetes objects such as Secrets, ConfigMaps, and ServiceAccounts.
Creating static pods
To create a static pod, the kubelet must be configured at startup to accept static pod manifests. This can be achieved either by specifying the relevant fields in the kubelet config file or by invoking the kubelet with dedicated command-line arguments that designate the location of the desired static pod manifests: the --pod-manifest-path and --manifest-url command-line arguments point to a local path or a web-hosted location respectively, as do the staticPodPath and staticPodURL fields in the kubelet config file.
The location of the kubelet config file may vary and can be specified at launch using the --config command-line flag.
One of the easiest ways to identify the kubelet config location is to inspect the kubelet process command line (e.g. via ps -ef | grep kubelet) and look for the --config flag.
On a default Google Kubernetes Engine (GKE) setup, the kubelet config path is /home/kubernetes/kubelet-config.yaml, and it configures the kubelet to monitor the /etc/kubernetes/manifests path.
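As a sketch, dropping a manifest into the watched directory is all it takes to start a static pod. The manifest below is illustrative (pod name and image are placeholders); on a real node, writing into the watched directory requires root access.

```shell
# Illustrative static pod manifest. On a default GKE node you would place
# this file (as root) in the watched directory, /etc/kubernetes/manifests/.
# We write a local copy here for demonstration.
cat <<'EOF' > static-web.yaml
apiVersion: v1
kind: Pod
metadata:
  name: static-web
spec:
  containers:
  - name: web
    image: nginx:1.25
    ports:
    - containerPort: 80
EOF
# On the node: cp static-web.yaml /etc/kubernetes/manifests/
# No kubectl apply is needed; the kubelet picks the file up on its own.
```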
The kubelet automatically identifies any new manifest in the path and runs the static pod, and kubectl then shows the static pod as up and running.
If this pod is not managed by the Kubernetes API, how is it able to be listed?
Since the Kubernetes API does not generate static pods, it is unaware of their existence. The kubelet, however, can be configured to report them to the control plane using a dedicated object called a mirror pod.
Mirror pods are objects generated by the kubelet to represent static pods on the control plane. To do so, the kubelet must be configured to report static pods and authorized to create mirror pods on the control plane. This means that depending on your control plane setup, mirror pods might not be enabled and therefore may not be visible to Kubernetes administrators, developers, and monitoring products that rely only on the API.
Spotting static pods
Once your kubelet is configured to report static pods via mirror pods, you can find them by inspecting the ownerReferences attribute: a mirror pod's owner is of kind Node and carries the owning node's name.
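As an illustration, here is that filter run against a trimmed, hypothetical API response (pod and node names are made up); on a live cluster you would pipe kubectl get pods -A -o json into the same jq filter.

```shell
# Trimmed, hypothetical output of `kubectl get pods -o json`.
cat <<'EOF' > pods.json
{"items": [
  {"metadata": {"name": "kube-proxy-node1",
                "ownerReferences": [{"kind": "Node", "name": "node1"}]}},
  {"metadata": {"name": "web-7d4b9",
                "ownerReferences": [{"kind": "ReplicaSet", "name": "web"}]}}
]}
EOF
# Mirror pods are owned directly by a Node rather than by a controller.
jq -r '.items[]
       | select(.metadata.ownerReferences[]?.kind == "Node")
       | .metadata.name' pods.json
# → kube-proxy-node1
```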
Given static pods are managed by the kubelet, malicious actors that gain access to the static pod manifest path or URL could leverage them for Escape to Host attacks by creating powerful privileged static pods on the node. Attackers who gain access to the node can also abuse static pods for stealthy persistence as they may not be reported or deemed safe; for example, they can add containers to the spec of existing static pods like kube-proxy in GKE.
Different kinds of containers
Containers are instances of a specific container image that bundle layers of software, including all its requirements and dependencies needed to function. Containers’ main advantage is that they are portable from one computing environment to another as the container runtime is all that is needed to run them.
Spotting standard containers
In Kubernetes, a container must be included in a higher-level object such as a Pod or DaemonSet. To list existing containers in a K8s environment, you typically query these higher-level objects with the kubectl get and kubectl describe commands. These “standard” containers are listed under the containers key:
kubectl get pods -o json | jq -r '.items[].spec.containers[].name'
Now that we’ve covered how to examine the state of containers for existing pods, we will look at other container types. These containers can either be listed in other sections of the Kubernetes manifest in the spec key, listed outside of the spec key, or simply not listed at all; they include init, pod infra (pause), and ephemeral containers.
Init containers
Init containers are designed to run to completion before the main application containers start. They enable pod-level setup tasks for the main application containers in that pod. Init containers differ from standard containers in their available resources and their purpose, and since a pod can define multiple init containers, they also have group-like shared resource requests and limits.
Common use cases for init containers include downloading configuration files, preparing databases, and delaying the launch of application containers.
Considering that init containers oversee the bootstrapping and setup stages of standard containers, threat actors may use them to tamper with the setup (e.g. to gain persistence). Init containers should therefore be monitored as well, despite being frequently overlooked.
Creating init containers
Init containers are defined in a pod's specification via the spec.initContainers array field. When a pod starts, Kubernetes runs each init container to completion, in the order in which they are defined in the pod's configuration file.
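A minimal sketch of such a specification follows; the pod name, images, and commands are hypothetical placeholders.

```shell
# Hypothetical pod manifest with two init containers that run in order,
# each to completion, before the main application container starts.
cat <<'EOF' > init-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  initContainers:
  - name: wait-for-db      # runs first; must exit successfully
    image: busybox:1.36
    command: ['sh', '-c', 'until nc -z db 5432; do sleep 2; done']
  - name: fetch-config     # runs second
    image: busybox:1.36
    command: ['sh', '-c', 'echo fetching configuration']
  containers:
  - name: app              # starts only after both init containers succeed
    image: nginx:1.25
EOF
# On a live cluster: kubectl apply -f init-demo.yaml
```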
When a pod is queried, its status field (specifically initContainerStatuses) reflects the init containers' execution.
Spotting init containers
Init containers can be identified by their dedicated initContainers key.
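For instance (the pod JSON below is a trimmed, hypothetical kubectl response with made-up names):

```shell
# Trimmed, hypothetical output of `kubectl get pod init-demo -o json`.
cat <<'EOF' > init-pod.json
{"metadata": {"name": "init-demo"},
 "spec": {"initContainers": [{"name": "wait-for-db"}, {"name": "fetch-config"}],
          "containers": [{"name": "app"}]}}
EOF
jq -r '.spec.initContainers[]?.name' init-pod.json
# → wait-for-db
# → fetch-config
```

On a live cluster, kubectl get pods -o json | jq -r '.items[] | select(.spec.initContainers) | .metadata.name' lists every pod that defines init containers.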
Although init containers are intended to run first and before any main container in a pod, they are not actually the first containers to run. The first containers are the “pod infra” containers, commonly known as “pause” containers, which will be described in the next section.
Pod infra a.k.a. pause containers
The purpose of the pod infra container in Kubernetes is to provide a placeholder for system resources assigned to the pod such as cgroups and namespaces. This container remains active even when there are no other containers running in the pod, like in the transition from init to standard containers.
The most frequently used image for pod infra containers is the pause container image. This image is usually an exceedingly small container image that registers a few signal handlers and invokes the pause system call that suspends the calling thread until any of the registered signals are raised. Note that cloud service providers often modify this image to suit their needs.
The pod infra container’s image is defined by the kubelet either via the pod-infra-container-image command-line argument or the podInfraContainerImage flag in the kubelet configuration file.
Because the kubelet manages this type of container, the Kubernetes API is unaware of its existence. Unfortunately, there currently is no mechanism in place for the kubelet to report on these containers.
The API’s lack of visibility, coupled with the automatic execution of pause containers upon pod/container creation, enables stealthy persistence for any threat actor modifying these containers with an attacker-controlled image. This is reinforced by their attachment to every pod and their ability to inspect a pod’s content, traffic, and data. Let's walk through an example of how this kind of persistence can be achieved.
Gaining stealthy persistence by modifying the pause container
First, we identify the currently specified pod infra container by inspecting the kubelet daemon on the AWS EKS node. This requires node-level access or a pod that allows escaping to the node.
"10-kubelet-args.conf" reveals the pod-infra-container-image flag indicating the AWS self-hosted version of the pause container.
To replace the existing pause image, the new image must specify a default command in its Dockerfile, and the process it runs should be long-lived by default. In this example an nginx image is suitable, so we change the conf file accordingly.
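The edit can be sketched as follows. This is illustrative only: we operate on a local copy of the drop-in file (on an EKS node it typically lives under /etc/systemd/system/kubelet.service.d/), and the original image reference below is a stand-in for the AWS-hosted pause image.

```shell
# Local, hypothetical copy of the kubelet systemd drop-in 10-kubelet-args.conf.
cat <<'EOF' > 10-kubelet-args.conf
[Service]
Environment='KUBELET_ARGS=--node-ip=10.0.0.12 --pod-infra-container-image=602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/pause:3.5'
EOF
# Swap the pause image for an attacker-controlled, long-lived image.
sed -i "s|--pod-infra-container-image=[^ ']*|--pod-infra-container-image=nginx:1.25|" 10-kubelet-args.conf
grep -o "pod-infra-container-image=[^ ']*" 10-kubelet-args.conf
# → pod-infra-container-image=nginx:1.25
```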
After changing the kubelet daemon setting, the systemd daemon configuration must be reloaded. Since the kubelet daemon is already running, we also restart it:
systemctl daemon-reload
systemctl restart kubelet
Deploying any new pod should now start nginx as its pause container.
An nginx container is indeed created from within the node as part of pause-test-pod, and additional nginx instances will be created with every subsequent pod deployed on that node. As a pause container, nginx can be listed by directly querying the container runtime (Docker in our case) on the node.
Spotting pause containers
Pause containers are hidden by design and are not visible or accessible to Kubernetes users or administrators via K8s clients such as kubectl, nor are they surfaced in cloud providers' consoles for the same reason. Identifying currently running pause containers therefore requires node-level access and the ability to query the container runtime.
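For example, with node access and a Docker runtime (as in our walkthrough), pause containers can be filtered by their naming convention; with containerd or CRI-O, crictl pods lists the pod sandboxes instead. The docker ps output below is a hypothetical sample.

```shell
# On the node, pause containers show up in the runtime but not in the API.
# With Docker, their names follow the pattern k8s_POD_<pod>_<namespace>_...:
#   docker ps --filter "name=k8s_POD" --format '{{.Image}}\t{{.Names}}'
# Offline sketch against hypothetical `docker ps` output:
cat <<'EOF' > ps.out
nginx:1.25    k8s_POD_pause-test-pod_default_7c9e_0
busybox:1.36  k8s_app_pause-test-pod_default_7c9e_0
EOF
grep -c 'k8s_POD' ps.out
# → 1
```

Here the pause container of pause-test-pod is running the swapped-in nginx image, exactly the anomaly this check is meant to surface.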
So far, we have seen various kinds of containers that can be created by declaring a specific spec and then generating an instance of that spec. Now we will explore a container that can be attached to an already running pod: the ephemeral container.
Ephemeral containers
Ephemeral containers are mainly used for pod debugging purposes with the intent to lower the necessary footprint and complexity of the debugging process. Their main advantage is that they can be dynamically added to an existing pod. By working directly on a running pod, you can benefit from analyzing the actual resources, memory state, filesystem state, etc. However, this means they do not have explicit assurances or guarantees regarding resource allocation or execution conditions. They may not be allocated specific amounts of CPU or memory resources, and their resource usage may vary depending on the overall resource availability within the pod or cluster.
Creating ephemeral containers
Ephemeral containers can be created simply by using the kubectl debug command, e.g. kubectl debug -it <pod-name> --image=busybox --target=<container-name>.
It is also possible to create ephemeral containers by creating a copy of a pod with kubectl debug's --copy-to flag.
Spotting ephemeral containers
Ephemeral containers can be identified via their dedicated ephemeralContainers key.
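For example (the pod JSON below is a trimmed, hypothetical response for a pod with a debug container attached; all names are made up):

```shell
# Trimmed, hypothetical `kubectl get pod web-7d4b9 -o json` output.
cat <<'EOF' > pod.json
{"metadata": {"name": "web-7d4b9"},
 "spec": {"containers": [{"name": "web", "image": "nginx:1.25"}],
          "ephemeralContainers": [{"name": "debugger", "image": "busybox:1.36"}]}}
EOF
jq -r '.spec.ephemeralContainers[]?.name' pod.json
# → debugger
```

On a live cluster, kubectl get pods -o json | jq -r '.items[] | select(.spec.ephemeralContainers) | .metadata.name' surfaces every pod with an ephemeral container attached.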
The fact that ephemeral containers can be attached to running pods may be useful to malicious actors, since this facilitates gaining access to those pods' data and secrets.
In this blog post we covered various kinds of pods and containers and discussed how to identify them with the Kubernetes API or with node-level access. However, we also saw that the Kubernetes API is limited when it comes to consistently reporting on their existence; in fact, it sometimes omits them by design. Threat actors may leverage these types of pods and containers for persistence and stealth as they are often overlooked.
To summarize the API's ability or inability to list non-standard pods and containers:
Static pods: not listed directly by the API; visible only as mirror pods when the kubelet is configured and authorized to report them.
Init containers: listed, under the spec.initContainers key.
Pod infra (pause) containers: not listed at all; finding them requires node-level access to the container runtime.
Ephemeral containers: listed, under the spec.ephemeralContainers key.
Although the Kubernetes API remains a powerful tool for monitoring, it has its limitations and consequently demands a complementary monitoring solution such as workload runtime agents. This is critical because attackers are always looking for new ways to operate covertly in environments without disrupting existing workflows or creating anomalies.
It is imperative for security, DevOps, and development teams to be cognizant of what can and cannot be identified in a Kubernetes environment via its API, and to pay close attention to pods and containers like pod infra that are expected to “just be there”.
Can you monitor static pods in your environment? Can you determine the source of a pause container image, or whether any changes have been made to it? Is anyone allowed to attach an ephemeral container to a pod in your production environment? Is there supposed to be an init container in that deployment? Hopefully after reading our analysis, you have not only become more familiar with the various types of pods and containers and their associated risks, but you are also able to answer the questions above and thereby improve your security posture.
This blog post was written by Oren Ofer from Wiz Research as part of our ongoing mission to analyze threats to the cloud, build mechanisms that prevent and detect them, and fortify cloud security strategies.