Understanding runtime container scanning fundamentals
Runtime scanning is security monitoring of containers during execution. This means you watch what actually happens inside the container runtime instead of only checking what is baked into the image.
Build‑time scanning is different. It scans images in CI/CD to find known CVEs and misconfigurations before you deploy.
Runtime scanning answers a different question: how does this workload actually behave? It focuses on detecting live behaviors, active threats, and anomalies that only appear when containers execute under real production traffic.
Build‑time scanning: Looks at image contents, packages, and Dockerfiles before production.
Runtime scanning: Watches live processes, network calls, and file activity while containers run in production.
You need both. Build-time scanning stops some bad images from shipping. Runtime scanning catches active threats and behavioral anomalies that only show up under real traffic. A modern approach pairs agentless cloud posture assessment with lightweight eBPF node sensors for runtime, so teams see both what's deployed (configurations, permissions, vulnerabilities) and how it behaves under real production workloads. This unified view eliminates blind spots and reduces tool sprawl.

Agentless vs sensor-based runtime monitoring
Understanding the difference between agentless and sensor-based approaches helps you choose the right architecture:
Agentless cloud posture scanning:
What it covers: Cloud configurations, IAM policies, storage permissions, network topology
How it works: API-based scanning of cloud control plane without deploying agents
Pros: Zero deployment overhead, no performance impact, instant coverage
Cons: Cannot see runtime behavior, syscalls, or process activity
Best for: CSPM, CIEM, DSPM, configuration compliance
Sensor-based runtime detection:
What it covers: Process trees, syscalls, network connections, file activity, container behavior
How it works: eBPF sensors deployed as DaemonSets on each node
Pros: Deep visibility into live threats, behavioral analysis, forensic evidence
Cons: Requires deployment; adds a small but nonzero performance overhead (typically <5% CPU)
Best for: Runtime threat detection, incident response, behavioral baselines
The optimal approach: Combine both. Use agentless scanning for posture and configuration, plus lightweight eBPF sensors for runtime behavior. This gives you comprehensive coverage without heavy per-container agents.
The container threat model at runtime
A container threat model is a simple map of how attackers can abuse your running containers and what could break if they succeed. You want to think about the main runtime risks:
Container escape: Code breaks out of the container runtime and reaches the host OS.
Privilege escalation: Code inside the container gains more rights than intended, for example through a weak service account.
Lateral movement: An attacker uses one compromised pod to pivot into databases, queues, or other services.
These are core parts of your container threat model. Multiple industry reports document material business impact from container security incidents, including revenue loss, compliance penalties, and customer trust erosion. Runtime scanning helps you detect early signs of container escape, privilege escalation, and lateral movement before they escalate into full incidents.
Why traditional tools struggle with ephemeral workloads
Ephemeral workloads are containers and pods that are created and destroyed all the time. Traditional endpoint tools were designed for long‑lived VMs and laptops, so they struggle here.
Agents may not install or initialize before a short‑lived container finishes.
Inventory gets stale quickly because pods move across nodes.
Tools that only understand “servers” cannot easily map alerts back to deployments, namespaces, or services.
This is why you need runtime container security that understands Kubernetes and other orchestrators. It has to follow pods, not just static hosts.
Shared responsibility for container runtime security
Container runtime security is a shared responsibility model. No single team can own it alone.
Cloud providers secure the physical hosts, hypervisor, and managed control planes.
Platform and DevOps teams secure clusters, nodes, base images, and runtime config like RBAC and network policies.
Developers secure application code, dependencies, and how secrets and permissions are used at runtime.
Security teams design runtime security strategy, tune detections, and lead response.
When this works well, each group understands their slice of the shared responsibility model and how their choices affect the container runtime.
Advanced threat detection and behavioral analysis
Once you have basic runtime visibility, you want to detect not just events, but strange events. That is where behavioral analysis comes in.
Behavioral baselines for containers
A behavioral baseline is a simple description of what “normal” looks like for a workload. You build it by watching the container runtime over time.
For example, you can learn:
Which processes normally run in this pod.
Which domains and ports it usually talks to.
Which files or mounted volumes it typically touches.
Then you treat deviations as signals. A new process tree, a sudden outbound connection, or a new file path can all be hints of compromise.
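A baseline like this can be sketched in a few lines. The event names and structure below are illustrative; a real sensor would feed this from kernel events rather than hand-built tuples.

```python
class BehavioralBaseline:
    """Learn what 'normal' looks like for one workload, then flag deviations."""

    def __init__(self):
        self.processes = set()      # binaries normally seen in the pod
        self.destinations = set()   # (host, port) pairs it usually contacts
        self.paths = set()          # file paths it typically touches

    def _bucket(self, kind):
        return {"process": self.processes,
                "connect": self.destinations,
                "open": self.paths}[kind]

    def learn(self, event):
        kind, value = event
        self._bucket(kind).add(value)

    def deviations(self, event):
        kind, value = event
        return [] if value in self._bucket(kind) else [f"new {kind}: {value}"]


baseline = BehavioralBaseline()
# Learning phase: observe the workload during a known-good period.
for ev in [("process", "nginx"),
           ("connect", ("api.internal", 443)),
           ("open", "/etc/nginx/nginx.conf")]:
    baseline.learn(ev)

# Detection phase: a shell spawning inside the pod is a deviation.
print(baseline.deviations(("process", "nginx")))    # []
print(baseline.deviations(("process", "/bin/sh")))  # flagged as new
```

In production you would also age out stale entries and require a minimum learning window before enforcing, so a mid-deploy restart does not flood you with false positives.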
System call monitoring and eBPF sensors
System call monitoring means watching low‑level kernel calls like open, execve, and connect. These calls tell you exactly what processes are doing.
eBPF sensors let you do this safely from the kernel without heavy agents or custom kernel modules.
System call monitoring: Shows you process behavior with fine detail.
eBPF sensors: Attach to the kernel and observe container runtime events with low overhead.
This combination is a strong base for runtime container security and Kubernetes runtime security best practices. eBPF sensors capture syscalls, process trees, and network activity at the kernel level, giving you visibility into container behavior without modifying images or injecting per-container agents.
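To make the data concrete, here is the kind of record such a sensor produces, sketched as text. The key=value line format is hypothetical; real eBPF sensors emit binary structs over a perf ring buffer, not text.

```python
def parse_event(line):
    """Parse one hypothetical 'key=value' sensor line into a dict."""
    return dict(field.split("=", 1) for field in line.split())

def is_watched(event, watched=("execve", "connect", "open")):
    """Keep only the syscalls the detection rules care about."""
    return event["syscall"] in watched

raw = "pid=1234 comm=nginx syscall=execve arg=/bin/sh"
event = parse_event(raw)
print(event["comm"], event["syscall"], is_watched(event))
```

Even this toy record carries the essentials: which process, which syscall, with what argument — enough to reconstruct a process tree when events are chained by pid.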
Using machine learning to spot unknown attacks
Machine learning can help you spot zero‑day attacks and unknown patterns. Instead of matching signatures, models learn normal behavior and flag outliers.
You might use ML to notice:
A pod suddenly reaching external IPs it never contacted before.
A sidecar container reading secrets it never touched in the past.
A process pattern that matches known “reverse shell” behavior even on a new binary.
These are classic signals for container threat detection. You do not need to know the CVE number to see that something is wrong.
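The first example above can be reduced to a toy scoring function. This is a frequency-based stand-in for a learned model, not a real detector: rare or never-seen destinations score high, routine ones score low.

```python
from collections import Counter

def anomaly_score(history: Counter, destination: str) -> float:
    """Score 0.0 (routine) .. 1.0 (never seen) for an outbound destination."""
    total = sum(history.values())
    if total == 0 or destination not in history:
        return 1.0  # no history at all, or a brand-new destination
    return 1.0 - history[destination] / total

# Hypothetical per-pod connection history gathered at runtime.
history = Counter({"api.internal:443": 980, "metrics.internal:9090": 20})

print(anomaly_score(history, "api.internal:443"))    # low: routine traffic
print(anomaly_score(history, "198.51.100.7:4444"))   # 1.0: never contacted before
```

Production models add time decay, peer-group comparison across replicas, and thresholds tuned per namespace, but the core idea is the same: flag what the workload has never done before.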
Correlating runtime signals with cloud context
Runtime signals get much more useful when you add cloud context. You want to know not just “what happened” but “how bad is it if this pod falls?”
Is the pod internet‑exposed or behind several layers of network control?
Does its identity have write access to critical data stores?
Is there a path from this pod to sensitive buckets, queues, or AI models?
When you connect runtime events to identities, network topology, and data locations, you can see full attack paths and estimate blast radius. With a security graph that links pods, identities, networks, and data stores, teams can prioritize based on real impact. For example, a suspicious process in an internet-exposed pod with write access to customer databases gets immediate attention, while the same process in an isolated dev namespace gets lower priority. This graph-based context helps teams fix the highest-impact issues first instead of chasing every alert.
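The prioritization logic described above can be sketched as a simple scoring function. The fields, weights, and tier names are illustrative assumptions; a real security graph derives exposure and data access from network topology and IAM analysis rather than hand-set booleans.

```python
def blast_radius_priority(alert: dict) -> str:
    """Rank a runtime alert by its cloud context, not just the event itself."""
    score = 1  # every runtime detection starts with some weight
    if alert["internet_exposed"]:
        score += 3   # reachable from the internet
    if alert["writes_sensitive_data"]:
        score += 3   # identity can modify critical data stores
    if alert["privileged_identity"]:
        score += 2   # powerful role or service account
    return "page-now" if score >= 6 else "review" if score >= 3 else "backlog"

# Same suspicious process, very different contexts.
prod = {"internet_exposed": True, "writes_sensitive_data": True,
        "privileged_identity": False}
dev = {"internet_exposed": False, "writes_sensitive_data": False,
       "privileged_identity": False}

print(blast_radius_priority(prod))  # page-now
print(blast_radius_priority(dev))   # backlog
```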
Performance optimization for runtime scanning
You care about security, but you also care about performance. Good runtime scanning respects both.
Prefer lightweight instrumentation
Lightweight instrumentation means you observe the container runtime without bloating it. Here, node-level eBPF instrumentation (typically deployed as a Kubernetes DaemonSet on each node) is often a better choice than per-container agents. The DaemonSet approach provides cluster-wide visibility with a single sensor per node, reducing operational complexity and resource overhead.
eBPF instrumentation: Gives you node‑level visibility and process details without changing container images.
Traditional agents: Often require injection into each container and can be harder to maintain at scale.
For many teams, using node‑level eBPF collectors is the cleanest way to cover large clusters with minimal friction.
Kernel and platform prerequisites
eBPF-based runtime scanning requires specific kernel and platform support:
Linux kernel version: 4.14 or higher for basic eBPF; 5.8+ recommended for full CO-RE (Compile Once, Run Everywhere) support
Managed Kubernetes: EKS (Amazon Linux 2), GKE (Container-Optimized OS), AKS (Ubuntu 18.04+) all support eBPF by default
Windows containers: eBPF is Linux-only; Windows containers require alternative instrumentation (ETW, Sysmon)
Bottlerocket and Flatcar: Fully supported with read-only root filesystems
Kernel modules: eBPF does not require custom kernel modules, reducing operational risk
Check your kernel version with uname -r and verify eBPF support with bpftool feature before deploying sensors.
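If you automate that preflight check, the version comparison is the only subtle part: kernel releases must be compared numerically, not as strings. A minimal sketch, using the thresholds listed above:

```python
def ebpf_support_level(release: str) -> str:
    """Classify eBPF readiness from a 'uname -r' style kernel release string."""
    major, minor = (int(part) for part in release.split(".")[:2])
    if (major, minor) >= (5, 8):
        return "full (CO-RE)"   # Compile Once, Run Everywhere
    if (major, minor) >= (4, 14):
        return "basic eBPF"
    return "unsupported"

print(ebpf_support_level("5.15.0-1051-aws"))  # full (CO-RE)
print(ebpf_support_level("4.9.0"))            # unsupported
```

This only checks the version; `bpftool feature` remains the authoritative test, since distributions can compile kernels with eBPF features disabled.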
Use sampling in high‑volume environments
In very busy clusters, you do not need full detail on every event. You can sample normal traffic and still keep full detail for suspicious activity.
You can, for example:
Collect full traces only when rules or models see something odd.
Sample low‑risk namespaces at a lower rate.
Turn up collection temporarily during an investigation.
These sampling strategies keep runtime container protection affordable and fast, especially in large deployments.
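The decision logic behind those strategies fits in a few lines. The namespaces and rates below are made up for illustration:

```python
import random

def should_capture(event: dict, namespace_rates: dict, suspicious: bool = False) -> bool:
    """Decide whether to keep full detail for a runtime event.

    Suspicious events are always captured in full; routine traffic is
    sampled at a per-namespace rate (defaulting to everything).
    """
    if suspicious:
        return True  # rules or models flagged this: never sample it away
    rate = namespace_rates.get(event["namespace"], 1.0)
    return random.random() < rate

rates = {"dev": 0.01, "staging": 0.10}  # unlisted namespaces keep full detail
event = {"namespace": "dev", "syscall": "connect"}

print(should_capture(event, rates, suspicious=True))  # True: full trace kept
```

Raising a namespace's rate temporarily during an investigation is then a one-line config change rather than a redeploy.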
Focus deep scanning on critical workloads
Risk‑based prioritization means you tune runtime scanning depth based on impact. You go deeper on what really matters.
Internet‑exposed services.
Pods with powerful roles or access to sensitive data.
Control plane add‑ons and CI/CD agents.
By doing this, you put your strongest cloud runtime security controls around the workloads where failure would hurt the most.
Control resource use with limits and throttling
You should treat runtime security components like any other workload. Give them clear resource budgets.
Set CPU and memory limits for collectors and analysis pods.
Use throttling to keep bursty event streams from starving application pods.
This keeps performance stable and prevents your security layer from becoming its own outage risk.
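A common way to implement that throttling is a token bucket: events spend tokens, and tokens refill at a steady rate, so short bursts pass while sustained floods are shed. The rate and capacity below are illustrative; tune them to the CPU and memory limits you set on the collector pods.

```python
import time

class TokenBucket:
    """Throttle bursty event streams so the collector stays within budget."""

    def __init__(self, rate_per_sec: float, capacity: float):
        self.rate = rate_per_sec      # tokens refilled per second
        self.capacity = capacity      # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # drop or defer the event instead of starving app pods

bucket = TokenBucket(rate_per_sec=1000, capacity=50)
accepted = sum(bucket.allow() for _ in range(200))
print(f"accepted {accepted} of 200 burst events")
```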
Runtime incident response and forensics
Finding a runtime issue is only step one. You also need a fast, repeatable way to investigate and respond.
Capture evidence from ephemeral workloads
Ephemeral workloads can vanish before you even get an alert. You need container forensics that preserve key evidence.
Container forensics: Collect process trees, network connections, and file activity tied to each container.
Keep short‑term history so you can reconstruct what happened even after the pod is gone.
This helps you answer simple but important questions: What ran? What changed? What did it talk to?
Automated containment playbooks
Containment playbooks are predefined actions you take when something bad happens. Automating them saves time when every second counts.
Examples include:
Restarting or quarantining a single deployment.
Cutting egress from a namespace showing suspicious connections.
Rotating secrets or tokens used by the compromised pod.
These containment playbooks reduce blast radius while you investigate the full scope.
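Structurally, a playbook is just a mapping from alert type to an ordered list of actions. The alert types and action names below are hypothetical labels; a real implementation would invoke the Kubernetes API or your cloud provider's SDK for each step.

```python
# Predefined containment steps per alert type, most urgent first.
PLAYBOOKS = {
    "reverse_shell": ["quarantine_pod",
                      "cut_namespace_egress",
                      "rotate_pod_secrets"],
    "crypto_miner": ["quarantine_pod",
                     "open_incident_ticket"],
}

def containment_actions(alert_type: str) -> list:
    """Return the predefined containment steps for an alert type."""
    # Unknown alert types still get a safe default instead of silence.
    return PLAYBOOKS.get(alert_type, ["open_incident_ticket"])

print(containment_actions("reverse_shell"))
```

Keeping the mapping declarative like this makes playbooks easy to review, version, and test before an incident forces the question.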
Investigation graphs for clearer stories
An investigation graph is a visual map of how an incident unfolded. It connects runtime events with resources and identities. An investigation graph that auto-stitches detections, cloud events, and runtime signals helps analysts jump straight to root cause instead of manually correlating logs across multiple tools. For example, the graph might show: suspicious process → launched by pod X → using service account Y → with write access to bucket Z → containing customer PII. This complete story accelerates triage and remediation.
Investigation graphs: Show which pod did what, on which node, with which permissions, and which data stores it touched.
This gives your team and your stakeholders a clear story instead of a wall of logs.
Event retention and compliance
Runtime event retention means you keep enough history to support audits and deep investigations. You do not need to store everything forever, but you should have a clear policy.
Define how long you keep key runtime security events.
Make sure retention supports your internal needs and external regulations. For example, PCI DSS requires at least three months of immediately available audit logs and one year of archived logs. HIPAA requires six years of audit trail retention. Define your retention policy based on the most stringent framework you must satisfy, then automate enforcement through your runtime security platform.
This also supports runtime detection engines that use historical event data to improve signal quality, refine behavioral baselines, and reduce false positives over time.
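Automating "most stringent framework wins" is straightforward. The PCI DSS and HIPAA figures below match the text; treating them as a lookup table is the sketch, not an authoritative compliance mapping.

```python
# Minimum retention periods in days per framework requirement.
FRAMEWORK_RETENTION_DAYS = {
    "pci_dss_online": 90,     # immediately available audit logs
    "pci_dss_archive": 365,   # archived logs
    "hipaa": 6 * 365,         # six-year audit trail
}

def required_retention_days(frameworks: list) -> int:
    """Retention must satisfy the most stringent applicable framework."""
    return max(FRAMEWORK_RETENTION_DAYS[f] for f in frameworks)

print(required_retention_days(["pci_dss_archive", "hipaa"]))  # 2190
```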
Integration strategies for comprehensive security
Runtime scanning really shines when it is wired into your wider security and delivery ecosystem. You want runtime data to flow to the people and systems that can act on it.
Connect runtime scanning with CI/CD
CI/CD integration lets you trace production incidents back to source code. You can then fix the root issue, not just the symptom.
Link runtime alerts to image tags, commits, and pipelines.
Open issues directly for owning teams with clear context.
This turns runtime security into a feedback loop for developers instead of a separate “security thing.”
Feed runtime events into your SIEM
A SIEM is your main hub for security logs and alerts. Runtime scanning should plug into it.
SIEM correlation: See container runtime events alongside identity logs, network traffic, and application logs in a unified view. This lets your SOC connect a suspicious container process with the cloud identity that launched it, the network destinations it contacted, and the application context that triggered the behavior.
This gives your SOC one place to understand the full impact of a threat, instead of triaging in separate tools.
Tie runtime findings into vulnerability management
Not every vulnerability is equal. A finding that is actively exploited at runtime is far more urgent than one that never runs.
Flag vulnerabilities that match ongoing runtime attacks.
Raise priority for workloads where runtime behavior and misconfigurations intersect.
This is where vulnerability correlation pays off. You reduce noise and focus on the combinations that form real attack paths.
Enforce learnings with admission controllers
Admission controllers (such as Pod Security Admission, OPA Gatekeeper, and Kyverno) are Kubernetes components that validate or reject pod specifications before they start. They are a good place to enforce lessons learned from runtime incidents. For example, if runtime scanning reveals that containers with CAP_SYS_ADMIN frequently become attack vectors, you can configure Gatekeeper to block any new pod requesting that capability unless explicitly approved.
You can, for example:
Block images that violate hardened runtime policies.
Deny pods that request dangerous privileges unless explicitly approved.
This closes the loop between runtime security and future deployments and keeps your overall container runtime security posture steady.
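The decision an admission policy makes can be illustrated as a small validation function over a pod-spec dict. This is a toy stand-in: real policy engines such as Gatekeeper or Kyverno express the same rule declaratively in Rego or YAML, and Kubernetes specs list capabilities without the CAP_ prefix (CAP_SYS_ADMIN appears as SYS_ADMIN).

```python
def admit_pod(pod_spec: dict, approved=frozenset()):
    """Reject pods requesting dangerous capabilities unless explicitly approved."""
    blocked = {"SYS_ADMIN", "NET_ADMIN"}  # capabilities runtime incidents flagged
    for container in pod_spec.get("containers", []):
        caps = set(container.get("securityContext", {})
                            .get("capabilities", {})
                            .get("add", []))
        bad = (caps & blocked) - approved
        if bad:
            return False, f"{container['name']} requests {sorted(bad)}"
    return True, "allowed"

risky = {"containers": [{
    "name": "app",
    "securityContext": {"capabilities": {"add": ["SYS_ADMIN"]}}}]}

print(admit_pod(risky))  # rejected with the offending capability named
```

Because the rule runs before the pod starts, a lesson learned from one runtime incident automatically protects every future deployment.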
How Wiz enhances runtime container security
Wiz is built to give you full-stack, context-rich runtime security without making you stitch together many tools. It covers everything from the node kernel to the cloud control plane. Combining agentless posture assessment with runtime detection and code-to-cloud traceability reduces tool sprawl and speeds joint Sec/DevOps remediation. Instead of toggling between separate CSPM, CWPP, CIEM, and vulnerability tools, teams work from a single graph showing how risks connect across the entire cloud environment.
Wiz Defend uses lightweight eBPF-based node sensors (deployed via DaemonSet) to capture deep runtime activity without heavy per-container agents. Each node runs a single sensor that monitors all containers on that node, reducing operational overhead while providing comprehensive syscall, process, and network visibility.
The Wiz Security Graph connects runtime alerts with misconfigurations, identities, vulnerabilities, and data locations to show full attack paths.
Attack path analysis highlights toxic combinations where runtime threats meet excessive permissions and risky configs.
Code‑to‑cloud traceability lets you jump from a runtime incident straight back to the image, pipeline, and source code that introduced it.
WizOS hardened base images give you near‑zero CVE starting points, shrinking the attack surface before containers even launch.
Together, these pieces turn runtime container scanning into a clear, prioritized, and developer‑friendly security practice.
See runtime container security in action
Ready to prioritize real runtime risk and cut MTTR? See how graph-driven context and eBPF runtime sensors work together in a live environment.