Serverless Security Explained

Wiz Experts Team
Main takeaways about serverless security:
  • Serverless security is about protecting workloads where the cloud provider manages the infrastructure, but you still own the code and configuration risks.

  • Serverless brings benefits like reduced operational overhead and built-in isolation, but also introduces new attack surfaces and permission challenges.

  • Common threats include event injection, over-privileged functions, supply chain risks, and billing attacks.

  • Best practices include granting minimal permissions, using API gateways, scanning code early, and monitoring all activity.

  • Wiz provides agentless, full-stack visibility and runtime protection for serverless environments, helping you prioritize and remediate real risks.

Serverless computing: A refresher

Serverless computing is a cloud execution model where developers write and deploy code without managing underlying infrastructure. Cloud providers automatically handle server provisioning, scaling, and maintenance.

This approach eliminates traditional infrastructure management tasks, including security patches for operating systems and runtimes. However, serverless doesn't mean vulnerability-free—new security challenges emerge that require specialized approaches.

Security remains a concern in serverless computing. Indeed, a 2023 Cloud Security Alliance report revealed that over 70% of companies still lack dedicated serverless security controls. The shared responsibility model of cloud services, including serverless computing, shifts some security responsibilities to the cloud service provider but doesn't eliminate all security concerns for the client.

This article explains serverless security, introduces common security threats for serverless applications, and gives actionable advice about preventing them. Ready to make the most of serverless architecture? Let's dive in.

Uncover Vulnerabilities Across Your Clouds and Workloads

Learn why CISOs at the fastest growing companies choose Wiz to secure their cloud environments.

For information about how Wiz handles your personal data, please see our Privacy Policy.

What is serverless security?

Serverless security is the discipline of protecting event‑driven, function‑based workloads and the managed cloud services they call by securing your code, configurations, identities, data, and runtime behavior.

In the shared responsibility model, the cloud provider patches and operates the infrastructure (OS, runtimes, scaling). You are responsible for application logic and dependencies, permissions and access paths, secrets and data protection, service configurations, and every event trigger.

Because serverless is ephemeral and highly distributed, effective serverless security centers on:

  • Least‑privilege access for every function and service

  • Input and event validation to prevent injection and abuse

  • Secret management and avoidance of unintended state

  • Supply chain hygiene across libraries and CI/CD

  • Network and exposure controls (API gateways, private networking, secure function URLs)

  • Runtime monitoring, logging, and tracing to detect and respond to anomalies

  • Cost and rate safeguards to mitigate denial‑of‑wallet attacks

What benefits do serverless architectures bring?

Serverless architectures offer compelling advantages that drive adoption across organizations:

Flexible managed services

On-demand billing eliminates costs when functions aren't running. You pay only for actual execution time, not idle infrastructure.

FaaS platforms support multiple programming languages without requiring infrastructure management. This lets you build complete backends while avoiding OS maintenance, security patches, and fixed monthly server costs.

Improved security

Serverless environments provide several security advantages:

Managed infrastructure security: Cloud providers handle OS and runtime patching, reducing your security maintenance burden.

Stateless execution: Functions execute in isolation without persistent state, limiting what attackers can access between executions.

Reduced attack surface: Smaller code footprints and purpose-built functions make it easier to implement least-privilege permissions and track potential vulnerabilities.

On-demand billing

On-demand billing is a huge benefit that helps your bottom line. As we've mentioned, you only pay for what you use: if no function is running and no data sits in a database, you pay nothing. A single serverless execution can cost more than the same work on traditional infrastructure running at full capacity all month, but full utilization is rare in practice, so shifting the cost of idle infrastructure to the cloud provider is usually a win.

Why do serverless apps require security?

Serverless complexity creates new security challenges despite architectural benefits. Organizations often deploy hundreds of functions across multiple services, creating an expanded attack surface that's difficult to monitor comprehensively.

Each function represents a potential entry point requiring individual permission management. As function counts grow, security complexity increases exponentially, making traditional monitoring approaches inadequate.

Common serverless security threats

There are several reasons a serverless architecture might be susceptible to security vulnerabilities. Here are the seven most common:

1. Increased attack surface

Serverless architectures can consist of dozens, sometimes even hundreds, of small services that form a single application. This poses a risk for multiple reasons:

  • More services mean more cognitive load on the engineers who maintain them. While the managed nature of these services can lighten that load somewhat, issues might slip through the cracks if the number of services reaches a critical mass.

  • You can easily expose each serverless function to the public, creating an entry point into your system. It's critical to keep every serverless function in check so they don't become a liability.

  • Managing permissions for many services can become a full-time job if you don't implement a reasonable process from the outset. Otherwise, giving each function full access is tempting when a deadline looms.

2. Event data injection

Injection vulnerabilities are the bane of every publicly exposed service. (Think JavaScript injections for HTML, SQL injections for relational databases, and event injection for serverless architectures.)

If you integrate user-supplied parts of your event data into commands without sanitization, your app can be susceptible to an injection attack. For example, when you build a shell command via string concatenation and use a URL or filename taken from user input, you're introducing considerable risk.
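As a minimal sketch of the difference, the snippet below contrasts an unsafe string-concatenated command with passing arguments as a list, so the shell never interprets the user-supplied value (the function names and the `cat` command are illustrative, not from any specific codebase):

```python
import shlex

def build_command_unsafe(filename: str) -> str:
    # DANGEROUS: user input is concatenated directly into a shell string.
    # An attacker could submit "report.txt; rm -rf /" as the filename.
    return "cat " + filename

def build_command_safe(filename: str) -> list:
    # Safe: arguments are passed as a list (e.g., to subprocess.run),
    # so the injected ";" is treated as part of one literal argument.
    return ["cat", filename]

malicious = "report.txt; rm -rf /"
print(build_command_unsafe(malicious))  # injected command survives intact
print(build_command_safe(malicious))    # single literal argument, no shell parsing
print(shlex.quote(malicious))           # extra guard if a string form is unavoidable
```

The same principle applies beyond shell commands: use parameterized queries for databases and schema validation for event payloads rather than assembling strings by hand.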

3. Over-privileged functions

If you follow the principle of least privilege, your application will end up with air-tight permissions. In theory, the permissions a serverless function requires are easier to assess than with monolithic services. After all, functions are small and purpose-built.

However, incomplete knowledge of available permissions or time pressure can lead an engineer to assign more permissions to a function than it requires. This is a common issue: one analysis found that 85% of credentials with elevated privileges remain untouched over a 90-day period. And when a function that grew too large is split up, each new function typically needs only a fraction of its ancestor's permissions, yet engineers often forget to trim them.

4. Compromised third-party code

While OS and runtime maintenance is your provider's responsibility, you must choose libraries and frameworks and keep your application code up to date. Supply chain attacks are on the rise, and if you rely on third-party dependencies without checking their safety, you might install something that compromises your functions. Wiz's State of Code Security Report 2025 found that 61% of organizations have secrets exposed in public repositories, underscoring the risks of inadequate dependency management.

5. Accidental state

Although serverless functions are commonly considered stateless, that's not entirely true. On a cold start, the runtime loads your function from scratch; but if the execution environment hasn't been idle long enough to be evicted and receives another event, the runtime reuses the already loaded function. Everything your code does outside the handler function is cached and becomes state that could introduce a security vulnerability.

6. Side channel attacks

Using a VPC (a private network inside the cloud without direct public internet access) can lead to the false assumption that all services inside the VPC are secure because nobody can directly access the network from the outside. This assumption can lead people to give internal services more privileges than required. That's a big problem: an external attacker has many tools in their arsenal even without direct access, and even peered VPCs can be at risk.

7. Billing attacks

Paying on demand is great, but there are downsides. A denial of wallet (DoW) attack is designed to take an app offline by racking up usage charges until the owner has no money to pay for it. According to OWASP, DoW attacks are more of a threat in serverless than DoS attacks.

A few simple serverless security best practices

Effective serverless security requires proactive measures that address the unique challenges of function-based architectures. These seven practices provide comprehensive protection:

1. Grant minimal permissions

Always follow the principle of least privilege. If a function doesn't write to a service, don't give it write access. If a function uses only one bucket, don't give it access to all buckets.
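To make the contrast concrete, here is a hypothetical pair of IAM-style policies expressed as Python dicts (the bucket name `reports-bucket` and the policy shapes are placeholders, not taken from any real account): the first grants read-only access to one bucket, while the second grants every S3 action on every resource.

```python
import json

# Least privilege: read objects from exactly one (hypothetical) bucket.
least_privilege = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::reports-bucket/*",
    }],
}

# Over-privileged: the tempting shortcut under deadline pressure.
over_privileged = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "s3:*",   # every S3 action...
        "Resource": "*",    # ...on every bucket in the account
    }],
}

print(json.dumps(least_privilege, indent=2))
```

If the wildcard function is compromised, the attacker inherits account-wide S3 access; the least-privilege version limits the blast radius to read operations on a single bucket.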

2. Take advantage of API gateways

Don't expose all of your serverless functions directly to the public; use an API gateway instead. That way, you have only one entry point into your application and can manage public access in a central location.

3. Employ command query responsibility separation

Split your functions into reading and writing functions. This makes the code footprint of each function smaller and easier to monitor, and if one of the two is compromised, the others are likely to be unaffected.
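A minimal sketch of this split, assuming an in-memory dict stands in for a real data store (all names here are illustrative): the query handler can be deployed with read-only permissions, and only the command handler needs write access.

```python
# Stand-in for a real data store (e.g., a database table).
ORDERS = {}

def query_handler(event, context=None):
    # Read path: deploy this function with read-only permissions.
    return ORDERS.get(event["order_id"])

def command_handler(event, context=None):
    # Write path: the only function that needs write permissions.
    ORDERS[event["order_id"]] = event["payload"]
    return {"status": "created"}

command_handler({"order_id": "42", "payload": {"item": "book"}})
print(query_handler({"order_id": "42"}))  # {'item': 'book'}
```

Because each handler touches a narrower slice of the system, compromising the read function does not grant write access, and vice versa.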

4. Scan your code

Follow the shift-left approach and solve your issues early in development. Utilize code security scanners that run in the IDE and your CI/CD pipeline to ensure you catch issues before they hit the cloud.

5. Prioritize monitoring, logging, and tracing

Use monitoring and observability tools and services in production. Otherwise, you’ll have no way to discover what went wrong when you got hacked. Monitoring, logging, and tracing are essential components of ongoing maintenance.
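One low-effort starting point is structured logging from inside the handler: emitting one JSON line per invocation makes logs searchable and correlatable by whatever monitoring stack you use. The field names below are illustrative, not a required schema.

```python
import json
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("orders-fn")

def handler(event, context=None):
    # One structured log line per invocation; the request ID (if the
    # platform provides one on the context object) ties log entries
    # to traces during an investigation.
    log.info(json.dumps({
        "event": "invocation",
        "source": event.get("source", "unknown"),
        "request_id": getattr(context, "aws_request_id", None),
    }))
    return {"ok": True}

print(handler({"source": "api-gateway"}))
```

From there, forward these logs to a central store and add distributed tracing so a single request can be followed across all the functions it touches.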

6. Secure function URLs

If API gateways aren't an option and you must use function URLs, make sure to keep them secure. We've published a detailed guide on our blog, so check it out.

Pro tip

Lambda function URLs may be simple, but like any other externally exposed resource in your cloud environment, they must be properly secured. While functions residing behind an API gateway or load balancer rely on the secure configuration of those frontend services, function URLs must be independently secured, and misconfigured instances pose an attractive target for malicious actors hoping to cause damage or gain access to sensitive data.


7. Leverage agentless scanning

Every sufficiently complex system will include multiple technologies to complete its tasks. You might not be able to go 100% serverless for various reasons. But whether you’re completely serverless or partially serverless, it’s essential to have full visibility into your infrastructure. 

Wiz's approach to serverless security

Comprehensive serverless security requires specialized tools that address both traditional and emerging serverless threats. Wiz provides end-to-end protection for serverless workloads, including serverless containers such as AWS Fargate and Azure Container Apps.

The Wiz platform delivers critical serverless security capabilities:

  1. Enhanced Visibility: The Wiz runtime sensor now provides deep visibility into serverless container processes, even without direct host access.

  2. Threat Detection and Response: Real-time detection and response capabilities allow organizations to identify and mitigate threats promptly.

  3. Custom Rule Creation: Users can create tailored rules to detect suspicious processes and network behavior, with the ability to trigger specific response actions.

  4. Runtime Hunting: The sensor monitors all serverless container events, centralizing this data to facilitate proactive threat hunting and simplify investigations.

  5. Vulnerability Validation: Wiz helps prioritize remediation efforts by identifying which vulnerabilities are actually exploitable in the runtime environment.

This extension of Wiz's cloud-native security platform ensures that organizations can maintain robust security measures across their entire cloud infrastructure, from code to runtime, without sacrificing the agility and scalability benefits of serverless architectures.


Frequently asked questions about serverless security