AWS S3 Security Best Practices: A Comprehensive Guide

S3 security best practices main takeaways:
  • S3 security requires a layered approach that combines IAM roles, S3 bucket policies, and network controls, like VPC endpoints, for fine-grained access control.

  • Implement features like S3 Object Lock, encryption, and replication to prevent both accidental and malicious data loss.

  • Detect suspicious activity and ensure compliance by tracking data access and configurations in real time using tools such as AWS CloudTrail, CloudWatch, and Wiz.

What is S3 security?

Amazon S3 security is the combination of AWS tools and best practices that protect data in your buckets from unauthorized access, misuse, and loss. Because threats to cloud storage keep growing, organizations must implement proactive defenses and leverage automated tools to stay ahead of them.

Securing S3 buckets can be especially challenging for cloud-native applications that run numerous ephemeral workloads. Fortunately, you can easily address these challenges by following proven S3 security best practices and utilizing automated tools, like Wiz, for end-to-end visibility and proactive issue detection.

S3 security and the shared responsibility model

The first step toward mastering S3 security is understanding where Amazon’s responsibilities stop and yours start. AWS uses the shared responsibility model to define its responsibilities and outline customers’ responsibilities.

Simply put, AWS secures the infrastructure, but customers are fully responsible for securing what runs in the cloud, including S3 buckets.

The shared responsibility model

This means you, as a customer, are responsible for controlling who can access and modify your S3 resources. You can achieve this by following the least privilege principle—granting only the necessary permissions for a task and no more. The least privilege principle applies to all the permission systems AWS offers for S3, with specific implementation depending on which systems your team uses. Developers often use one or both of the following approaches to implement least privilege in Amazon S3:

  • Leverage identity and access management (IAM) to assign broader permissions, such as allowing a user read-only access to an S3 bucket.

  • Implement access control lists (ACLs) to apply per-object permissions, such as granting read-only access to a single object within a bucket. Note that AWS now disables ACLs by default and recommends expressing such rules through bucket policies in most cases.

The goal is to limit access as much as possible to reduce risks and improve security.

Pro tip

Always combine the principle of least privilege access with strong encryption. Limiting permissions keeps your sensitive data safe from unnecessary exposure, while encryption protects it even if someone bypasses your permissions. For comprehensive protection, consistently maintain AWS S3 security best practices across all environments—such as development, testing, and production—to avoid leaving weak links attackers can exploit.

Challenges with the shared responsibility model

The shared responsibility model also has its challenges. While Amazon S3 makes file storage simple, this simplicity can create blind spots. Many organizations assume AWS “has it covered” and underestimate their own cloud security responsibilities. As a result, they overlook critical best practices, such as enabling encryption at rest and in transit, enforcing least privilege, logging access for audits, and blocking public access to S3, and these gaps often contribute to data breaches.

For example, Microsoft faced massive data exposure in 2022 due to a misconfigured cloud storage endpoint. Research from Wiz shows that teams frequently fail to lock down access, with 76% of companies allowing some third-party roles that attackers can exploit to gain complete control over the organization's cloud resources. To prevent such risks, it’s critical to understand and manage your side of the shared responsibility model.

4 pillars of effective S3 security

To build a strong security foundation for your S3 buckets, focus on these key pillars:

  • Access control: Ensure that only authorized users or services can list, read, or write to your S3 buckets to prevent unauthorized access and data leaks.

  • Encryption and data protection: Encrypt stored data (data at rest) and data in transit over the network to maintain confidentiality and integrity.

  • Network and boundary controls: Limit where and how users and developers can access S3 data across VPCs, network endpoints, and the public internet.

  • Observability and auditability: Review and analyze logs and metrics to detect suspicious activity, investigate incidents, and respond quickly to security issues.

Each of these pillars is vital in protecting the integrity of your S3 bucket, preventing data exposure, and enabling root cause analysis during security events.

S3 security best practices and recommendations

Even a single misconfiguration, like leaving a sensitive bucket publicly accessible, can lead to data breaches, compliance violations, or downtime. To strengthen your Amazon S3 defenses and keep your data safe, implement the following best practices:

Implement least privilege access

Least privilege is a cornerstone of AWS security. For S3, it means granting only the exact permissions a user, application, or service needs—and nothing more. If a service only needs to read objects in one bucket, scope its IAM role to that bucket and limit it to s3:GetObject. Avoid granting broad permissions, like s3:* or access to all buckets, unless absolutely necessary.

Use IAM policies with resource-level permissions, and consider adding conditions (for example, restricting access to a specific VPC or IP range). Enable tools like IAM Access Analyzer or AWS Config to detect overly permissive policies.
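A least-privilege policy of the kind described above can be sketched as follows. This is a minimal illustration, not a production policy: the bucket name and VPC ID are placeholders you would replace with your own resources.

```python
import json

# Minimal sketch of a read-only IAM policy scoped to a single bucket,
# with an optional condition restricting access to one VPC.
# "example-bucket" and the VPC ID below are placeholders.
read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadOnlyFromVpc",
            "Effect": "Allow",
            # Only the one action the workload needs, nothing broader.
            "Action": ["s3:GetObject"],
            # Only objects in the one bucket it needs.
            "Resource": "arn:aws:s3:::example-bucket/*",
            # Optional: only honor requests arriving through a specific VPC.
            "Condition": {
                "StringEquals": {"aws:SourceVpc": "vpc-0123456789abcdef0"}
            },
        }
    ],
}

print(json.dumps(read_only_policy, indent=2))
```

The key idea is that both `Action` and `Resource` are narrowed; a policy with `s3:*` on `Resource: "*"` would be the opposite of least privilege.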

The worst-case scenario is granting a service admin access to your S3 buckets: if attackers compromise that service, they can read, change, or delete all your data.

Pro tip

To implement least privilege access policies, create fine-tuned IAM roles and, where necessary, configure ACLs for specific objects or accounts. Only provide the exact permissions a role requires and nothing more.

Encrypt data at rest and in transit

Encryption is a crucial component in securing your data in S3. It protects your information, even if someone gains access to your buckets. By encrypting your AWS data both at rest and in transit, you’ll add two layers of protection, making it much harder for attackers to access sensitive data.

You have several options to encrypt data at rest:

S3 automatically encrypts all new objects with server-side encryption using S3-managed keys (SSE-S3). For more control, you can:

  • Use AWS KMS-managed keys (SSE-KMS) for auditability and granular access control.

  • Enable dual-layer server-side encryption with KMS keys (DSSE-KMS) for highly sensitive workloads.

  • Choose customer-provided keys (SSE-C) if you need to manage keys outside AWS.

For every sensitive workload, enable default bucket encryption and apply IAM or bucket policies that reject unencrypted uploads.
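A bucket policy that rejects uploads not using KMS encryption can be sketched like this. The bucket name is a placeholder; since January 2023 S3 applies SSE-S3 by default, so a deny rule like this matters mainly when you specifically require SSE-KMS.

```python
import json

# Sketch: a bucket policy statement that denies any PutObject request
# that does not specify SSE-KMS encryption. "example-bucket" is a
# placeholder; with boto3 this would be passed to put_bucket_policy.
deny_unencrypted = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyNonKmsUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::example-bucket/*",
            # Deny unless the upload declares aws:kms encryption.
            "Condition": {
                "StringNotEquals": {
                    "s3:x-amz-server-side-encryption": "aws:kms"
                }
            },
        }
    ],
}

print(json.dumps(deny_unencrypted, indent=2))
```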

Setting default encryption methods for Amazon S3 (Source: AWS)

To encrypt data in transit, do the following:

  • Require all data transfers to use HTTPS (SSL/TLS) by enforcing HTTPS in bucket policies or IAM conditions. This ensures data is encrypted during upload, download, and replication, preventing interception or tampering.

  • Audit your existing bucket policies to block all unencrypted (HTTP) requests.
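Enforcing HTTPS in a bucket policy relies on the `aws:SecureTransport` condition key. A minimal sketch (bucket name is a placeholder) looks like this:

```python
import json

# Sketch: a bucket policy statement that blocks any plain-HTTP request
# to the bucket or its objects. "example-bucket" is a placeholder.
require_tls = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            # Cover both bucket-level and object-level operations.
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*",
            ],
            # aws:SecureTransport is "false" for HTTP requests.
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }
    ],
}

print(json.dumps(require_tls, indent=2))
```

Because this is an explicit deny, it wins over any allow statement that would otherwise permit an unencrypted request.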

Pro tip

Enforce policies that require encryption of S3 data both at rest and in transit. For data at rest, use AWS S3 server-side encryption (SSE) with AWS managed keys, customer-managed keys, or customer-provided keys. For data in transit, ensure that your user policies enforce the use of SSL/TLS.

Enable versioning for buckets

Mistakes and accidents happen, so it’s essential to have protection that preserves your data history. Versioning is that safety net: it preserves every overwrite or deletion as a new version of the object, allowing you to recover previous object states.

For your applications and services, a versioned bucket works just like a standard S3 bucket. AWS hides the older versions by default, but you can always access or restore them.

The downside of versioning is that it raises storage costs. If your objects change frequently and every change produces a new stored version, costs can quickly multiply. In this case, consider lifecycle rules that transition older versions to cheaper storage classes or expire them after a set period.
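The versioning-plus-lifecycle setup can be sketched as the two request payloads below. With boto3 they would go to `put_bucket_versioning` and `put_bucket_lifecycle_configuration`; the rule ID and day counts are illustrative assumptions, not recommendations.

```python
# Sketch: payload for put_bucket_versioning to turn versioning on.
versioning_config = {"Status": "Enabled"}

# Sketch: payload for put_bucket_lifecycle_configuration that tiers
# and eventually expires noncurrent (older) object versions.
lifecycle_config = {
    "Rules": [
        {
            "ID": "tier-old-versions",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to all objects
            "NoncurrentVersionTransitions": [
                # After 30 days, move older versions to infrequent access.
                {"NoncurrentDays": 30, "StorageClass": "STANDARD_IA"},
                # After 90 days, archive them to Glacier.
                {"NoncurrentDays": 90, "StorageClass": "GLACIER"},
            ],
            # Delete noncurrent versions after one year.
            "NoncurrentVersionExpiration": {"NoncurrentDays": 365},
        }
    ]
}
```

This keeps recent history cheap to retain while capping how long every old version lingers in standard storage.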

Setting versioning for Amazon S3 buckets (Source: AWS)
Pro tip

Enable bucket versioning from S3 settings to protect your data from loss or overwrite. To manage costs, use lifecycle rules to transition older versions to lower-cost storage classes.

Enable logging for buckets

Logs are essential for tracking S3 object access. Enabling logging for Amazon S3 gives you detailed access data to spot unusual activity, enforce security policies, and investigate potential exposures.

Many compliance standards require logging S3 buckets to link data access to specific users. For example, insider trading laws require organizations to track who accesses sensitive information to help them identify misuse. Server access logs give you this visibility and a valuable audit trail.

To protect your logs, store them in a separate, secured bucket with strict access controls. Use lifecycle policies to manage log retention and avoid excessive S3 storage costs. You can also use observability tools like AWS CloudTrail and CloudWatch to collect, analyze, and monitor logs in real time. These tools help you detect and respond quickly to unusual activity by setting up alerts based on specific S3 API events or access patterns.
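Shipping server access logs to a separate bucket boils down to one configuration payload, sketched below. With boto3 it would be passed to `put_bucket_logging`; both bucket names and the prefix are placeholders.

```python
# Sketch: the BucketLoggingStatus payload for boto3's put_bucket_logging,
# directing server access logs to a separate, locked-down log bucket.
# "example-log-bucket" and the prefix are placeholders.
logging_config = {
    "LoggingEnabled": {
        "TargetBucket": "example-log-bucket",
        # A per-source-bucket prefix keeps each bucket's logs separate,
        # which simplifies retention rules and access reviews.
        "TargetPrefix": "s3-access-logs/example-bucket/",
    }
}
```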

Enabling logging for Amazon S3 buckets (Source: AWS)
Pro tip

Enable logging in your S3 bucket settings to monitor data access patterns and security anomalies using tools like AWS CloudTrail and CloudWatch. Store log data in separate, secure buckets to protect sensitive information, and use lifecycle rules to move older log data to cheaper storage.

Block public buckets at the account or organization level

Blocking public access to S3 buckets across your entire AWS account or organization helps you enforce strict, consistent access rules. The AWS Block Public Access feature overrides any bucket policy or ACL that would otherwise grant public access, ensuring that no one in your account can create a public bucket. Instead, operations teams must rely on controls like IAM policies, bucket policies, or ACLs to grant data access.

This centralized control reduces the risk of accidental public exposure, a common cause of data leaks. 

If you manage multiple AWS accounts, you can block public S3 buckets at the organization level using AWS Organizations to ensure consistent security levels across all accounts. For single accounts, you can apply Block Public Access settings at the account level through the AWS Management Console, a command-line interface, or APIs.
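At the account level, Block Public Access is a single four-flag configuration, sketched below. With boto3 it would be passed to the S3 Control `put_public_access_block` call along with your account ID (which is why this is shown as a payload only).

```python
# Sketch: the PublicAccessBlockConfiguration payload for the s3control
# put_public_access_block API. Setting all four flags gives the
# strictest posture: no public ACLs or policies can take effect.
public_access_block = {
    "BlockPublicAcls": True,        # reject new public ACLs
    "IgnorePublicAcls": True,       # neutralize existing public ACLs
    "BlockPublicPolicy": True,      # reject new public bucket policies
    "RestrictPublicBuckets": True,  # limit access to public buckets
}
```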

Blocking public buckets at the account level (Source: AWS)
Pro tip

Enforce strict access control by blocking public access entirely across your AWS account or organization. For single accounts, you can block public access using the management console, CLI, or APIs. To restrict public access at the organization level, use AWS Organizations.

Taking S3 security further for cloud-native workloads

Modern cloud-native applications often use short-lived services, like containers and serverless functions, to deliver fast and scalable solutions. Securing S3 data from these services requires specialized processes to capture data accurately and protect logs and snapshots from tampering. You can use the following tips to capture data from ephemeral workloads to S3 and increase the security of that data:

Ephemeral workload data capture

Containers, serverless functions (like AWS Lambda), and other ephemeral services are dynamic and have a short lifespan. They often write logs, metrics, or data back to S3 buckets for persistence since their local storage is temporary or nonexistent. The transient nature of these workloads means traditional monitoring and logging methods can miss crucial data or access events related to their operation.

To accurately capture ephemeral workload data, you must implement proper instrumentation methods. Below are some recommended strategies:

  • Sidecar log collectors: Use sidecar agent containers that run alongside ephemeral workload containers and collect logs and metrics in real time, then push that data to S3 buckets.

  • CloudTrail data events: Configure AWS CloudTrail to capture S3 data event logs for tracking object-level activity from ephemeral workloads. These logs provide visibility into who accessed what and when, even for short-lived identities.

  • Additional tools: Use service meshes like Istio or Linkerd for traffic-level telemetry, and deploy runtime security agents such as Amazon GuardDuty Runtime Monitoring to collect deeper system-level telemetry.
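The CloudTrail data-events setup from the list above can be sketched as an event selector payload. With boto3 it would be passed to CloudTrail's `put_event_selectors` along with a trail name; the bucket name here is a placeholder.

```python
# Sketch: EventSelectors payload for CloudTrail's put_event_selectors,
# recording object-level (data) events for one bucket so that even
# short-lived identities leave an access trail. "example-bucket" is
# a placeholder.
event_selectors = [
    {
        "ReadWriteType": "All",            # record both reads and writes
        "IncludeManagementEvents": True,   # keep control-plane events too
        "DataResources": [
            {
                "Type": "AWS::S3::Object",
                # Trailing slash scopes logging to all objects in the bucket.
                "Values": ["arn:aws:s3:::example-bucket/"],
            }
        ],
    }
]
```

Note that S3 data events are billed separately from management events, so scope the `Values` list to the buckets you actually need to watch.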

Immutable log and snapshot management

Logs and configurations provide critical observability into the different states of your cloud-native workloads and their interactions with S3 buckets. However, attackers can tamper with them to cover their tracks. By making your logs and configurations immutable, you ensure attackers can't alter log data or hide the configuration changes of ephemeral workloads.

You can use AWS Config to record resource configurations continuously and periodically take snapshots of resource states, including S3 bucket policies. This practice enables operations teams to track and audit bucket-level configuration changes. To prevent tampering with logs and snapshots, enable S3 Object Lock, which enforces write-once-read-many storage. 
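An Object Lock default retention rule of the kind described above can be sketched as the payload below. With boto3 it would go to `put_object_lock_configuration`; note that Object Lock must be enabled when the bucket is created, and the one-year retention here is an illustrative assumption.

```python
# Sketch: ObjectLockConfiguration payload for boto3's
# put_object_lock_configuration, enforcing write-once-read-many
# retention on a log/snapshot bucket. In COMPLIANCE mode, no identity
# (including the root user) can shorten or remove retention before
# it expires; GOVERNANCE mode allows specially permitted overrides.
object_lock_config = {
    "ObjectLockEnabled": "Enabled",
    "Rule": {
        "DefaultRetention": {
            "Mode": "COMPLIANCE",
            "Days": 365,  # retain each object version for one year
        }
    },
}
```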

A diagram of S3 resource replication into another account

Wiz further strengthens this approach by integrating its audit logs with AWS CloudTrail Lake. This integration captures Wiz-generated audit events, such as configuration changes, connector deployments, and security findings, and stores them immutably in CloudTrail Lake. Immutable monitoring supports forensic analysis and compliance validation by maintaining a reliable and tamper-proof audit trail.

For example, if attackers compromise the configuration or data of one of your Amazon S3 buckets, you can refer to snapshots from AWS Config and restore the original bucket policies. S3 Object Lock ensures that all backed-up S3 resources are protected from modification or deletion and ensures that recovery points maintain their integrity. Meanwhile, Wiz's immutable audit logs provide clear visibility into the restoration process and ensure that the recovery was secure and tamper-free.

Wiz’s solution for enhanced S3 security

Wiz helps organizations secure Amazon S3 by providing complete visibility into buckets and their relationships to other cloud assets. Through deep integration with AWS services, Wiz connects the dots across infrastructure, identities, and code using the Wiz Security Graph.

Here’s how Wiz works with AWS:

  • Full visibility and context: Wiz integrates with AWS services like Amazon S3 and CloudTrail to build a relationship map between your S3 buckets and the broader cloud environment. The Wiz Security Graph and Automated Attack Path Analysis (APA) surface risks by analyzing access patterns and configurations. This helps detect misconfigurations or suspicious activity early, before they become real threats.

  • Secure AI and ML workflows: Wiz launched support for Amazon SageMaker, helping teams secure the entire ML lifecycle. That includes monitoring for risks in model deployment and in the S3 buckets used for storing training data and outputs.

  • Remediation recommendations: Once Wiz identifies a risk, it provides specific remediation recommendations via Wiz’s remediation and response capabilities, such as changing a user password or enabling two-factor or multi-factor authentication (MFA) for a resource.

  • Remediation automation: The AI-powered remediation and response capabilities also allow teams to automatically apply common fixes, such as enforcing least privilege access or enabling MFA for S3 resources. Automation helps reduce the time and effort teams invest into keeping S3 buckets secure.

Want to strengthen your S3 security with continuous monitoring, proactive risk detection, and automated remediation? Schedule a demo with Wiz today to see how we can help you protect your S3 resources. Or for an immediate, practical guide, download our Amazon S3 Security Best Practices Cheat Sheet to start protecting your data now.
