S3 Security Playbook: No Public Reads, No Surprises

Guardrails that keep S3 private: account-wide Block Public Access, Object Ownership, TLS-only, SSE, and CloudTrail data events

Public reads on Amazon S3 aren’t just embarrassing; they’re how data leaks become headlines. Fortunately, you can lock buckets down fast without breaking apps. This guide gives you a clean, repeatable checklist for small teams: block public access everywhere, disable ACLs, enforce default encryption, and log every object access. Follow the 10 steps, and misconfigurations stop turning into incidents.

What “no public reads” really means

When you say “no public reads,” you mean every object request must be authenticated and authorized. No anonymous GETs. No sloppy bucket policies that open a path “just for testing.” No leftover access points with public permissions. Moreover, you want this outcome at the account level so new buckets inherit safety by default. The steps below deliver exactly that while keeping developer workflows intact.

The 10-Step S3 Hardening Checklist (narrative walk-through)

Turn on S3 Block Public Access at the account and bucket level

Start where the risk starts: set “Block all public access” on the account, then confirm it at each bucket. These guardrails override public ACLs and bucket policies, even if someone tries to open a path later. Because account-level controls apply to future buckets too, you stop tomorrow’s mistakes in advance. (Administrative note: document who can toggle these settings, and monitor for changes.)
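A rough sketch of setting both levels with boto3 (the account ID and bucket name below are placeholders; the account-level call goes through the S3 Control API and needs a principal allowed to manage account settings):

```python
import boto3

ACCOUNT_ID = "111122223333"   # placeholder account ID
BUCKET = "example-app-data"   # placeholder bucket name

block_all = {
    "BlockPublicAcls": True,
    "IgnorePublicAcls": True,
    "BlockPublicPolicy": True,
    "RestrictPublicBuckets": True,
}

# Account-wide guardrail: applies to every current and future bucket.
boto3.client("s3control").put_public_access_block(
    AccountId=ACCOUNT_ID, PublicAccessBlockConfiguration=block_all
)

# Belt and suspenders: confirm the same setting on each individual bucket.
boto3.client("s3").put_public_access_block(
    Bucket=BUCKET, PublicAccessBlockConfiguration=block_all
)
```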

Disable ACLs with Object Ownership: Bucket owner enforced

Access control lists are legacy and noisy. With Object Ownership set to “Bucket owner enforced,” S3 ignores ACLs and gives the bucket owner full control of every object. Consequently, cross-account uploads that relied on ACLs will fail unless you redesign permissions using roles or bucket policies. This reduction in complexity pays for itself: fewer knobs, fewer misreads, fewer public surprises.
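A minimal boto3 sketch, assuming a placeholder bucket name:

```python
import boto3

s3 = boto3.client("s3")

# "BucketOwnerEnforced" disables ACLs entirely; the bucket owner owns every object.
s3.put_bucket_ownership_controls(
    Bucket="example-app-data",  # placeholder bucket name
    OwnershipControls={"Rules": [{"ObjectOwnership": "BucketOwnerEnforced"}]},
)
```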

Require HTTPS and reject plaintext transport

Add a bucket policy condition that denies any request not using TLS. It’s one line of defense that prevents man-in-the-middle risks and blocks tooling that still tries http:// by default. Because your clients already negotiate TLS, this change rarely breaks anything, yet it lifts your baseline immediately.
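One possible shape for that policy, sketched with boto3 (the bucket name is a placeholder; note that put_bucket_policy replaces the existing policy, so merge this statement with anything already attached):

```python
import json

import boto3

BUCKET = "example-app-data"  # placeholder bucket name

tls_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
            # aws:SecureTransport is false for plain-HTTP requests.
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }
    ],
}

boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(tls_only_policy))
```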

Enforce default server-side encryption (SSE-S3 or SSE-KMS)

AWS encrypts new S3 objects by default with SSE-S3. Still, set default encryption explicitly on every bucket. For regulated or high-sensitivity data, choose SSE-KMS and lock key usage with tight KMS key policies and grants. As a result, even if someone uploads without specifying encryption, the bucket enforces it. When you switch to SSE-KMS, coordinate key rotation schedules and throttling limits to avoid surprises at scale.
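A hedged example of setting SSE-KMS as the bucket default; the bucket name and KMS key ARN are placeholders, and enabling Bucket Keys is optional but reduces KMS request volume at scale:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_encryption(
    Bucket="example-app-data",  # placeholder bucket name
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    # Placeholder key ARN; use your customer-managed key here.
                    "KMSMasterKeyID": "arn:aws:kms:us-east-1:111122223333:key/11111111-2222-3333-4444-555555555555",
                },
                # Bucket Keys cut per-object KMS calls, which helps avoid throttling.
                "BucketKeyEnabled": True,
            }
        ]
    },
)
```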

Scope access with least-privilege IAM and strict bucket policies

Stop using wildcards for actions or resources unless you can prove the need. Write allow policies for specific prefixes (e.g., “/appA/”) and exact operations (“s3:GetObject”, “s3:PutObject”) instead of “s3:*”. Deny statements are blunt instruments; use them carefully, preferably to enforce global posture (e.g., deny if the request is not over TLS or if it originates outside your VPC endpoint). Because clarity beats cleverness, keep policies short and commented.
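For illustration, a least-privilege inline policy for a hypothetical appA role (role, policy, bucket, and prefix names are all placeholders):

```python
import json

import boto3

least_privilege = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AppAObjects",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::example-app-data/appA/*",
        },
        {
            "Sid": "AppAList",
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::example-app-data",
            # Limit listing to the application's own prefix.
            "Condition": {"StringLike": {"s3:prefix": ["appA/*"]}},
        },
    ],
}

boto3.client("iam").put_role_policy(
    RoleName="appA-service-role",           # placeholder role name
    PolicyName="s3-appA-least-privilege",   # placeholder policy name
    PolicyDocument=json.dumps(least_privilege),
)
```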

Keep S3 traffic private with VPC endpoints and endpoint policies

A gateway endpoint for S3 lets private subnets access S3 without an internet gateway or NAT. Combine it with an endpoint policy that restricts which buckets the VPC can reach. In addition, you can use bucket policies that allow access only through your endpoint. This pattern eliminates direct internet paths and makes exfiltration harder, especially when paired with egress controls at the VPC boundary.
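A sketch of a deny statement you could append to the bucket policy, assuming a placeholder endpoint ID; be aware it also blocks console or CI access that doesn’t traverse the endpoint unless you carve out those principals:

```python
# Illustrative statement only; merge into your full bucket policy document.
deny_outside_vpce = {
    "Sid": "DenyAccessOutsideVpcEndpoint",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:*",
    "Resource": [
        "arn:aws:s3:::example-app-data",      # placeholder bucket
        "arn:aws:s3:::example-app-data/*",
    ],
    # Placeholder endpoint ID: requests not arriving via this endpoint are denied.
    "Condition": {"StringNotEquals": {"aws:SourceVpce": "vpce-0123456789abcdef0"}},
}
```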

Log object access: CloudTrail data events (not just management events)

Many teams enable CloudTrail but forget “data events,” which record per-object API calls such as GetObject and PutObject. Turn them on for sensitive buckets. Although data events cost more, they give you exact visibility into who read what and when. That record is how you prove “no public reads” over time, and it is how you detect misuse quickly.
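A sketch of enabling object-level logging on an existing trail (the trail and bucket names are placeholders):

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Record object-level reads and writes for one sensitive bucket.
cloudtrail.put_event_selectors(
    TrailName="org-security-trail",  # placeholder trail name
    EventSelectors=[
        {
            "ReadWriteType": "All",
            "IncludeManagementEvents": True,
            "DataResources": [
                {
                    "Type": "AWS::S3::Object",
                    # Trailing slash scopes logging to every object in this bucket.
                    "Values": ["arn:aws:s3:::example-app-data/"],
                }
            ],
        }
    ],
)
```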

Continuously scan permissions with IAM Access Analyzer for S3

Access Analyzer shows which buckets are public or shared with external accounts. Treat its findings as tickets: triage, fix, and re-scan. Because drift happens (new buckets, new access points, third-party uploads), this tool closes the loop. Moreover, share the dashboard with product owners so non-security people can track their own buckets.
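A small triage sketch, assuming an analyzer already exists in the account or organization:

```python
import boto3

aa = boto3.client("accessanalyzer")

# Use the first existing analyzer; adjust if you run several.
analyzer_arn = aa.list_analyzers()["analyzers"][0]["arn"]

# Pull only active findings for S3 buckets.
findings = aa.list_findings(
    analyzerArn=analyzer_arn,
    filter={
        "resourceType": {"eq": ["AWS::S3::Bucket"]},
        "status": {"eq": ["ACTIVE"]},
    },
)

for finding in findings["findings"]:
    print(finding["resource"], "public:", finding["isPublic"])
```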

Lock down write paths and ownership rules

Public reads get the attention; public writes ruin integrity. Enforce bucket owner condition keys so uploaded objects are owned by your account. Require the correct encryption context for SSE-KMS uploads. Deny multipart uploads without encryption. For pipelines that ingest from partners, mandate role-based uploads and test how ownership behaves when clients retry or switch regions.
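Two illustrative deny statements (bucket and key ARNs are placeholders) that reject uploads without SSE-KMS or with the wrong key; clients that rely solely on the bucket’s default encryption may not send these headers, so test before enforcing:

```python
# Illustrative statements only; append to your bucket policy after testing.
deny_wrong_encryption = [
    {
        "Sid": "DenyNonKmsUploads",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::example-app-data/*",  # placeholder bucket
        "Condition": {
            "StringNotEquals": {"s3:x-amz-server-side-encryption": "aws:kms"}
        },
    },
    {
        "Sid": "DenyWrongKmsKey",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::example-app-data/*",
        "Condition": {
            "StringNotEquals": {
                # Placeholder key ARN: uploads naming any other key are rejected.
                "s3:x-amz-server-side-encryption-aws-kms-key-id":
                    "arn:aws:kms:us-east-1:111122223333:key/11111111-2222-3333-4444-555555555555"
            }
        },
    },
]
```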

Add operational guardrails: lifecycle, inventory, and break-glass

Security needs ops to hold. Enable S3 Inventory so you can audit encryption status, object counts, and replication state at scale. Add lifecycle policies to expire old versions in test buckets and to transition rarely accessed data. Store a break-glass runbook: who can disable Block Public Access, under what conditions, and how you revert within minutes. Because people make mistakes, practice reversal like you practice restoration.
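A combined sketch with placeholder bucket names: expire noncurrent versions, tier down an archive prefix, and publish a weekly inventory that includes encryption and replication status:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "example-app-data"  # placeholder bucket name

# Expire old versions and transition rarely accessed data.
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-noncurrent-versions",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "NoncurrentVersionExpiration": {"NoncurrentDays": 30},
            },
            {
                "ID": "tier-cold-data",
                "Status": "Enabled",
                "Filter": {"Prefix": "archive/"},
                "Transitions": [{"Days": 90, "StorageClass": "STANDARD_IA"}],
            },
        ]
    },
)

# Weekly inventory report for at-scale auditing of encryption and replication.
s3.put_bucket_inventory_configuration(
    Bucket=BUCKET,
    Id="weekly-audit",
    InventoryConfiguration={
        "Id": "weekly-audit",
        "IsEnabled": True,
        "IncludedObjectVersions": "All",
        "Schedule": {"Frequency": "Weekly"},
        "OptionalFields": ["EncryptionStatus", "ReplicationStatus"],
        "Destination": {
            "S3BucketDestination": {
                "Bucket": "arn:aws:s3:::example-audit-reports",  # placeholder destination
                "Format": "CSV",
            }
        },
    },
)
```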

Step-by-step verification (prove “no public reads”)

First, confirm Block Public Access is on at the account and bucket. Attempt an anonymous curl to a known object and expect a hard denial. Second, open the bucket policy and verify the TLS-only condition; repeat the test with http:// and watch it fail. Third, check Object Ownership: Bucket owner enforced. Try to set an ACL and see it rejected. Fourth, review CloudTrail data events for a bucket and confirm you see GetObject and PutObject activity tied to your roles, not to “Anonymous.” Fifth, open Access Analyzer and clear any public or cross-account findings you didn’t intend. Finally, run an in-app test: can your service still read and write through its role when Block Public Access is on and ACLs are off? When all of these hold, public reads are closed without breaking workflows.
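The anonymous-read check can be scripted; this sketch uses an unsigned boto3 client against a placeholder bucket and key, then reads back the guardrail settings:

```python
import boto3
from botocore import UNSIGNED
from botocore.config import Config
from botocore.exceptions import ClientError

BUCKET, KEY = "example-app-data", "known/object.txt"  # placeholders

# Anonymous client: no credentials, no request signing.
anon = boto3.client("s3", config=Config(signature_version=UNSIGNED))
try:
    anon.get_object(Bucket=BUCKET, Key=KEY)
    print("FAIL: anonymous read succeeded")
except ClientError as err:
    print("OK: anonymous read denied:", err.response["Error"]["Code"])

# Sanity-check the guardrails themselves.
s3 = boto3.client("s3")
print(s3.get_public_access_block(Bucket=BUCKET)["PublicAccessBlockConfiguration"])
print(s3.get_bucket_ownership_controls(Bucket=BUCKET)["OwnershipControls"])
```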

Handling common edge cases without reopening the bucket

Sometimes a static website needs public assets. Instead of opening the bucket, use CloudFront with an origin access control so the bucket stays private while the CDN serves the files. Sometimes a vendor needs to drop files into your bucket. Give them a role to assume, set explicit prefixes, and require SSE-KMS with your key. Sometimes data must move between accounts. Use bucket policies with resource-level conditions and test ownership with Object Ownership rules. And when a legacy script depends on ACLs, fix the code. ACLs are gone by design; don’t bring them back.
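For the static-website case, a bucket policy statement along these lines (account ID and distribution ID are placeholders) grants read access only to your CloudFront distribution via origin access control, so the bucket itself stays private:

```python
# Illustrative statement only; attach it to the website bucket's policy.
allow_cloudfront_oac = {
    "Sid": "AllowCloudFrontOACRead",
    "Effect": "Allow",
    "Principal": {"Service": "cloudfront.amazonaws.com"},
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::example-static-site/*",  # placeholder bucket
    "Condition": {
        "StringEquals": {
            # Placeholder distribution ARN: only this distribution may read.
            "AWS:SourceArn": "arn:aws:cloudfront::111122223333:distribution/EDFDVBD6EXAMPLE"
        }
    },
}
```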

What to monitor after hardening

Watch for policy edits that disable Block Public Access, new access points, CloudTrail data-event coverage gaps, and KMS key policy changes. Alert on GetObject requests that appear from unexpected principals or from the internet when your intended path is a VPC endpoint. Track Access Analyzer findings to zero. Because guardrails are only as good as their drift control, schedule weekly reviews and require approvals for posture-changing edits.
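One way to catch posture-changing edits, sketched with EventBridge and a placeholder SNS topic; this pattern assumes CloudTrail management events are flowing so the API calls reach EventBridge:

```python
import json

import boto3

events = boto3.client("events")

# Fires on CloudTrail-recorded changes to Block Public Access or bucket policies.
events.put_rule(
    Name="s3-posture-changes",
    EventPattern=json.dumps({
        "source": ["aws.s3"],
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {
            "eventSource": ["s3.amazonaws.com"],
            "eventName": [
                "PutPublicAccessBlock", "DeletePublicAccessBlock",
                "PutBucketPolicy", "DeleteBucketPolicy",
            ],
        },
    }),
)

# Route matches to an alerting topic (placeholder ARN).
events.put_targets(
    Rule="s3-posture-changes",
    Targets=[{"Id": "alerts", "Arn": "arn:aws:sns:us-east-1:111122223333:security-alerts"}],
)
```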

Business impact: fewer leaks, cleaner audits, calmer ops

This checklist reduces three kinds of risk. First, accidental exposure: new buckets inherit safety by default, and Access Analyzer catches drift. Second, malicious exfiltration: private networking with endpoint policies, strict IAM, and TLS-only requests frustrate attackers. Third, audit pain: CloudTrail data events and S3 Inventory give you a documentary trail on every object path. As a result, incident response focuses on signals, not guesswork. And because the guardrails are native features, you don’t need new infrastructure to deploy them.

Locking S3 down does not require heroics. Start with Block Public Access, disable ACLs, and require TLS. Then enforce default encryption, apply least-privilege IAM, keep traffic private with VPC endpoints, and log object access with CloudTrail data events. Close the loop with Access Analyzer, inventory, and lifecycle policies. Because each step is a native control, the result is durable: no public reads, fewer mistakes, and a calmer audit every quarter.

FAQs

Is “Block Public Access” enough by itself?
No. It prevents obvious public permissions, but you still need Object Ownership with ACLs disabled, TLS-only policies, least-privilege IAM, and monitoring. Defense-in-depth stops regressions and catches drift.

Should I always choose SSE-KMS over SSE-S3?
Not always. SSE-KMS adds key policies, grants, and throttling limits. Use it when you need key separation, auditability, or customer-managed keys. Otherwise, SSE-S3 with explicit default encryption is fine.

Do VPC endpoints replace bucket policies?
They complement them. Endpoints keep traffic private, while bucket policies decide who and what can access. Use both for high-value data.

Are CloudTrail data events worth the cost?
For sensitive buckets, yes. You gain per-object visibility that proves who accessed which object and when. If cost is a concern, scope data events to the buckets that matter.

How do I share a bucket across accounts without ACLs?
Use roles and bucket policies. Require the correct encryption and enforce bucket-owner conditions so your account owns the uploaded objects.
