
Amazon explicitly recommends naming buckets like "example.com" and "www.example.com" : https://docs.aws.amazon.com/AmazonS3/latest/dev/website-host...

Now, it seems, this is a big problem. V2 resource requests will look like this: https://example.com.s3.amazonaws.com/... or https://www.example.com.s3.amazonaws.com/...
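To make the two addressing styles concrete, here's a rough sketch (hostnames assume the legacy us-east-1 endpoint; regional endpoints add a region label):

```python
def path_style_url(bucket: str, key: str) -> str:
    # Deprecated style: the bucket goes in the path, so the host is
    # always the same regardless of the bucket name.
    return f"https://s3.amazonaws.com/{bucket}/{key}"

def virtual_hosted_url(bucket: str, key: str) -> str:
    # Going-forward style: the bucket name becomes part of the hostname.
    return f"https://{bucket}.s3.amazonaws.com/{key}"

print(virtual_hosted_url("example.com", "index.html"))
# https://example.com.s3.amazonaws.com/index.html  <- multi-label host
```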

And, of course, this ruins https. Amazon has you covered for *.s3.amazonaws.com, but not for *.*.s3.amazonaws.com or even *.*.*.s3.amazonaws.com, and so on.
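A sketch of why: TLS clients match a certificate wildcard against exactly one DNS label (per RFC 6125), so any dot in the bucket name breaks the match:

```python
# Minimal sketch of RFC 6125-style wildcard matching: a "*" in a
# certificate name matches exactly one DNS label, never several.
def wildcard_matches(pattern: str, hostname: str) -> bool:
    p_labels = pattern.split(".")
    h_labels = hostname.split(".")
    if len(p_labels) != len(h_labels):
        return False  # "*" cannot absorb extra labels
    return all(p == "*" or p == h for p, h in zip(p_labels, h_labels))

# A dot-free bucket works with the S3 wildcard cert...
print(wildcard_matches("*.s3.amazonaws.com", "mybucket.s3.amazonaws.com"))         # True
# ...but a bucket named after a domain does not.
print(wildcard_matches("*.s3.amazonaws.com", "example.com.s3.amazonaws.com"))      # False
print(wildcard_matches("*.s3.amazonaws.com", "www.example.com.s3.amazonaws.com"))  # False
```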

So... I guess I have to rename/move all my buckets now? Ugh.



That's an interesting contradiction to the rest of their docs. Their docs elsewhere repeatedly state that using periods (".") will cause issues. https://docs.aws.amazon.com/AmazonS3/latest/dev/BucketRestri...

e.g.

> The name of the bucket used for Amazon S3 Transfer Acceleration must be DNS-compliant and must not contain periods (".").

and as you mentioned

> When you use virtual hosted–style buckets with Secure Sockets Layer (SSL), the SSL wildcard certificate only matches buckets that don't contain periods. To work around this, use HTTP or write your own certificate verification logic. We recommend that you do not use periods (".") in bucket names when using virtual hosted–style buckets.

AWS Docs have always been a mess of inconsistencies so this isn't a big surprise. I dealt with similar naming issues when setting up third-party CDNs, since ideally Edges would cache using an HTTPS connection to Origin. IIRC the fix was to use path-style, but now with the deprecation it'd need a full migration.

Wonder how CloudFront works around it. Maybe it special cases it and uses the S3 protocol instead of HTTP/S.


> So... I guess I have to rename/move all my buckets now? Ugh.

It's worse than that. You can't rename a bucket. You will have to create a new bucket and copy everything over.
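The copy itself is a couple of CLI calls (hypothetical names: "example.com" is the old dotted bucket, "example-com" the new dot-free one; bucket names are global, so the new name must be unused):

```
aws s3 mb s3://example-com
aws s3 sync s3://example.com s3://example-com   # server-side copy of every object
# After verifying, remove the old bucket (irreversible):
# aws s3 rb s3://example.com --force
```

Of course that doesn't help with anything that has the old bucket name baked in.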


It’s not a huge problem thanks to S3 batch

https://aws.amazon.com/blogs/aws/new-amazon-s3-batch-operati...


I hadn't noticed/heard of this new feature.

Hmm, I was going to say something about the _cost_ of getting/putting a large number of objects in order to 'move' them to a new bucket. Does the batch feature affect the pricing, or only the convenience?


In some cases cross-region replication may help too.

Sadly neither batch operations nor replication is free.


> In some cases cross-region replication may help too.

How so? cross-region replication doesn't replicate existing objects, only new ones.


If you contact AWS support, they can replicate existing objects.


FWIW - I found it fairly trivial to set up CloudFront in front of my buckets [1], so that I can use HTTPS with AWS Cert Mgr (ACM) to serve our s3 sites on https://mydomain.com [2].

I set this up some time ago using our domain name and ACM, and I don't think I will need to change anything in light of this announcement.

1 - https://docs.aws.amazon.com/AmazonS3/latest/dev/website-host...

2 - https://docs.aws.amazon.com/acm/latest/userguide/acm-overvie...


That isn't a solution for every use case. For example, it means you can't use the s3 VPC gateway for those buckets.


How does using cloudfront for a bucket prevent using VPC endpoint for s3? This doesn't make any sense.


I'm not OP, but if you're using a VPC endpoint for S3, a common use case is so you can restrict the S3 bucket to be accessible only from that VPC. That VPC might be where even your on-site internal traffic is coming from, if you send S3-bound traffic that way.

You could still put CloudFront in front of your bucket but CloudFront is a CDN, so now your bucket contents are public. You probably want to access your files through the VPC endpoint.
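The usual pattern is a bucket policy keyed on the endpoint ID, something like this (bucket name and vpce ID are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "DenyUnlessFromVpce",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:*",
    "Resource": [
      "arn:aws:s3:::example-bucket",
      "arn:aws:s3:::example-bucket/*"
    ],
    "Condition": {
      "StringNotEquals": { "aws:SourceVpce": "vpce-1234567890abcdef0" }
    }
  }]
}
```

CloudFront origin fetches don't come through that endpoint, so the two approaches don't compose.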


The point of the VPC endpoint is that you've whitelisted the external services you use and get special, transparent access to S3.

With a CloudFront proxy you’d have to open up access to all of CloudFront’s potential IP addresses to allow the initial request to complete (which would then redirect to S3). Plus the traffic would need to leave your VPC.


I'm not saying using cloudfront prevents you from using VPC endpoints for s3. I'm saying the workaround of using cloudfront doesn't work if you want to use the VPC endpoint for s3.


Care to elaborate? Do you mean S3 VPC Endpoints? Because this could screw many in-VPC Lambdas that need S3.


yes, that is what I mean. If your bucket name contains a dot, you will no longer be able to access it with https with an S3 VPC Endpoint. (using http or going to cloudfront instead of the S3 VPC Endpoint would still work)


Was curious when someone would bring this up. This has been an issue for such a long time and still the docs are so quiet about it.


isn't that domain name style bucket naming only for hosting a static website from an s3 bucket? otherwise, you can name the bucket whatever you want within the rest of the naming rules.


The point of that is solely for doing website hosting with S3 though - where you'll have a CNAME. Why would you name a bucket that way if you're not using it for the website hosting feature?


Not too long ago, we used S3 to serve large amounts of publicly available data in webapps. We had hundreds of buckets with URL style names. Then the TLS fairy came along. Google began punishing websites without HTTPS and browsers prevented HTTPS pages from loading HTTP iframes.

Suddenly we had two options. Use CloudFront with hundreds of SSL certs, at great expense (in time and additional AWS fees), or change the names of all buckets to something without dots.

But aaaaah, S3 doesn't support renaming buckets. And we still had to support legacy applications and legacy customers. So we ended up duplicating some buckets as needed. Because, you see, S3 also doesn't support having multiple aliases (symlinks) for the same bucket.

Our S3 bills went up by about 50%, but that was a lot cheaper than the CloudFront+HTTPS way.

The cynic in me thinks not having aliases/symlinks in S3 is a deliberate money-grabbing tactic.


It also comes up when working with buckets of others. Right now, if you build a service that is supposed to fetch from a user-supplied S3 bucket, path-style access was the safest option.

Now one would need to hook the cert validation and ignore dots, which can be quite tricky because it's buried deep in the SSL layer.
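Something like this, I'd guess (stdlib-only sketch; the suffix-based matcher is an assumption about what "ignore dots" would mean in practice):

```python
import ssl

def relaxed_s3_match(san_pattern: str, hostname: str) -> bool:
    """Accept a '*.s3.amazonaws.com' cert for any virtual-hosted S3
    host, even if the bucket name itself contains dots (which the
    strict one-label wildcard rule would reject)."""
    if not san_pattern.startswith("*."):
        return san_pattern == hostname
    suffix = san_pattern[1:]  # ".s3.amazonaws.com"
    return hostname.endswith(suffix) and len(hostname) > len(suffix)

# Wiring it in means disabling the stdlib's own hostname check and
# verifying the name yourself after the TLS handshake:
ctx = ssl.create_default_context()
ctx.check_hostname = False           # we check the name ourselves
ctx.verify_mode = ssl.CERT_REQUIRED  # still verify the certificate chain

print(relaxed_s3_match("*.s3.amazonaws.com", "www.example.com.s3.amazonaws.com"))  # True
```

And that relaxation is exactly the kind of thing you don't want to get wrong, since it widens what the cert is accepted for.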


How does the S3 CLI handle this? Do they hook cert validation? (I assume they must actually validate HTTPS...)


Pretty sure you get a cert error or they still use paths. Boto (what it's built on) has had an open issue for this for a few years now.



You might be POSTing user uploads to uploads.example.com.

https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPO...


This could still use the CNAME trick though no?



