And, of course, this ruins https. Amazon has you covered for *.s3.amazonaws.com, but not for *.*.s3.amazonaws.com or even *.*.*.s3.amazonaws... and so on.
So... I guess I have to rename/move all my buckets now? Ugh.
> The name of the bucket used for Amazon S3 Transfer Acceleration must be DNS-compliant and must not contain periods (".").
and as you mentioned
> When you use virtual hosted–style buckets with Secure Sockets Layer (SSL), the SSL wildcard certificate only matches buckets that don't contain periods. To work around this, use HTTP or write your own certificate verification logic. We recommend that you do not use periods (".") in bucket names when using virtual hosted–style buckets.
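The single-label wildcard rule behind that quote can be sketched as follows — a minimal RFC 6125-style matcher (my own illustrative function, not Amazon's code), assuming the standard browser behavior where `*` matches exactly one DNS label:

```python
def wildcard_matches(pattern: str, hostname: str) -> bool:
    """Return True if a certificate wildcard pattern covers hostname.

    Per RFC 6125, '*' matches exactly one DNS label, so
    '*.s3.amazonaws.com' covers 'mybucket.s3.amazonaws.com' but NOT
    'my.bucket.s3.amazonaws.com' (two labels in the wildcard position).
    """
    p_labels = pattern.split(".")
    h_labels = hostname.split(".")
    if len(p_labels) != len(h_labels):
        return False  # a single '*' cannot absorb extra labels
    return all(p == "*" or p.lower() == h.lower()
               for p, h in zip(p_labels, h_labels))

# A dot-free bucket name is covered by Amazon's cert...
print(wildcard_matches("*.s3.amazonaws.com", "mybucket.s3.amazonaws.com"))    # True
# ...but a dotted bucket name is not:
print(wildcard_matches("*.s3.amazonaws.com", "example.com.s3.amazonaws.com")) # False
```

That label-count check is exactly why no finite set of wildcard certs (`*.*.s3...`, `*.*.*.s3...`) fixes this in general.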
AWS docs have always been a mess of inconsistencies, so this isn't a big surprise. I dealt with similar naming issues when setting up third-party CDNs, since ideally the edge nodes would cache over an HTTPS connection to the origin. IIRC the fix was to use path-style requests, but now with the deprecation it'd need a full migration.
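For concreteness, the two addressing styles differ only in where the bucket name lands — a small sketch with illustrative URLs (real SDKs such as boto3 expose this choice as an addressing-style configuration option rather than a string builder like this):

```python
def s3_url(bucket: str, key: str, style: str = "virtual") -> str:
    """Build an S3 URL in either addressing style (illustrative sketch)."""
    if style == "path":
        # Path style: the bucket lives in the path, so the hostname is
        # always s3.amazonaws.com and the wildcard cert always matches.
        return f"https://s3.amazonaws.com/{bucket}/{key}"
    # Virtual-hosted style: the bucket becomes part of the hostname, so a
    # dotted bucket name yields extra labels that *.s3.amazonaws.com
    # cannot cover.
    return f"https://{bucket}.s3.amazonaws.com/{key}"

print(s3_url("example.com", "logo.png", "path"))
# https://s3.amazonaws.com/example.com/logo.png  -- cert matches
print(s3_url("example.com", "logo.png", "virtual"))
# https://example.com.s3.amazonaws.com/logo.png  -- cert mismatch
```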
Wonder how CloudFront works around it. Maybe it special-cases S3 origins and uses the S3 protocol instead of HTTP/S.
Hmm, I was going to say something about the _cost_ of getting/putting a large number of objects in order to 'move' them to a new bucket. Does the batch feature affect the pricing, or only the convenience?
FWIW - I found it fairly trivial to set up CloudFront in front of my buckets [1], so that I can use HTTPS with AWS Cert Mgr (ACM) to serve our s3 sites on https://mydomain.com [2].
I set this up some time ago using our domain name and ACM, and I don't think I will need to change anything in light of this announcement.
I'm not OP, but if you're using a VPC endpoint for S3, a common use case is so you can restrict the S3 bucket to be accessible only from that VPC. That VPC might be where even your on-site internal traffic is coming from, if you send S3-bound traffic that way.
You could still put CloudFront in front of your bucket but CloudFront is a CDN, so now your bucket contents are public. You probably want to access your files through the VPC endpoint.
The point of the VPC endpoint is that you’ve whitelisted the external services you allow and get transparent access to S3 without traffic leaving your network.
With a CloudFront proxy you’d have to open up access to all of CloudFront’s potential IP addresses to allow the initial request to complete (which would then redirect to S3). Plus the traffic would need to leave your VPC.
I'm not saying using cloudfront prevents you from using VPC endpoints for s3. I'm saying the workaround of using cloudfront doesn't work if you want to use the VPC endpoint for s3.
yes, that is what I mean. If your bucket name contains a dot, you will no longer be able to access it over HTTPS through an S3 VPC endpoint. (Using HTTP, or going through CloudFront instead of the S3 VPC endpoint, would still work.)
isn't that domain name style bucket naming only for hosting a static website from an s3 bucket? otherwise, you can name the bucket whatever you want within the rest of the naming rules.
The point of that is solely for doing website hosting with S3 though - where you'll have a CNAME. Why would you name a bucket that way if you're not using it for the website hosting feature?
Not too long ago, we used S3 to serve large amounts of publicly available data in webapps. We had hundreds of buckets with URL style names. Then the TLS fairy came along. Google began punishing websites without HTTPS and browsers prevented HTTPS pages from loading HTTP iframes.
Suddenly we had two options. Use CloudFront with hundreds of SSL certs, at great expense (in time and additional AWS fees), or change the names of all buckets to something without dots.
But aaaaah, S3 doesn't support renaming buckets. And we still had to support legacy applications and legacy customers. So we ended up duplicating some buckets as needed. Because, you see, S3 also doesn't support having multiple aliases (symlinks) for the same bucket.
Our S3 bills went up by about 50%, but that was a lot cheaper than the CloudFront+HTTPS way.
The cynic in me thinks not having aliases/symlinks in S3 is a deliberate money-grabbing tactic.
It also comes up when working with other people's buckets. Right now, if you build a service that is supposed to fetch from a user-supplied S3 bucket, path-style access has been the safest option.
Now one would need to hook into certificate validation and ignore the dots, which can be quite tricky because it's buried deep in the SSL layer.
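The "write your own certificate verification logic" workaround from the AWS docs boils down to something like this — a hypothetical sketch (the function name and suffix constant are mine) that collapses the entire bucket portion, dots and all, into one label before matching:

```python
S3_SUFFIX = ".s3.amazonaws.com"

def matches_s3_wildcard(hostname: str) -> bool:
    """Check a hostname against *.s3.amazonaws.com, deliberately treating
    the whole bucket name (dots included) as a single label. Standard
    verifiers reject dotted buckets because '*' matches only one DNS
    label; this is the custom logic you'd have to inject into the TLS
    stack's hostname-verification hook.
    """
    if not hostname.lower().endswith(S3_SUFFIX):
        return False
    bucket = hostname[:-len(S3_SUFFIX)]
    return len(bucket) > 0  # any non-empty bucket name, dots included

print(matches_s3_wildcard("my.dotted.bucket.s3.amazonaws.com"))  # True
print(matches_s3_wildcard("evil.example.net"))                   # False
```

Wiring this in is the hard part: you'd have to disable the TLS library's built-in hostname check and run your own comparison against the certificate after the handshake, which is exactly the "deeply hidden in an SSL layer" problem, and easy to get dangerously wrong.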
Now, it seems, this is a big problem. V2 resource requests will look like this: https://example.com.s3.amazonaws.com/... or https://www.example.com.s3.amazonaws.com/...