Every file uploaded is treated as an immutable archive with no version history. So say you have 100,000 files backed up, want to update them, and don't want to keep paying for storage of the old copies. You have to request a manifest of hashes for all of your files, which takes a few days to generate, and then you're handed a JSON file that is over a gigabyte. Next, you write a script that deletes each file one at a time, rate limited to one request per second. Have fun.
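
For anyone who hasn't been through it, and assuming this is the legacy Glacier vault API (the reply below suggests it is), the cleanup loop ends up looking roughly like this with boto3; the vault name, manifest filename, and one-second pacing are placeholders:

    import json
    import time

    import boto3

    glacier = boto3.client("glacier")
    VAULT = "my-backup-vault"  # placeholder vault name

    # The inventory you eventually receive is one big JSON document with an
    # "ArchiveList" entry per archive; for 100,000+ files it's easily > 1 GB.
    with open("inventory.json") as f:
        inventory = json.load(f)

    for archive in inventory["ArchiveList"]:
        glacier.delete_archive(
            accountId="-",  # "-" means "the calling account"
            vaultName=VAULT,
            archiveId=archive["ArchiveId"],
        )
        time.sleep(1)  # crude pacing to stay under the rate limit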


Are you maybe referring to Glacier "vaults" (the original Glacier API)? With the old Glacier Vault API you had to initiate an "inventory-retrieval" job, wire up an SNS topic for the completion notification, and so on. It took days. Painful.
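
From memory, kicking that off with boto3 looked something like the sketch below (the vault name and SNS topic ARN are placeholders); the job itself could take most of a day or longer before the output was available:

    import boto3

    glacier = boto3.client("glacier")

    # Ask Glacier to build a vault inventory; completion is announced via SNS.
    job = glacier.initiate_job(
        accountId="-",
        vaultName="my-backup-vault",  # placeholder
        jobParameters={
            "Type": "inventory-retrieval",
            "Format": "JSON",
            "SNSTopic": "arn:aws:sns:us-east-1:123456789012:glacier-jobs",  # placeholder
        },
    )

    # Much later, once the SNS notification says the job has completed:
    output = glacier.get_job_output(
        accountId="-",
        vaultName="my-backup-vault",
        jobId=job["jobId"],
    )
    inventory_json = output["body"].read()  # the multi-gigabyte manifest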

But these days you can store objects in an S3 bucket and specify the storage class as "GLACIER" for S3 Glacier Flexible Retrieval (or "GLACIER_IR" for S3 Glacier Instant Retrieval, or "DEEP_ARCHIVE" for S3 Glacier Deep Archive). You can use the regular S3 APIs, and we haven't seen any rate limiting with this approach.
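
In other words, archiving is just a normal S3 upload with a storage class set; a minimal boto3 sketch, with made-up bucket and key names:

    import boto3

    s3 = boto3.client("s3")

    # Same PutObject call as any other S3 upload, just with StorageClass set.
    with open("backup.tar.zst", "rb") as body:
        s3.put_object(
            Bucket="my-backup-bucket",         # placeholder
            Key="backups/2024-06-01.tar.zst",  # placeholder
            Body=body,
            StorageClass="DEEP_ARCHIVE",       # or "GLACIER" / "GLACIER_IR"
        )

    # Listing and deleting use the regular S3 APIs too.
    s3.delete_object(Bucket="my-backup-bucket", Key="backups/2024-06-01.tar.zst")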

The only difference from the "online" storage classes like STANDARD, STANDARD_IA, etc. is that downloading an object stored as GLACIER or DEEP_ARCHIVE requires first making it downloadable by calling the S3 "restore" API on it, then waiting until the restore completes (anywhere from 1-5 minutes with Expedited retrieval to 5-12 hours with Bulk for GLACIER, and up to 12 hours with Standard retrieval for DEEP_ARCHIVE). GLACIER_IR is the exception: those objects can be downloaded directly, no restore needed.
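
A rough boto3 sketch of that restore-then-download dance for GLACIER/DEEP_ARCHIVE objects (bucket, key, and polling interval are placeholders; the tier you pick determines the wait):

    import time

    import boto3

    s3 = boto3.client("s3")
    BUCKET, KEY = "my-backup-bucket", "backups/2024-06-01.tar.zst"  # placeholders

    # Ask S3 to stage a temporary downloadable copy for 7 days.
    s3.restore_object(
        Bucket=BUCKET,
        Key=KEY,
        RestoreRequest={
            "Days": 7,
            "GlacierJobParameters": {"Tier": "Standard"},  # or "Expedited" / "Bulk"
        },
    )

    # Poll until the Restore header reports the request has finished.
    while True:
        head = s3.head_object(Bucket=BUCKET, Key=KEY)
        if 'ongoing-request="false"' in head.get("Restore", ""):
            break
        time.sleep(300)  # or subscribe to the s3:ObjectRestore:Completed event

    s3.download_file(BUCKET, KEY, "restored-backup.tar.zst")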



