
This is cool, but if you extensively use cloud-native components where the bare metal is abstracted from you (e.g. NoSQL databases, pub/sub, storage buckets, cloud functions), it's very rare that you're paying a fixed sum. Compare that to doing things the old-school way with a virtual machine assigned 2 GB of RAM and 2 CPUs, where it's much more obvious what you're paying.

I ran the tool on one of my projects' terraform files and it came out with a huuuuge list of infrastructure along with summaries like "Monthly cost depends on usage: $0.026 per GiB". But the grand estimated total cost of this entire project was: $0 per month...

That's the tricky bit, you can't really estimate the cost of this stuff without doing napkin maths on usage. I don't really see how you'd be able to improve this situation, either.
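That napkin maths is at least mechanical once you commit to a usage guess. A minimal sketch, where the prices and usage figures are purely illustrative assumptions, not actual cloud rates:

```python
# Napkin-math monthly cost for usage-based resources.
# All prices and usage numbers below are illustrative guesses.

def monthly_storage_cost(gib_stored: float, price_per_gib: float) -> float:
    """Cost of a storage bucket billed per GiB-month."""
    return gib_stored * price_per_gib

def monthly_request_cost(requests: int, price_per_million: float) -> float:
    """Cost of a pay-per-request service (e.g. a cloud function)."""
    return requests / 1_000_000 * price_per_million

# Guess: 500 GiB stored at $0.026/GiB, 2M invocations at $0.20 per 1M.
total = monthly_storage_cost(500, 0.026) + monthly_request_cost(2_000_000, 0.20)
print(f"${total:.2f} per month")  # $13.40 per month
```

The point is that the $0.026-per-GiB line item only becomes a dollar figure after you supply the "500 GiB" part, which is exactly what the tool can't know from the terraform alone.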

I'm glad someone is trying to bring better transparency to cloud costs. I think this would be a cool thing to add in a terraform CI pipeline. For example: you could allow your devs to be more agile when prototyping by allowing them to change terraform in dev without approval, assuming the MR doesn't add more than $XYZ in costs per month.
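That gate could be a small script comparing two cost reports in the pipeline. A hedged sketch, assuming a JSON report with a `totalMonthlyCost` field (treat the field name and schema as assumptions, not the tool's documented format):

```python
# CI cost gate: auto-approve an MR only if it adds less than a budgeted
# amount per month. The report schema here is an assumption.

BUDGET_DELTA = 50.0  # hypothetical: max extra $/month allowed without approval

def cost_of(report: dict) -> float:
    """Read the total monthly cost out of a cost report.
    Usage-based resources with unknown cost may come back as None."""
    return float(report.get("totalMonthlyCost") or 0.0)

def gate(base: dict, proposed: dict) -> bool:
    """Return True if the MR stays under budget and may auto-merge."""
    delta = cost_of(proposed) - cost_of(base)
    print(f"Monthly cost delta: ${delta:.2f} (budget: ${BUDGET_DELTA:.2f})")
    return delta <= BUDGET_DELTA

# e.g. cost reports generated for the main branch vs. the MR branch
base = {"totalMonthlyCost": "100.00"}
proposed = {"totalMonthlyCost": "130.00"}
print("auto-merge ok" if gate(base, proposed) else "needs approval")
```

The nice property is that the approval rule lives in one place and the threshold is just a number the team agrees on.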



We need to iterate on the output for usage-based resources; repeating the same "Monthly cost depends on usage" hundreds of times is not great! Maybe summarizing it as "you have 20 Lambda functions, all running in us-east-1, with this pricing" is better?

But you're right that it needs usage data, or models of usage data... For now, the CLI can fetch usage data from the cloud APIs for S3/Lambda/Dynamo and show engineers that functionX was invoked 2M times in the last 30 days: https://www.infracost.io/docs/features/usage_based_resources...


Does that functionality work for GCP too?


Not yet, but users are asking for it. Feel free to add the resources you think would be useful here: https://github.com/infracost/infracost/discussions/985#discu...


> I don't really see how you'd be able to improve this situation, either

I have to assume they could model some projections based on your current usage, and give you that? Predicting the future is hard, of course, but if their algorithm is simple & basic enough to understand, it'd surely be quite helpful, no?


Sure, but that only works if you’re modifying an existing piece of infrastructure. If I’m creating a new cloud function they have no idea if it’s getting invoked once per week or once per second.


Ah, good point! That definitely does sound tricky. Might be cool to see some syntax where you encode your estimates, and it sends an async warning of some kind when the estimates are wildly off?

    resource "aws_lambda_function" "foo" {
      estimated_invocations = { p5 = "1 per hour", p95 = "100 per minute" }
      # or
      estimated_invocations = { avg = "5 per minute" }
    }
Then you might get a range estimate of some kind, and maybe even an automated pull request with the real numbers if you're way off?
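Turning those annotations into a range estimate is then straightforward. A sketch assuming request pricing of $0.20 per 1M invocations (compute/GB-second charges ignored for brevity; `estimated_invocations` is the hypothetical annotation above, not a real Terraform attribute):

```python
# Convert hypothetical p5/p95 invocation annotations into a monthly
# cost range. Request price only; duration/memory charges ignored.

HOURS_PER_MONTH = 730
PRICE_PER_MILLION_REQUESTS = 0.20  # assumed request-only price

RATE_UNITS = {"hour": 1, "minute": 60, "second": 3600}  # invocations/hour multiplier

def monthly_invocations(rate: str) -> float:
    """Parse a rate like '100 per minute' into invocations per month."""
    count, _, unit = rate.partition(" per ")
    return float(count) * RATE_UNITS[unit] * HOURS_PER_MONTH

def cost_range(p5: str, p95: str) -> tuple[float, float]:
    lo = monthly_invocations(p5) / 1e6 * PRICE_PER_MILLION_REQUESTS
    hi = monthly_invocations(p95) / 1e6 * PRICE_PER_MILLION_REQUESTS
    return lo, hi

lo, hi = cost_range("1 per hour", "100 per minute")
print(f"${lo:.4f} - ${hi:.2f} per month")
```

A wide p5-p95 band is itself useful signal: it tells reviewers the author genuinely doesn't know the usage yet, which a single point estimate hides.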


Very cool idea! Someone created a prototype with Terragrunt (https://github.com/infracost/infracost/issues/463#issuecomme...). Now that Infracost parses HCL directly, we might be able to introduce annotations that turn estimates into alerts.





