I think you are addressing the issue from a developer's perspective. I don't think TPUs are going to be sold to individual users anytime soon. What the article is pointing out is that Google is now able to squeeze significantly more performance per dollar than its competitors in the LLM space.

For example, OpenAI has announced trillion-dollar investments in data centers to continue scaling. It has to go through a middleman (Nvidia), while Google does not, so Google can use its investment much more efficiently to train and serve its own future models.
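
To put a rough number on the middleman point, here is a minimal back-of-the-envelope sketch. All figures are illustrative assumptions, not actual Google or Nvidia numbers: the manufacturing cost is made up, and the 70% vendor gross margin is only in the ballpark of Nvidia's publicly reported data-center-era margins.

    # Back-of-the-envelope illustration of buying through a vendor
    # vs. building in-house. All numbers are hypothetical.

    def buyer_cost(manufacturing_cost: float, vendor_gross_margin: float) -> float:
        """Price paid when a vendor sells at a given gross margin."""
        return manufacturing_cost / (1.0 - vendor_gross_margin)

    MANUFACTURING_COST = 10_000.0  # assumed cost to fab and package one accelerator
    VENDOR_MARGIN = 0.70           # assumed vendor gross margin (illustrative)

    via_vendor = buyer_cost(MANUFACTURING_COST, VENDOR_MARGIN)  # ~33,333
    in_house = MANUFACTURING_COST  # ignores chip-design and NRE costs

    print(f"via vendor: ${via_vendor:,.0f}  in-house: ${in_house:,.0f}  "
          f"ratio: {via_vendor / in_house:.1f}x")

Under these assumptions the vertically integrated player gets roughly 3x the hardware per dollar of capex, before accounting for its own design costs.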



> Google is now able to squeeze significantly more performance per dollar than their peer competitors in the LLM space

Performance per dollar doesn't "win" anything though. Performance (as in speed) hardly cracks the top five concerns that most folks have when choosing a model provider, because fast, good models already exist at acceptable price points. That might mean slightly better margins for Google, but it ultimately isn't going to make them "win".


It's not slightly better margins; we are talking about huge cost reductions on the main expense, which is compute. In a context where companies are making trillion-dollar investments, that matters a lot.

Also, performance and user choice are definitely impacted by compute. If anyone ever finds a way to replace a job with LLMs, those who can throw more compute at it at a lower price point will win.



