
Plus I can run a reasonable LLM on my own hardware, so I don't even need to pay anyone else. And what I can run locally is only going to get better and better.


This is true, but it's also true for on-prem hosting vs cloud. And cloud had been booming for at least a decade before LLMs appeared. I suspect AI will follow a similar trajectory, i.e. companies won't move their AI deployments on-prem until they hit a certain scale.


This is very true, but I think the other point is that AI doesn't have much of a "moat". If a competitor can take a pre-trained Chinese LLM, fine-tune it a bit, fiddle with the prompt, and ship a product that's not as good but way cheaper, then you (or Oracle) have a problem.


Actually, in that scenario the AI labs (OpenAI, Anthropic, etc) have a problem. The cloud providers (including Oracle!) will do with the models what they've been doing with open source software: just take it and run it on their infra and charge money for providing it as-a-service.

This is why you're seeing the AI labs now try to build their own data centers.



