
But there's no moat around these models, they're all interchangeable and leapfrogging each other at a decent pace.

Gemini could get much better tomorrow and their entire customer base could switch without issue.





I think Claude Code is the moat (though I definitely recognize it's a pretty shallow moat). I don't want to switch to Codex or whatever the Gemini CLI is, I like Claude Code and I've gotten used to how it works.

Again, I know that's a shallow moat - agents just aren't that complex from a pure code perspective, and there are already tools that you can use to proxy Claude Code's requests out to different models. But at least in my own experience, there is a definite stickiness to Claude that I probably won't bother to overcome if a competing model is 1.1x better. I pay for Google Business or whatever it's called primarily to maintain my vanity email, and I get some level of Gemini usage for free, and I barely touch it, even though I'm hearing good things about it.

(If anything I'm convincing myself to give Gemini a closer look, but I don't think that undermines my overarching (though slightly soft) point).


I went from:

  1. using Claude Code exclusively (back when it really was on another level from the competition) to

  2. switching back and forth with CC using the Z.ai GLM 4.6 backend (very close to a drop-in replacement these days) due to CC massively cutting down the quota on the Claude Pro plan to

  3. now primarily using OpenCode with the Claude Code backend, or the Sonnet 4.5 GitHub Copilot backend, or the Z.ai GLM 4.6 backend (in that order of priority)

OpenCode is so much faster than CC even when using Claude Sonnet as the model (at least on the cheap Claude Pro plan; can't speak for Max). But it can't be entirely due to the Claude plan rate limiting, because it's way faster than CC even when using Claude Code itself as the backend in OC.

I became so ridiculously sick of waiting around for CC just to, like, move a text field or something; it was like watching paint dry. OpenCode isn't perfect, but it's very close these days and, as previously stated, crazy fast in comparison to CC.

Now that I'm no longer afraid of losing the unique value proposition of CC, my brand loyalty to Anthropic is incredibly tenuous: if they cut rate limits again or hurt my experience in the slightest way, it will be an insta-cancel.

So the market situation is much different than in the early days of CC as a cutting-edge novel tool, and relying on that first-mover status forever is increasingly untenable in my opinion. The competition has had a long time to catch up, and both proprietary options like Codex and model-agnostic FOSS tools are in a very strong position now (except Gemini CLI, which is still frustrating to use as much as I wish it weren't; hopefully Google will fix the weird looping and other bugs ... eventually, because I really do like Gemini 3 and already pay for it via the AI Pro plan).


You've convinced me to give OpenCode a try!

Google Code Assist is pretty good. I had it create a pretty comprehensive inventory tracking app within the quota that you get with the $25 Google plan.

And if your revenue is $1B but your costs are $2B, it only lasts until the music stops...

I don’t think they are losing money on inference.

Model training, sure. But that will slow down at some point.


Why do you think so? Seems like the space is fiercely competitive, I would expect it to get more expensive.

How many times are people going to repeat this lazy statement?

If claude code's revenue grows faster than cost, it will become profitable.
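A quick back-of-envelope sketch of that point. The $1B revenue / $2B cost figures come from upthread; the growth rates are invented purely for illustration:

```python
# Illustrative only: if revenue grows faster than cost, the lines
# eventually cross, even starting from revenue well below cost.
# These growth rates are made up; only the starting figures are
# the $1B / $2B numbers mentioned upthread.
revenue, cost = 1.0, 2.0                # in billions of dollars
revenue_growth, cost_growth = 1.6, 1.2  # 60% vs 20% per year (assumed)

year = 0
while revenue < cost:
    revenue *= revenue_growth
    cost *= cost_growth
    year += 1

print(year)  # -> 3: crossover year under these made-up rates
```

Of course, the whole question is whether revenue growth actually outpaces cost growth; the arithmetic only says that *if* it does, profitability follows eventually.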


"If claude code's revenue grows faster than cost, it will become profitable."

No shit?


What was the moat in search?

Google had PageRank, which gave them much better quality results (and they got users to stick with them by offering lots of free services, like Gmail, that were better than existing paid services). The difference was night and day compared to the best other search engines at the time (WebCrawler was my go-to, then sometimes AltaVista).

The quality difference between "foundation" models is nil. Even the huge models they run in datacenters are hardly better than local models you can run on a machine with 64 GB+ of RAM (though faster, of course). As Google grew, it got better and better at giving you good results and fighting spam, while other search engines drowned in spam and were completely ruined by SEO.

PageRank wasn't that much better. It was better, and the word spread. Google also had a very clean UI at a time when websites like Excite and Yahoo had super bloated pages.

That was the differentiation. What makes you think AI companies can't find moats similar to Google's? The right UX and the right model, and a winner can race past everyone.


> PageRank wasn't that much better

It really was!

I remember the pre-Google days, when AltaVista was the best search engine: it just did keyword matching, so you had to wade through pages of results to hopefully find something of interest.

Google was like night & day. PageRank meant that typically the most useful results would be on the first page.
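For anyone who never saw the original idea: the core of PageRank is a simple iterative computation over the link graph. A minimal toy sketch (my own illustration with a made-up graph, not Google's actual implementation, which also dealt with spam, anchor text, etc.):

```python
# Toy PageRank via power iteration. A graph is a dict mapping each
# node to its list of outgoing links. The 0.85 damping factor is the
# value from the original PageRank paper.

def pagerank(graph, damping=0.85, iterations=50):
    nodes = list(graph)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iterations):
        # Everyone gets a baseline share of the "random jump" mass.
        new_rank = {node: (1.0 - damping) / n for node in nodes}
        for node, out_links in graph.items():
            if out_links:
                # Split this node's damped rank among its out-links.
                share = damping * rank[node] / len(out_links)
                for target in out_links:
                    new_rank[target] += share
            else:
                # Dangling node: spread its rank evenly over all pages.
                share = damping * rank[node] / n
                for target in nodes:
                    new_rank[target] += share
        rank = new_rank
    return rank

# A page that everything links to outranks the pages linking to it,
# regardless of keyword matching - that was the step past AltaVista.
graph = {"a": ["hub"], "b": ["hub"], "c": ["hub"], "hub": ["a"]}
ranks = pagerank(graph)
assert ranks["hub"] == max(ranks.values())
```

The point of the sketch: rank flows along links, so a heavily linked page rises to the top even when many pages contain the same keywords.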


PageRank. Everything before PageRank was more like the Yellow Pages than a search engine as we know it today. Google also had a patent on it, so it's not like other people could simply copy it.

Google was also way more minimal (and therefore faster on slow connections) and it raised enough money to operate without ads for years (while its competitors were filled with them).

Not really comparable to today, when you have 3-4 products which are pretty much identical, all operating at a huge loss.


The sheer amount of data and infrastructure Google has relative to their competitors.

Just having far more user search queries and click data gives them a huge advantage.


Google is in a two sided market. Their moat in search is their ads market share, their moat in ads is their search market share.


