I didn't address the comment about how some startups are operating at a loss because it seems like an irrelevant nitpick of my wording that "none of them" is operating inference at a loss. I don't think the comment I was replying to was referring to whatever startups you're talking about. I think they were referring to Google, Anthropic, and OpenAI - and so was I.
That seems like a theme with these replies: nitpicking a minor thing, ignoring the context, or both. Or, more generously, I could blame myself for not being more precise with my wording. But sure, you have to buy new GPUs after making a bunch of money burning down the ones you have.
I think your point about knowledge cutoff is interesting, and I don't know what the ongoing cost of keeping a model up to date with world knowledge is. Most of the agents I think about personally don't actually want world knowledge and have to be prompted or fine-tuned so that they won't use it. So that requirement kind of slipped my mind.