Amazon was founded in 1994, went public in 1997, and became profitable in 2001. So Anthropic is two years behind on the IPO, but who knows, maybe they'll be profitable by 2028? OpenAI is even further behind schedule.
How much loss had they accumulated by 2001? Pretty sure it wasn't the $44 billion OpenAI has. And Amazon didn't have many direct competitors offering the same services.
Did Amazon really not turn a profit, or did they apply a bunch of tricks to make it appear like they didn't in order to avoid taxes? Given their history, I'd assume the latter: https://en.wikipedia.org/wiki/Amazon_tax_avoidance
Anyway, this has nothing to do with whether inference is profitable.
Their price is not a signal of their costs; it is the result of competitive pressure. This shouldn't be so hard to understand. Companies have been burning investor money for market share for quite some time.
This is the expected, normal outcome; why are you so defensive?
Because you made stuff up, did not show any proof, and ignored my proof to the contrary.
You made the claim:
> Deepseek lies about costs systematically.
DeepSeek broke down their costs in great detail, yet you simply called it "lies" without even mentioning which specific number of theirs you claim is a lie, so your statement is difficult to falsify. You also ignored my request for clarification.
You're citing DeepSeek's unaudited numbers. That is not even close to proof.
Until proven otherwise, it is propaganda.
Meanwhile, we have several industry experts pointing not only at DeepSeek's ridiculous claims of efficiency, but also at the lies coming from other labs.
That's not how valuations work. A company's valuation is typically based on an NPV (net present value) calculation: the sum of its expected future cash flows, each discounted back to the present. Depending on the company's strategy, it's often rational for it not to be profitable for quite a long while, as long as it can give investors the expectation of significant profitability down the line.
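For what it's worth, the arithmetic is simple. A minimal sketch, assuming a made-up cash flow schedule and a 10% discount rate (both purely illustrative, not any real company's numbers):

    /// Net present value: a cash flow expected in year t is worth
    /// cf / (1 + r)^t today, and the valuation is the sum over all years.
    fn npv(rate: f64, cash_flows: &[f64]) -> f64 {
        cash_flows
            .iter()
            .enumerate()
            .map(|(t, cf)| cf / (1.0 + rate).powi(t as i32))
            .sum()
    }

    fn main() {
        // Hypothetical company: five loss-making years, then growing profits.
        let cash_flows = [-10.0, -8.0, -6.0, -4.0, -2.0, 5.0, 10.0, 15.0, 20.0, 25.0];
        println!("NPV: {:.2}", npv(0.10, &cash_flows));
    }

Despite the early losses, this NPV comes out positive, which is exactly why investors can rationally fund years of red ink.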
Having said that, I do think there is an investment bubble in AI; I'm just arguing that you're not looking at the right signal.
This sounds wrong: features have to be the value of your code. The required maintenance and the slowdown in building more features (technical debt) are the liability, which is how I understood the relationship to "lines of code" anyway.
I can sort of understand it if I squint: every feature is a maintenance burden, and a risk of looking bad in front of users when you break or remove it, even for users who never touched that feature. It really is a burden to be avoided when the point of your product is to grow its user base rather than to actually be useful. Which explains why even Fisher-Price toys look more featureful and ergonomic than most new software products.
Crux seems interesting for sharing app logic between platforms, but I don't see how it helps actually render anything. Don't you still need a GUI framework that supports Android or iOS?
Having spent time around cross-platform rollouts and development, I think something like Crux is the best approach. Building a complete UI framework to rival what iOS and Android provide natively is a monumental task.
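And yes, you still render natively; that division of labor is the point. The shared core is a pure state machine (events in, view model out), and each platform keeps a thin native shell (SwiftUI, Jetpack Compose, a web front end) that only renders and forwards events. A schematic sketch of that split, using made-up types rather than Crux's actual API:

    // All app state lives in the shared core, never in the UI layer.
    #[derive(Default)]
    struct Model {
        count: i64,
    }

    // Everything the UI can do is an event sent into the core.
    enum Event {
        Increment,
        Decrement,
    }

    // A plain description of what to show; the shell renders it natively.
    struct ViewModel {
        text: String,
    }

    // Pure update function: identical behavior on every platform.
    fn update(model: &mut Model, event: Event) {
        match event {
            Event::Increment => model.count += 1,
            Event::Decrement => model.count -= 1,
        }
    }

    // The core never touches platform UI APIs; it just produces data.
    fn view(model: &Model) -> ViewModel {
        ViewModel { text: format!("Count: {}", model.count) }
    }

    fn main() {
        // A native shell would drive this loop over FFI bindings instead.
        let mut model = Model::default();
        update(&mut model, Event::Increment);
        update(&mut model, Event::Increment);
        update(&mut model, Event::Decrement);
        println!("{}", view(&model).text);
    }

Because the core is pure, you test the business logic once and reuse it everywhere; only the thin per-platform rendering layer has to be rewritten.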
I think it's called the "diving reflex"; I'm not very sure about it all, but AFAIK babies can learn to swim properly quite early, which makes me think that humans, too, come with a lot of "ready to go" features that maybe need some prompting to surface.
Kids (and even adults) definitely don't know how to swim off the bat, though I have no doubt they could be taught earlier than many are. There's a reason some universities require a swimming physical education course absent a demonstrated ability to swim.
With peer review you do not even have a choice as to which reviewers to trust, as it is all homogenized into a binary accept-or-reject decision. This is marginally better if the reviews are published.
That is to say, I also think it would be worthwhile to try.