phi-go's comments | Hacker News

Why do you think that?

DeepSeek had a theoretical profit margin of 545% [1] with far inferior GPUs, at 1/60th the API price.
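To unpack that number, a trivial sketch (assuming the margin is defined as (revenue - cost) / cost, as in DeepSeek's write-up):

    // What a 545% cost-profit margin implies, assuming
    // margin = (revenue - cost) / cost.
    fn main() {
        let margin = 5.45; // 545%
        let cost = 1.0;    // normalize daily inference cost to 1
        let revenue = cost * (1.0 + margin);
        // Prints ~6.45: theoretical revenue is about 6.45x the serving cost.
        println!("revenue / cost = {:.2}", revenue / cost);
    }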

Anthropic's Opus 4.6 is a bit bigger, but they'd have to be insanely incompetent to not make a profit on inference.

[1] https://github.com/deepseek-ai/open-infra-index/blob/main/20...


American labs trained their models in a different way than the Chinese labs did. They might be making a profit on inference, but they are burning money otherwise.

> they'd have to be insanely incompetent to not make a profit on inference.

Are you aware of how many years Amazon didn’t turn a profit?

Not agreeing with the tactic - just…are you aware of it?


Amazon was founded in 1994, went public in 1997 and became profitable in 2001. So Anthropic is two years behind with the IPO but who knows, maybe they'll be profitable by 2028? OpenAI is even more behind schedule.

How much loss did they accumulate by 2001? Pretty sure it wasn't the $44 billion OpenAI has. And Amazon didn't have many direct competitors offering the same services.

Did Amazon really not turn a profit, or did they apply a bunch of tricks to make it appear like they didn't in order to avoid taxes? Given their history, I'd assume the latter: https://en.wikipedia.org/wiki/Amazon_tax_avoidance

Anyway, this has nothing to do with whether inference is profitable.


It has everything to do with whether they make a profit on paper, vs. giving away the farm via free-tier accounts and free trials, and, last but not least, subsidized compute to hook entire organizations.


Deepseek lies about costs systematically. This is just another fantasy.

What do you base your accusations on? Is there a specific number from the link above that you claim is a lie?

And how are 7 other providers able to offer DeepSeek API access at roughly the same price as DeepSeek?

https://openrouter.ai/deepseek/deepseek-v3.2


Their price is not a signal of their costs; it is the result of competitive pressure. This shouldn't be so hard to understand. Companies have burned investor money for market share for quite some time in our world.

This is expected, this is normal, so why are you so defensive?


> why are you so defensive?

Because you made stuff up, did not show any proof, and ignored my proof to the contrary.

You made the claim:

    > Deepseek lies about costs systematically.

DeepSeek broke down their costs in great detail, yet you simply called it "lies" without even mentioning which specific number of theirs you claim is a lie, so your statement is difficult to falsify. You also ignored my request for clarification.

You’re citing DeepSeek's unaudited numbers. This is not even close to proof. Unless proven otherwise, it is propaganda. Meanwhile we have several industry experts pointing not only to DeepSeek's ridiculous claims of efficiency, but also to the lies from other labs.

Again, your claims are impossible to verify or falsify, because they are too unspecific.

> Meanwhile we have several industry experts pointing not only to DeepSeek's ridiculous claims of efficiency, but also to the lies from other labs.

What exactly are those "industry experts" saying is made up, and what is their basis for that?

> You’re citing DeepSeek's unaudited numbers.

Which specific number are you claiming to be fake?

I could just guess blindly and find alternative sources for random numbers from DeepSeek's article.

For example, the tokens-per-second efficiency can also be calculated based on the 30k tps from this NVIDIA article: https://developer.nvidia.com/blog/nvidia-blackwell-delivers-...

But looking for other sources is a waste of my time, when you could just be more precise.


Because if you don't, then current valuations are a bubble propped up by burning a mountain of cash.

That's not how valuations work. A company's valuation is typically based on an NPV (net present value) calculation, which is a power series of its time-discounted future cash flows. Depending on the company's strategy, it's often rational for it to not be profitable for quite a long while, as long as it can give investors the expectation of significant profitability down the line.
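A minimal sketch of that calculation (a generic NPV helper with made-up cash flows, not any particular analyst's model):

    /// Net present value: the sum of future cash flows, each discounted
    /// back to today at rate `rate` (e.g. 0.10 for 10% per period).
    fn npv(rate: f64, cash_flows: &[f64]) -> f64 {
        cash_flows
            .iter()
            .enumerate()
            .map(|(t, cf)| cf / (1.0 + rate).powi(t as i32 + 1))
            .sum()
    }

    fn main() {
        // Years of losses followed by large profits can still have a
        // positive NPV, which is why unprofitable companies can carry
        // high valuations (the numbers below are purely illustrative).
        let flows = [-5.0, -4.0, -2.0, 3.0, 8.0, 15.0];
        println!("NPV at 10%: {:.2}", npv(0.10, &flows));
    }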

Having said that, I do think that there is an investment bubble in AI, but am just arguing that you're not looking at the right signal.


And that's OpenAI's biz model? :)

This sounds wrong; features have to be the value of your code. The required maintenance and the slowdown in building further features (technical debt) are the liability, which is how I understood the relationship to "lines of code" anyway.


Wrong or not, the industry embraced it.

I can sort of understand it if I squint: every feature is a maintenance burden, and a risk of looking bad in front of users when you break or remove it, even for users who didn't use that feature. It's really a burden to be avoided when the point of your product is to grow its user base, not to actually be useful. Which explains why even Fisher-Price toys look more featureful and ergonomic than most new software products.


Crux seems interesting for sharing app logic between platforms, but I don't see how it helps actually render something. Don't you still need a GUI framework that supports Android or iOS?


Having spent time around cross-platform rollouts and development, I think something like Crux is the best approach. Building a complete UI framework to rival what iOS and Android provide natively is a monumental task.
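Roughly, the split looks like this (a minimal sketch in plain Rust with made-up types, not Crux's actual API): the shared core owns state and update logic, and each platform's native UI (SwiftUI, Jetpack Compose, etc.) just sends events in and renders the view model it gets back.

    // Illustrative shared core: pure state + update logic compiled for every
    // platform; the native shells do the actual rendering.
    #[derive(Default)]
    struct Model {
        count: i64,
    }

    enum Event {
        Increment,
        Decrement,
    }

    /// What the native shell actually draws.
    struct ViewModel {
        label: String,
    }

    fn update(model: &mut Model, event: Event) -> ViewModel {
        match event {
            Event::Increment => model.count += 1,
            Event::Decrement => model.count -= 1,
        }
        ViewModel {
            label: format!("Count: {}", model.count),
        }
    }

    fn main() {
        // A native shell would drive this over FFI; here we call it directly.
        let mut model = Model::default();
        update(&mut model, Event::Increment);
        update(&mut model, Event::Increment);
        let view = update(&mut model, Event::Decrement);
        println!("{}", view.label); // "Count: 1"
    }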


Yes (from the README)


This is very neat. I'll have to try it out to say more but thanks for open sourcing it.


You're welcome. I hope it's helpful!


Do you know SkedPal?


I don't think babies can swim but they know not to try and breathe in water. Which is probably what you meant.


I think it's called the "diving reflex". I'm not very sure about it all, but AFAIK babies can learn to swim properly quite early, which makes me think that humans too come with a lot of "ready to go" features that maybe need some prompting to surface.


Kids (and even adults) definitely don't know how to swim off the bat, though I have no doubt they could be taught earlier than many are. There's a reason some universities have a requirement to take swimming physical education absent a demonstrated ability to swim.


You can also forbid unwraps as part of clippy.
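For example, a minimal sketch using Clippy's `unwrap_used` restriction lint, denied at the crate root (the same lint can also be set via `[lints.clippy]` in Cargo.toml):

    // Deny `.unwrap()` calls crate-wide via Clippy's restriction lint.
    // (There is a matching `clippy::expect_used` lint for `.expect()`.)
    #![deny(clippy::unwrap_used)]

    fn main() {
        let parsed: Result<i32, _> = "42".parse();
        // This would now fail `cargo clippy` with an error:
        // let n = parsed.unwrap();
        // Handle the Result explicitly instead:
        match parsed {
            Ok(n) => println!("parsed {n}"),
            Err(e) => eprintln!("parse failed: {e}"),
        }
    }

With `deny`, running `cargo clippy` fails the check on any `.unwrap()` call instead of merely warning about it.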


With peer review you do not even have a choice as to which reviewers to trust, as it is all homogenized into a single accept-or-reject decision. This is marginally better if the reviews are published.

That is to say I also think it would be worthwhile to try.


There are numbers on how many tries it took. I would also find the individual prompts and images interesting.


But they do not get paid the same. Let's say Spotify has two users with 100 listens total: 99 Taylor Swift listens and one listen for the obscure artist.

If paid by total listens, Taylor gets 99%.

Now say those 99 listens are all from one user and the single listen is from the other user. Paying by each user's own listen ratio, Taylor gets 50%.
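A quick sketch of that arithmetic (made-up dollar amounts, assuming each user contributes an equal royalty pool):

    // Worked example: one $10 pool per user (made-up amount), two users,
    // 99 Taylor listens from user A and 1 obscure-artist listen from user B.
    fn main() {
        let pool_per_user = 10.0_f64;
        let total_pool = 2.0 * pool_per_user;

        // Pro-rata (current model): split the whole pool by share of all listens.
        let taylor_pro_rata = total_pool * 99.0 / 100.0; // $19.80, i.e. 99%

        // User-centric: split each user's pool by that user's own listens.
        // User A listened only to Taylor, user B only to the obscure artist.
        let taylor_user_centric = pool_per_user * (99.0 / 99.0); // $10.00, i.e. 50%

        println!("pro-rata:     {:.2} of {:.2}", taylor_pro_rata, total_pool);
        println!("user-centric: {:.2} of {:.2}", taylor_user_centric, total_pool);
    }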


That only matters if, as a whole, Taylor Swift listeners listen to Spotify for longer than obscure-artist listeners. Which hasn’t been shown.

Over the entire population of Spotify listeners you are going to have high- and low-listen-count users that average out.


No, it was shown that this doesn't average out; there is a study linked upthread.

