Hacker News | thereitgoes456's comments

I see you're treating Sam Altman as a trustworthy source. Might it be possible that he's making that up -- of course, nobody will ever call him on it! -- and exaggerating the numbers to make his company and team look good and ethical for turning down such lucrative offers, or perhaps to make his own researchers sour on Meta for not receiving $100M offers themselves?

Reporting at the time said Wang was far from Zuck’s top choice. Murati and Ilya (among others) were all asked first and said no.

What a baffling comment. Aren’t you aware of why this exodus is happening? (It’s not related to “value for the money”!) What are your feelings on that part?


It is entirely okay to weigh the Department of War thing against other criteria when choosing a service.


Agreed, but the comment should mention it. Nobody is talking about value for money right now.

I didn't mean to advocate for Anthropic, apologies.


Whatever Anthropic might or might not do with the Department of War interests me in proportion to how much I can influence it. Rounded, speaking as a European citizen, that appears to be exactly 0.


If anyone thinks Anthropic or OpenAI are the "good guys," they've already lost the plot. If you look at additional reporting on the topic, not just Anthropic's PR spin, the disagreements were much more nuanced than Anthropic portrayed them. They aren't exactly a reliable narrator here either. In fact, it seems Amodei fumbled the deal and crashed out a bit: he's already walked back his internal memo and is reportedly still seeking a deal with the Pentagon. I don't trust either CEO. I use their products, but if you're even leaning 51-49 on who is "less evil," I think you're giving them too much slack.


Then the people mad about "mass surveillance" recommend Gemini or whatever.

They're just keeping up with the outrage news cycles.


Ever tried living while only patronizing groups that strictly align with your own moral and ethical beliefs?

I would love to, but in practice it seems impossible.

My $0.02: Claude was already involved in underhanded shit I don't want a part of[0], and that generated little ethical response from Anthropic. I've had better luck as a $200/mo tier customer with ChatGPT, and I don't think Dario claiming that their newest LLM is conscious[1] on a marketing schedule is all that ethical, either.

[0]: https://en.wikipedia.org/wiki/Project_Maven [1]: https://tech.yahoo.com/ai/claude/articles/anthropic-ceo-admi...


Why paint the choice as black and white? Most people are doing the best they can morally, even if they don't get it 100% right. Living 60% in accordance with your values is better than 50%. Likewise, bucketing organizations as simply good or bad misses the same nuance. Choosing something that is slightly better has positive consequences even if it isn't 100% good.


Not the poster, but I guess that's kind of American thinking, actually believing that voting with your wallet will make any difference in this late-stage crony capitalism in a post-facts world.

Realistically: AI WILL get used by the military and for killing autonomously, like it or not, believe it or not. I am also against that in principle, but I accept the fact that my opinion just doesn't matter and practice radical acceptance of reality as-is. Twitter/X is also alive and kicking, despite Musk and the anti-Musk hate. xAI/Grok is genuinely really good too compared to OAI/Claude, a bit different but very good. At this point all the "outcries" feel like noise I just skip on principle. But it could turn up the fire under the OAI team to go aggressive on features and pricing in order to retain or grow their userbase again, which is ... good, after all.


My guess would be ~$30mm. MyFitnessPal itself, which generates the most revenue of all health & fitness apps, was sold as an undesirable asset for $345 million a few years ago and probably does not have mountains of cash sitting around.

Cal AI is popular and its revenue seems great, but its profit margins are probably quite slim; it relies heavily on advertising, and I imagine 80%+ of its revenue comes on day 1.


Sam (and Greg Brockman) want something they do not have, very desperately. They want to win, to be Great Men, to be remembered by history with Jobs and Gates and the other tech luminaries. This is mentioned in Karen Hao's Empire of AI.

They are both a lesson to me that no matter how much you have, you will not necessarily be satisfied.


The President is crashing out on X because a company didn’t do what they wanted. “Forcing” is not a binary. Do you seriously believe that the government’s behavior here is acceptable and has no chilling effect on future companies?


I credit them for acknowledging their limitations and not actively trying to be misleading. Unlike a certain other company in the space.


What an untalented writer. His prose is clunky and every paragraph drips with sanctimony and reaching generalizations.


Well, the writer has had a number of books published, which appear to have been successful. So he has found a market and delivered completed work.


So it's okay he's a bad writer? Sales above all?


It's more that he has completed book-length works published by a publishing house, and therefore subjected to content editing, copy editing, and finding an audience. That's quite a complex multi-stage process.

What criteria are you using to evaluate writing? Can you point to an example that you think is good writing?

(I say all this as someone who is still working on stringing words into sentences)


Understandable worry, but it's not surface-level at all. Karen Hao is a great journalist. Highly recommend.


They also have an image model that’s fallen behind, a coding model that’s fallen behind, a good video model, a social AI slop feed powered by that model, and an upcoming erotica mode(l)


Sex robot… classy stuff @sama.

