Is there a more benign explanation for these things? Altman is famously cagey and political, and by now much of the tech and non-tech world sees him as some kind of con artist, but I still kind of want to believe he's not.
No doubt some of OpenAI's founding principles, like "stop and assist if a competitor gets to AGI first," are flying out the window, perhaps partly because of him and partly as one might expect of lofty initial ideals and promises. But even after the recent New Yorker piece and other articles, he seems like someone who placates people to avoid personal problems and lies to get out of trouble, rather than a Machiavellian tech baron.
> he seems like someone who placates people to avoid personal problems and lies to get out of trouble, rather than a Machiavellian tech baron.
This would be more plausible were it not for the staggering amount of wealth he’s amassed through those lies.
I mean, what if he's actually the second coming of Christ? We can make up "what if"s all day, but it's meaningless to even discuss them if you don't have a shred of evidence to support the claim.
So much of the AI hype is religion, re-encoded. It's relevant because the AI companies themselves invoke these ideas. If you go around telling people that AI is going to cure cancer, bring about global prosperity, and give you uploaded immortality, then you cannot be surprised when some people start thinking of it as something like the second coming.
I'm consistently amused by the fact that there's still this weird faction of populists on even tech-oriented sites like HN and /r/programming and lobste.rs and Mastodon who have this almost antivax-level stance on AI. I'm not precisely sure what explains it, because many of them actually are smart people and good programmers.
AI very likely will cure many cancers, and very possibly (assuming it's paired with good politicians) will bring about global prosperity. A high percentage of AI company employees, executives, open source developers, and researchers sincerely believe this, which is why they say it. They have good reason to believe it, and they will likely be proven correct. If 400 (or 40, or 4) years pass and it's still mostly just creating spreadsheets, I will concede, though.
> I mean, what if he's actually the second coming of Christ?
Makes sense. Cue Don LaFontaine: In a world, where one man sacrificed himself for all of humanity… And they learned nothing of his lessons… In a country where people lie in his name as an excuse to hate their fellow man… Where they mock him by wearing his moment of death as jewellery¹… He’s back and adopted a new identity to slowly fuck them all and make the world burn… Johnny W Pussyfoot is Jesus in: The Second Coming.
I could write a giant response to this with dozens of quotes from him and others and various sources, but you would just say it's all lies and posturing, so it would not be a good use of my time. I will say that his becoming one of the most prominent funders and promoters of UBI research and experiments 6 years before GPT-3 is probably not a coincidence. OpenAI releasing a paper a month ago strongly suggesting the US move towards a more socialist economic system to handle massive economic upheaval is also probably not a coincidence. He obviously founded OpenAI with the primary intent of building AI so that they could make it go well instead of poorly, and going well means properly addressing mass unemployment, biosecurity risks, and some degree of widely distributed access, so that the very poorest get meaningful use of the exact same intelligence as the very richest.
This is what the most midwit milquetoast person in the country would try to do if they were in Sam's shoes and, like Sam, once they had a couple billion dollars dangled in front of them they'd abandon all regard for safety or distribution of wealth or whatever-else-they-thought-they-ought-to-care-about.
Come on… The guy who said he can’t imagine caring for his child without consulting ChatGPT… The guy who said he didn’t know how to make revenue with ChatGPT, and made a “soft promise” to investors that they’d somehow achieve AGI and then ask it how to make money… The guy who made a cryptocurrency scam that was banned in multiple countries… The guy everyone around him says is a con artist and a sociopath… That guy? Really?
Really. That guy. I'm afraid you're one side of the coin in https://paulgraham.com/fh.html. Paul himself would agree with me on that if he were to read your post.
I’ve read enough Paul Graham to know he’s not someone whose opinion I care about or respect. He’s yet another rich guy tech bro out of touch with normal people who unfortunately has an army of wannabe tech and finance bro shills clinging to his every word like he’s some sort of sage. He’s not. He isn’t smarter than anyone else, he just has a bigger platform. I don’t abide by cults of personality, they’re a major reason everything is shit right now. They’re the fuel that perpetuates online and offline toxicity. Bragging that Paul Graham would agree with you is like bragging Will Smith or Kim Kardashian agrees with you: it’s not a badge of honour even if it’s true, and doesn’t make your argument stronger or mean you’re right.
But since you value his opinion so much, perhaps you should inform yourself of what he has said about Altman, including “Sam had been lying to us all the time”.
Paul Graham, founder of the website we're on, still says he likes and trusts him. He was just annoyed that Altman was constantly distracted by AI stuff when he was supposed to be running YC as president (which bears no resemblance to any current events...).
He, at worst, finds him to have been (at some point) incompetent, which is very different from finding him immoral. Paul keeps replying to tweets to clarify this when people continue to misportray his stance.
"He was accused by the OpenAI board of lying to them, ousted, and somehow managed to regain control." is the only thing you wrote which is plainly true and valid to state.
I am sure there may exist good, strong criticisms, but your argument is so tendentiously gish-gallopy that it will if anything just make people more likely to disbelieve his critics. (Not that I would do that, since that'd be just as fallacious.)
Why would OpenAI employees all still be happily working for him and publicly supporting him? Why is the company still so successful, and the market leader? Why wouldn't most of them have left in droves to Anthropic or elsewhere by now? Especially given that most technical employees at OpenAI (justifiably) share the eschatological views of AI held by Anthropic staff and other TESCREALists, in which case they really, really try to be careful about who will be responsible for potential future superintelligence. The board and some executives disliked and distrusted him, but it's unclear whether many other people there did or do now. And I'm not just talking about the petition but the people who have continued working there for years afterwards.
We’ll see in time if your confident trust in Sam Altman’s good nature is justified.
Personally, if someone is found to be untrustworthy by multiple people and does weird stuff (like moving from an open non-profit to a for-profit), I trust them a lot less. I don’t know him, so I won’t pass judgment, but I wouldn’t trust him and certainly wouldn’t give his statements credence, since he’s been so spectacularly wrong on AI outcomes.
As for people at OpenAI and investors in OpenAI, I certainly wouldn’t expect them to denigrate their CEO just before an IPO, the one who fought off the board and installed his own placemen and thus has complete control; it is not in their interests to do so.
If there is a bust after this boom, I think quite a lot of bad behaviour and circular deals among the main players (Nvidia, OpenAI, MS, etc.) will be revealed at that point. In a financial boom, a lot gets hidden.
Beware that there exist people who will cut you out of their lives (professional, personal, whatever) completely if they find out you do this: likely with no warning, and possibly loudly, publicly, and with receipts, if they’ve seen this kind of thing before or have thought through what your next steps will be after they cut you out.
All it takes is for a few of them to start comparing notes behind your back. Shit goes sideways extremely fast for people pulling this whose victims start talking to each other without them as the intermediary.
The secret is to be able to fail up at a rate higher than you burn the ecosystem around you: you're gone before people notice. That, and being funny, with enough charisma that it doesn't matter. Sam can't actually operate in this environment; everyone already knows his manipulative shticks.
This is one of the reasons that startups prefer the young: they often haven't been exposed to the grift and the manipulation. As a tech bro sociopath, I'd be wary of joining a startup with a mixture of ages, genders, and experiences across the spectrum of ICs and management. They've probably experienced too much to be griftable in the same ways as an org stacked with young ICs. You also want to make sure there are other people in the management chain who are more emotionally unstable; it takes much of the focus off one's own pathologies.
I am a beginner to Rust, but I coded with gevent in Python for many years and later moved to Go. Goroutines and gevent greenlets work seamlessly with synchronous code, with no headache. I know there have been tons of blog posts and such saying they're actually far inferior and riskier, but I've really never had any issues with them. I am not sure why more languages don't go with a green-thread-like approach.
Because they have their own drawbacks. To make them really useful, you need a resizable stack, which is a no-go for a runtime-less language like Rust.
You may also need to set up a large stack frame for each C FFI call, since C code assumes a normal-sized stack.
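For contrast, here's a minimal sketch of the trade-off in today's Rust (using only std, nothing hypothetical): OS threads take a fixed stack size at spawn time, whereas green threads would need stacks that can grow at runtime, which requires runtime machinery:

    use std::thread;

    fn main() {
        // Without green threads, Rust exposes the OS primitive directly:
        // the stack size is fixed when the thread is spawned. A green
        // thread runtime would instead need growable stacks, plus the
        // large-frame dance around C FFI calls mentioned above.
        let handle = thread::Builder::new()
            .stack_size(64 * 1024) // 64 KiB, fixed for the thread's lifetime
            .spawn(|| {
                println!("running on a small, fixed-size stack");
            })
            .expect("failed to spawn thread");
        handle.join().unwrap();
    }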
Rust originally came with a green thread library as part of its primary concurrency story but it was removed pre-1.0 because it imposed unacceptable constraints on code that didn’t use it (it’s very much not a zero cost abstraction).
As an Elixir + Erlang developer I agree it’s a great programming model for many applications, it just wasn’t right for the Rust stdlib.
One of Rust's central design goals is to allow zero cost abstractions. Unifying the async model by basically treating all code as being possibly async would make that very challenging, if not impossible. Could be an interesting idea, but not currently tenable.
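To make the "zero cost" point concrete, here's roughly (hand-waving, with made-up names) what a trivial async block desugars to: a plain struct with a poll method, no runtime, allocation, or stack switching required:

    use std::future::Future;
    use std::pin::Pin;
    use std::task::{Context, Poll};

    // Roughly what `async { x + 2 }` compiles down to: an inert state
    // machine that only does work when polled. Nothing here needs a
    // runtime, which is what keeps the abstraction zero cost.
    struct AddTwo {
        x: i32,
    }

    impl Future for AddTwo {
        type Output = i32;
        fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<i32> {
            Poll::Ready(self.x + 2)
        }
    }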
One problem I have with systems like gevent is that it can make it much harder to look at some code and figure out what execution model it's going to run with. Early Rust actually did have a N:M threading model as part of its runtime, but it was dropped.
I think one thing Rust could do to make async feel less like an MVP is to ship a default executor, much like it has a default allocator.
They could still come in a step short of default executor and establish some standard traits/types that are typical across executors.
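Something like this, perhaps; a purely hypothetical sketch (neither the `Spawn` trait nor `spawn_boxed` exists in std today) of the kind of executor-agnostic trait that could be standardized, which tokio, smol, embassy, etc. would then implement:

    use std::future::Future;
    use std::pin::Pin;

    // Hypothetical only; std has no spawn trait today.
    type BoxFuture = Pin<Box<dyn Future<Output = ()> + Send + 'static>>;

    trait Spawn {
        fn spawn_boxed(&self, fut: BoxFuture);
    }

    // A library could then spawn background work without hard-coding
    // any particular executor:
    fn start_heartbeat<E: Spawn>(exec: &E) {
        exec.spawn_boxed(Box::pin(async {
            // periodic work would go here
        }));
    }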
By providing a default, I think you're going to paint yourself into a corner. Maybe have one or two opt-in executors in the box: one that is higher-resource, like tokio, and one that is meant for lower-resource environments (like embedded).
The attribution is likely incorrect. People have been making this accusation for many years, and the evidence is not very strong. This article is the strongest yet, but it still commits many stylometric fallacies, among other kinds.
I'm not opposed to LLM-generated code at all, but such an obviously LLM-written README is annoying. The style is so easy to spot. At least try to figure out how to prompt it not to write so obviously like an LLM. (And no, I'm not even referring to the em dashes.)
I agree that was/is an absolutely horrible feature (and it took me way too long to realize I could/should turn it off) and always should've been opt-in, but the current version is honestly quite nice to use. I would not have recommended it a few months ago but I would recommend it now.
I was iffy on it at first but they released an update a week or two ago to make it much more "bring-your-own-coding-agent"-driven, where they facilitate you having lots of tabs with Codex and Claude Code rather than trying to shove theirs down your throat. IMO it's quite a good terminal now, even if they do still have a few other remaining throat-shoving dark patterns in the UI they need to strip out. (The big one being that pressing "+" tries to encourage you to start a new Warp Agent rather than just create a new terminal tab, with no way to change/override it, currently. If they fix that I'd say it'd be stellar.)
The license is the license. I don't know what you expect. I think, to be a good sport, they ought to mention in an About page that they're forked from Alacritty, with a clear link and thank you/appreciation note for the foundation code, but anything beyond that is both unnecessary and should not ever be expected.
(Side note, but I find it odd how anti-corporate and anti-AI HN has become over the past decade. I am very much not right-wing and frankly I loathe rightists, but I am also very much not a socialist. Though I'm not a libertarian either, to be clear; I just don't have an instinctive revulsion towards corporations who use open source code - or corporations who have more restrictive licenses to prevent this very thing, like Elasticsearch or MongoDB - or towards AI companies for training on public things, or really towards corporations in general. I am perhaps the rare left-leaning corporate shill.)
You don't understand why tech workers are suddenly visibly, even violently, angry at an industry they helped usher into power whose leadership hold deeply anti-human and anti-democratic views?
Honest Q just for you: have you been in a coma for the last 10 years?
I detest the "tech right" very deeply. But most tech execs and employees voted for the Democratic candidate in 2016, 2020, and 2024. Hatred should be directed at the actual people involved. The Andreessens of the world.
I see no evidence the creators of Warp have done anything illiberal or pro-Trump.
I have like 15 concurrent sessions I leave up for weeks, 50% Codex and 50% Claude Code, even though I know they work better with fresh context. Then again, I also always have at least 200 browser tabs open. I probably just have a mental illness.
Can you elaborate? I want a maximalist setup. I like that CC and Codex are maximalist. If I install Pi, I am going to end up using oh-my-pi and installing a trillion plugins to get a Claude Code-like experience (or better/more feature-heavy). Is there any point in me even trying Pi, or should I just stick with Claude Code?
Sorry I'm late but stick with CC. I introduced a coworker to Pi and spent most of the morning feeling like I should apologize for it not doing this or that out of the box.