
It is fairly rare to see an ex-employee put a positive spin on their work experience.

I don't think this makes OpenAI special. It's just a good reminder that the overwhelming majority of "why I left" posts are basically trying to justify why a person wasn't a good fit for an organization by blaming it squarely on the organization.

Look at it this way: the flip side of "incredibly bottoms-up" from this article is that there are people who feel rudderless because there is no roadmap or a thing carved out for them to own. Similarly, the flip side of "strong bias to action" and "changes direction on a dime" is that everything is chaotic and there's no consistent vision from the executives.

This cracked me up a bit, though: "As often as OpenAI is maligned in the press, everyone I met there is actually trying to do the right thing" - yes! That's true at almost every company that ends up making morally questionable decisions! There's no Bond villain at the helm. It's good people rationalizing things. It goes like this: we're the good guys. If we were evil, we could be doing things so much worse than X! Sure, some might object to X, but they miss the big picture: X is going to indirectly benefit the society because we're going to put the resulting money and power to good use. Without us, you could have the bad guys doing X instead!


I would never post any criticism of an employer in public. It can only harm my own career (just as being positive can only help it).

Given how vengeful Altman can reportedly be, this goes double for OpenAI. This guy even says they scour social media!

Whether subconsciously or not, one purpose of this post is probably to help this guy’s own personal network along; to try to put his weirdly short 14-month stint in the best possible light. I think it all makes him look like a mark, which is desirable to employers, so I guess it is working.


Calvin cofounded Segment, which was acquired for $3.2B. He's not your typical employee.


So this guy is filthy rich and yet decided to grind for 14 months with a newborn at home?

I guess that's why he's filthy rich.


I had a chance to join OpenAI 13 months ago too.

But I had a son 14 months ago.

There was absolutely no way I was going to miss any critical part of my baby’s life in order to be in an office at 2am managing a bad deployment.

Maybe I gave up my chance at PPU or RSU riches. But I know I chose a different kind of wealth that can never be replaced.


Wow, ditto! I thought I was the only one who took an extended leave to watch their baby grow up. Totally worth it, and it was a wonderful experience being able to focus 100% on her.


My daughter was born in 2020, when my employer was going through big changes and the world around me was obviously in chaos. There were real opportunities to work long days and advance in our new parent company. Instead, I took every day of paternity leave that they'd let me have and tossed in some PTO for good measure. There's nothing like being able to spend all day learning your new baby.


You both 100% made the right choice. The number of apologists for terrible fathers in this thread explains a lot.


Way to go, keeping the boring chores of the first months with the partner and joining in once the little one starts to be more fun after a year. With all that cash, I'm sure they could buy plenty of help for the partner too.


I don't know, when I became a parent I was in for the full ride, not to have someone else raising her. Yes, raising includes changing diapers and all that.


You make it sound like your choice is somehow the righteous one. I'm not convinced. What's wrong with hiring help, as long as it's well selected? And anyway, usually the help would take care of various errands to free up mom so she can focus on her baby. But maybe they have happily involved grandparents. Maybe he was working part-time. Or maybe there's some other factor we're completely missing right now.


So you sincerely think it’s ok that everybody takes care of the kid but the father because he’s rich and can afford multiple nannies? There’s not much context to miss when TFA has this:

> The Codex sprint was probably the hardest I've worked in nearly a decade. Most nights were up until 11 or midnight. Waking up to a newborn at 5:30 every morning. Heading to the office again at 7a. Working most weekends.


Does a household necessarily need multiple nannies to raise a baby? Grandparents might be willing to help and if there's some house help as well, no nannies might be needed at all, as long as the wife is happy with the arrangement, which I don't find impossible to entertain. Yeah, wealth allows for more freedom of choice, that's always been the case, but this type of arrangement is not unheard of across social classes.


A billionaire asking the grandparents for help with a newborn instead of spending some dollars for that help? C'mon, have you ever had a newborn?


>>free up mom so she can focus on her baby

Their baby, I presume…not just hers.

Literally any excuse for the man to not be involved.


There are certain experiences in life that one needs to go through to stay grounded in what really matters.


The people who will disagree with this statement would say, full throated, that what really mattered was shipping on time.

Couldn't be me. I do my work, then clock the fuck off, and I don't even have kids. I wasn't put upon this earth to write code or solve bugs, I just do that for the cash.


There is some parenting, and then there is good parenting. Most people don't have this option due to finances, but those who do and still skip it, picking up just the easy and nice parts - I don't have much sympathy or respect for them.

Then later they even have the balls to complain how kids these days are unruly, never acknowledging massive gaps in their own care.

Plus it certainly helps the kid with bonding, emotional stability and keeps the parent more in touch emotionally with their own kid(s).


> Then later they even have the balls to complain how kids these days are unruly, never acknowledging massive gaps in their own care.

My favorite is ‘I can’t understand why my kid didn’t turn into a responsible adult!’

Cue a look back at what opportunities the parent gave them to learn and practice those skills over the last 20 years.


Yeah, or, let the partner have the easy period before they are mobile, and when they sleep half the day, and then join the fun when they can walk off into the craft supplies/pantry where sugar/flour/etc. are stored/the workshop with the power tools etc., and when they drop the naptime and instead start waking at 5am and asking you to play Roblox with them.

Either option is priceless :-)


I just went through this period.

I would not describe it as easy.


You do know that early bonding experiences of newborns are crucial for their lifelong development? It reads like satire, or, if serious, plain child maltreatment.


It’s obvious why the HN community has downvoted this comment, but you’re absolutely spot on.

This thread reads like all the excuses for the emotional and actual abandonment of a mother and a newborn for a man’s little work project.


Pushing it a bit there, aren't we?


“Child abuse or maltreatment constitutes all forms of physical and/or emotional ill-treatment, sexual abuse, neglect or negligent treatment or commercial or other exploitation, resulting in actual or potential harm to the child’s health, survival, development or dignity […] Neglect includes the failure to provide for the development of the child in all spheres: health, education, emotional development, nutrition, shelter and safe living conditions.”

Source: World Health Organization, Child maltreatment, Fact sheet, 2020 https://www.who.int/news-room/fact-sheets/detail/child-maltr...

“The term ‘child abuse and neglect’ means, at a minimum, any recent act or failure to act on the part of a parent or caretaker, which results in death, serious physical or emotional harm, sexual abuse or exploitation, or an act or failure to act which presents an imminent risk of serious harm. This includes emotional neglect such as “extreme or bizarre forms of punishment, deliberate cruelty or rejection, or the failure to provide the necessary psychological nurturing.”

Source: U.S. Department of Health and Human Services, Child Abuse Prevention and Treatment Act (CAPTA) https://acf.gov/cb/law-regulation/child-abuse-prevention-and...

Emotional neglect includes “acts of omission, such as the failure to provide developmentally appropriate affection, attention, or emotional support.”

Source: APSAC, Practice Guidelines: The Investigation and Determination of Suspected Psychological Maltreatment of Children and Adolescents, 2017 https://apsac.org/guidelines


Winston, R., & Chicot, R. (2016). The importance of early bonding on the long-term mental health and resilience of children. London Journal of Primary Care, 8(1), 12–14. https://doi.org/10.1080/17571472.2015.1133012

Brown, G. L., Mangelsdorf, S. C., & Neff, C. (2012). Father involvement, paternal sensitivity, and father-child attachment security in the first 3 years. Journal of Family Psychology, 26(3), 421–430. https://doi.org/10.1037/a0027836

Deneault, A. A., Bakermans-Kranenburg, M. J., Groh, A. M., Fearon, P. R. M., & Madigan, S. (2021). Child-father attachment in early childhood and behavior problems: A meta-analysis. New Directions for Child and Adolescent Development, 2021(180), 43–66. https://doi.org/10.1002/cad.20434

Scism, A. R., & Cobb, R. L. (2017). Integrative review of factors and interventions that influence early father-infant bonding. Journal of Obstetric, Gynecologic, & Neonatal Nursing, 46(2), 163–170. https://doi.org/10.1016/j.jogn.2016.09.004

Jeong, J., Franchett, E. E., Ramos de Oliveira, C. V., Rehmani, K., & Yousafzai, A. K. (2021). Parenting interventions to promote early child development in the first three years of life: A global systematic review and meta-analysis. PLoS Medicine, 18(5), e1003602. https://doi.org/10.1371/journal.pmed.1003602

Joas, J., & Möhler, E. (2021). Bonding in early infancy predicts children's social competences in preschool age. Frontiers in Psychiatry, 12, 687535. https://doi.org/10.3389/fpsyt.2021.687535

Thümmler, R., Engel, E.-M., & Bartz, J. (2022). Strengthening emotional development and emotion regulation in childhood—as a key task in early childhood education. International Journal of Environmental Research and Public Health, 19(7), 3978. https://doi.org/10.3390/ijerph19073978


Lots of wealthy families have dysfunctional internal emotional patterns. A quick stat: there is more alcoholism among the wealthiest 1% than in the general population across the USA.


Wow! Wanting to work hard at building cool things == dysfunctional internal emotional pattern

Sums up western workforce attitude and why immigrants continue to crush them


It's unlikely he sees or even perceives what he's doing as a grind, but rather something akin to an exciting and engrossing chase or puzzle. If my mental model of these kind of Silicon Valley types is correct, neither is he likely to be in it for the money, at least not at the narrative self level. He most likely was "feelin' the AGI", in Ilya Sutskever's immortal words. I.e. feeling like this might be a once-in-a-million-years opportunity to birth a new species, if not a deity even.


Which is a YC startup. If you know anything about YC, it's that the network of founders supports each other no matter what.


> no matter what

except if you publicly speak of their leaders in less than glowing terms


Some books do a good job of documenting the power struggles that happen behind closed doors, big egos backed by millions clashing over ideas and control.

Not gonna lie, the entire article reads more like a puff piece than an honest reflection. Feels like something went down on Slack, some doors got slammed, and this article is just trying to keep them unlocked. Because no matter how rich you are in the Valley, if you're not on good terms with Sam, a lot of doors will close. He's the prodigy son of the Valley, adopted by Bill Gates and Peter Thiel, and secretly admired by Elon Musk. With Paul Graham's help, he spent 10 years building an army of followers by mentoring them and giving them money. Most of them are now millionaires with influence. And now, even the most powerful people in tech and politics need him. Jensen Huang needs his models to sell servers. Trump needs his expertise to upgrade defence systems. I saw him shaking hands with an Arab sheikh the other day. The kind of handshake that says: with your money and my ambition, we can rule the world.


Why, that's exactly what we desperately need: more "rule the world" egos!


That's even more of a reason not to bad-mouth other billionaires/billion-dollar companies. Billionaires and billion-dollar companies work together all the time. It's not a massive pool. There is a reason beef between companies, top-level execs, and billionaires is all rumors and tea-talk until a lawsuit drops out of nowhere.

You think every billionaire is gonna be unhinged like Musk calling the president a pedo on twitter?


Hebephile or ephebophile rather than pedo, to be precise. And we all saw how great a friend he was with Epstein for decades: frequent visitor to his parties, dancing together, and so on. Not really a shocking statement, whether true or not.


He is still manipulatable and driven by incentive like anyone else.


What incentives? It's not a very intellectual opinion to give wild hypotheticals with nothing to go on other than "it's possible".


I am not trying to advance wild hypotheticals, but something about his behavior does not quite feel right to me. Someone who has enough money for multiple lifetimes, working like he's possessed to launch a product minimally different from those at dozens of other companies, leaving his wife with all the childcare, then quitting after 14 months and insisting he was not burnt out, but without a clear next step, not even "I want to enjoy raising my child".

His experience at OpenAI feels overly positive and saccharine, with a few shockingly naive comments that others have noted. I think there is obvious incentive. One reason for this is, he may be in burnout, but does not want to admit it. Another is, he is looking to the future: to keep options open for funding and connections if (when) he chooses to found again. He might be lonely and just want others in his life. Or to feel like he's working on something that "matters" in some way that his other company didn't.

I don't know at all what he's actually thinking. But the idea that he is resistant to incentives just because he has had a successful exit seems untrue. I know people who are as rich as he is, and they are not much different than me.


Calvin just worked like this when I was at Segment. He picked what he worked on and worked really intensely at it. People most often burn out because of the lack of agency, not hours worked.

Also, keep in mind that people aren't the same. What seems hard to you might be easy to others, vice versa.


> People most often burn out because of the lack of agency, not hours worked.


Why did Michael Jordan retire 3 times? Sure, you could probably write a book about it, but you would want to get to know the guy first.


first time in 93 because of burnout from three peat, and allegedly a gambling problem. second because of the lockout and krause pushing phil out. third because too old


Not sure if it's genuine insight or just a well-written bit of thoughtful PR.

I don't know if this happens to anyone else, but the more I read about OpenAI, the more I like Meta. And I deleted Facebook years ago.


i know calvin, and he's one of the most authentic people i've worked with in tech. this could not be more off the mark


This reflection seems very unlikely to be authentic because it is full of superlatives and not a single bad thing (or at least not great) is mentioned. Real organizations made of real humans simply are not like this.

The fact that several commenters know the author personally goes some way to explain why the entire comment section seems to have missed the utterly unbalanced nature of the article.


People come out to defend their bosses a lot on this site, convincing themselves they know the powerful people best, that they’re “friends”. How can someone be so confident that a founder is authentic, when a large part of the founder’s job is to make you believe so (regardless of whether they are), and the employee’s own self-image pushes them to believe it too?


Some teams are bad, some teams are good.

I've always heard horror stories about Amazon, but when I speak to most people at, or from Amazon, they have great things to say. Some people are just optimists, too.


sounds exactly like a “typical employee”


>This guy even says they scour social media!

Every, and I mean every, technology company scours social media. Amazon has a team that monitors social media posts to make sure employees, their spouses, their friends don’t leak info, for example.


> There's no Bond villain at the helm. It's good people rationalizing things.

I worked for a few years at a company that made software for casinos, and this was absolutely not the case there. Casinos absolutely have fully shameless villains at the helm.


Interesting. A year ago I joined one of the larger online sportsbook/casinos. In terms of talent, employees are all over the map (both good and bad). But I have yet to meet a villain. Everyone here is doing the best they can.


Every villain wants to be the best villain they can be!

More seriously, everyone is the hero of their own story, no matter how obvious their failings are from the outside.

I’ve been burned by empathetically adopting someone’s worldview and only realizing later how messed up and self-serving it was.


I’m sure people working for cigarette companies are doing the best they can too. People can be good individuals and also work toward evil ends.


I am of the opinion that the greatest evils come from the most self-righteous.


That may very well be the case. But I think this is a distinct category of evil; the second one, in which you'll find most of the cigarette and gambling businesses, is that of evil caused by indifference.

"Yes, I agree there are some downsides to our product and there are some people suffering because of that - but no one is forcing them to buy from us, they're people with agency and free will, they can act as adults and choose not to buy. Now what is this talk about feedback loops and systemic effects? It's confusing, go away."

This category is where you'll also find most of the advertising business.

The self-righteous may be the source of the greatest evil by magnitude, but day-to-day, the indifferents make it up in volume.


It's not indifference, it's much more comically evil. Like, they're using software to identify gambling addicts on fixed incomes, to figure out how big retirees' social security checks are, and to ensure they lose the entire thing at the casino each week. They bonus out their marketing team for doing this successfully. They're using software to make sure that when a casino host's patron runs out of money and kills themselves, the casino host is not penalized but rewarded for a job well done.

At 8am every morning, the executives walk across the casino floor on their way to the board room, past the depressed people who have been there gambling by themselves the entire night, seeing their faces, then they go into a boardroom to strategize ways to get those people to gamble even harder. They brag about it. It's absolute pure villainy.


I wouldn't know if this is a fair characterization of other companies, but it certainly isn't anything like what I observe here. If you can't name names, I'm going to guess you just made this up.


We had a few dozen customers, and "percent of wallet" (figuring out how much money they walk into the casino with vs. how much they leave with) is a standard metric in casino marketing everywhere. You can figure out their paycheck based on them coming the same day of the week and losing the same amount multiple times, and market to them to ensure they lose their whole paycheck more often.

It's trivially easy to spot gambling addicts in the data, and in markets with better protections for gambling addicts they have to approach marketing quite differently. In some places you're allowed to ban yourself from the casino, and it's super illegal for the casino to market to you, so there are tons of protections to prevent all emails, texts, phone calls from hosts, physical mailers, ads of any form from reaching you.

The suicide anecdote is what caused me to quit. I'm ashamed to admit I asked my team to use an "IsDeceased" flag in the calculation for host bonus compensation, for when a patron dies while assigned to them. After that, I tried to transfer to the non-casino corner of the business where they were trying to sell our software to sports stadiums, and when they killed that off a few months later, I left the company. This was circa 2016, at a casino in the rust belt, but I'm not going to get more specific than that.


I appreciate this comment. You will see that the modern-day capitalist system, in general, punishes anyone with even a smidgen of the moral compass you have. The world of finance is this in spades. Having worked on Wall Street for pretty much my entire adult career and gone on to found my own fund, I came to an epiphany through a few fucked up experiences: my investors did not give two flying fucks what kind of person I was as long as I was generating solid returns. Moral compass be damned.

So, the casino industry is perhaps a convenient piñata, when in reality it's not the specific industry, it's the system.


Some people like to smoke. I find it disgusting myself, but as long as people want the experience I see no reason why someone else shouldn't be allowed to sell it to them. See also alcohol, drugs, porn, motorcycles, experimental aircraft, whatever.

We can have all sorts of interesting discussions about how to balance human independence with shared social costs, but it's not inherently "evil" to give consenting adults products and experiences they desire.

IMO, much more evil is caused by busybodies trying to tell other people what's good for them. See: The Drug War.


I disagree. The death toll from smoking is approximately the same as that of the Holocaust, but smoking repeats it every nine months. And 1.3 million per year of those deaths are non-smokers who die because they are exposed to second-hand smoke: https://ourworldindata.org/smoking

Even when the self-righteous are at their most dangerous, they have to be self-righteous and in power, e.g.:

  Caedite eos. Novit enim Dominus qui sunt eius. ("Kill them. For the Lord knows those that are His.")
- https://en.wikipedia.org/wiki/Caedite_eos._Novit_enim_Dominu....

or:

  រក្សាន្នកគ្មានប្រយោជន៍ខាត។ បំផ្លាញអ្នកគ្មានការខាតបង់ ("To keep you is no benefit. To destroy you is no loss.")
- https://km.wikipedia.org/wiki/ប្រជាជនថ្មី


I think y'all are agreeing.


Nah this is lawful evil (I Am Following The Rules Therefore I'm Doing The Right Thing) vs. neutral evil (I Just Work Here).


More like Chaotic Neutral. I like a world full of novel things, and I don't moralize about it.


There are jobs in which one may find oneself where doing them poorly is better for the world than doing them well.

I think you and your colleagues should sit back and take it easy, maybe have a few beers every lunchtime, install some video games on the company PCs, anything you can get away with. Don't get fired (because then you'll be replaced by keen new hires), just do the minimum acceptable and feel good about that karma you're accumulating as a brake on evil.


> We are all very good and kind and not at all evil, trust us if we do say so ourselves

Do these people have even minimal self-awareness?


VGT?


> It is fairly rare to see an ex-employee put a positive spin on their work experience

Much more common for OpenAI, because you lose all your vested equity if you talk negatively about OpenAI after leaving.


Absolutely correct.

There is a reason why there was cult-like behaviour on X amongst the employees in support of bringing Sam back as CEO when he was ousted by the OpenAI board of directors at the time.

"OpenAI is nothing without its people"

All of "AGI" (which in practice meant the Lamborghinis, penthouses, villas and mansions for the employees) was on the line and on hold if that equity went to zero, or if they were denied the ability to sell it for openly criticizing OpenAI after they left.


Yes, and the reason for that is that employees at OpenAI believed (reasonably) that they were cruising for Google-scale windfall payouts from their equity over a relatively short time horizon, and that Altman and Brockman leaving OpenAI and landing at a well-funded competitor, coupled with OpenAI corporate management that publicly opposed commercialization of their technology, would torpedo those payouts.

I'd have sounded cult-like too under those conditions (but I also don't believe AGI is a thing, so would not have a countervailing cult belief system to weigh against that behavior).


> I also don't believe AGI is a thing

Why not? I don't think we're anywhere close, but there are no physical limitations I can see that prevent AGI.

It's not impossible in the same way our current understanding indicates FTL travel or time travel is.


I also believe that AGI is not a thing, but for different reasons. I notice that almost everybody seems to implicitly assume, without justification, that humans are a GI (general intelligence). I think it's easy to see that if we are not a GI, then we can't see what we're missing, so it will feel like we might be GI when we're really not. People also don't seem interested in justifying why humans would be GI but other animals with 99% of the same DNA aren't.

My main reason for thinking general intelligence is not a thing is similar to how Turing completeness is not a thing. You can conceptualize a Turing machine, but you can't actually build one for real. I think actual general intelligence would require an infinite brain.


> I notice that almost everybody seems to implicitly assume, without justification, that humans are a GI (general intelligence). I think it's easy to see that if we are not a GI, then we can't see what we're missing, so it will feel like we might be GI when we're really not.

That's actually a great point which I'd never heard before. I agree that it's very likely that us humans do not really have GI, but rather only the intelligence that evolved stochastically to better favour our existence and reproduction, with all its positive and negative spandrels[0]. We can call that human intelligence (HI).

However, even if our "general" intelligence is a mirage, surely what most people imagine when they talk about 'AGI' is actually AHI, as in an artificial intelligence that has the same characteristics as human intelligence that in their own hubris they believe is general. Or are you making a harder argument, that human intelligence may not actually have the ability to create AHI?

[0] https://en.wikipedia.org/wiki/Spandrel_(biology)


Yes, I do think that people usually mean AHI even when they say AGI, although they don't realize it, because when asked to define AGI they talk about generality and not about mimicking humans. (Meanwhile, when they talk about sentience and consciousness, they will usually only afford that to an artificial entity if it is exactly like a human, and often not even then.)

> Or are you making a harder argument, that human intelligence may not actually have the ability to create AHI?

I wasn't, but I've pondered it since you brought it up. No, I don't think it's impossible to create a greater intelligence than oneself — in fact, evolution has already done it by creating animals, including but not limited to humans. I used to think it was impossible when I pondered science fictional characters like Data from TNG, but modern LLMs show that we can create it without having to understand how it works. Data is depicted as having been engineered, but machine learning is closer to evolution than it is to engineering.


If we were to believe the embodiment theory of intelligence (it’s by far not the only one out there, but very influential and convincing), this means that building an AGI is an equivalent problem to building an artificial human. Not a puppet, not a mock, not “sorta human”, but real, fully embodied human, down to gut bacterial biome, because according to the embodiment theory, this affects intelligence too.

In this formulation, it’s pretty much as impossible as time travel, really.


Sure, if we redefine "AGI" to mean "literally cloning a human biologically", then AGI suddenly is a very different problem (mainly one of ethics, since creating human clones, then educating, brainwashing, and forcing them to respond to chat messages a la ChatGPT raises a couple of ethical issues along the way).

I don't see how claiming that intelligence is multi-faceted makes AGI (the A is 'artificial' remember) impossible.

Even if _human_ intelligence requires eating yogurt for your gut biome, that doesn't preclude an artificial copy that's good enough.

Like, a dog is very intelligent, a dog can fetch and shake hands because of years of breeding, training, and maybe from having a certain gut biome. Boston Dynamics did not have to understand a single cell of the dog's stomach lining in order to make dog-robots perfectly capable of fetching and shaking hands.

I get that you're saying "yes, we've fully mapped the neurons of a fruit fly and can accurately simulate and predict how a fruit fly's brain's neurons will activate, and can create statistical analysis of fruit-fly behavior that lets us accurately predict their action for much cheaper even without the brain scan, but human brains are unique in a way where it is impossible to make any sort of simulation or prediction or facsimile that is 'good enough' because you also need to first take some bacteria from one of peter thiel's blood boys and shove it in the computer, and if we don't then we can't even begin to make a facsimile of intelligence". I just don't buy it.


“AGI” isn’t a thing and never will be. It fails even really basic scrutiny. The objective function of a human being is to keep its biological body alive and reproduce. There is no such similar objective on which a ML algorithm can be trained. It’s frankly a stupid idea propagated by people with no meaningful connection to the field and no idea what the fuck they’re talking about.


We will look back on this, and in a decade's time the early OpenAI employees (who sold) will speak out in documentaries and movies and admit that "AGI" was a period of easy dumb money.


The "Silenced No More Act" (SB 331), effective January 1, 2022, in California, where OpenAI is based, limits non-disparagement clauses and employer retribution, likely making that practice illegal in California. But I am not a lawyer.


Even if it's illegal, you'll have to fight them in court.

OpenAI will certainly punish you for this and most likely make an example out of you, regardless of the outcome.

The goal is corporate punishment, not the rule of law.


OpenAI never enforced this, removed it, and admitted it was a big mistake. I work at OpenAI and I'm disappointed it happened but am glad they fixed it. It's no longer hanging over anyone's head, so it's probably inaccurate to suggest that Calvin's post is positive because he's trying to protect his equity from being taken. (though of course you could argue that everyone is biased to be positive about companies they own equity in, generally)


> It's no longer hanging over anyone's head,

The tender offer limitations still are, last I heard.

Sure, maybe OA can no longer cancel your vested equity for $0... but how valuable is (non-dividend-paying) equity you can't sell? (How do you even borrow against it, say?)


Nope, happy to report that was also fixed.

(It would be a pretty fake solution if equity cancellation was halted, but equity could still be frozen. Cancelled and frozen are de facto identical until the first dividend payment, which could take decades.)


So OA PPUs can now be sold and transferred without restriction to arbitrary buyers, outside the tender offer windows?


No, that's still the same.


Then how was that "fixed"?


Maybe I misinterpreted "can't sell" - I thought the implication was that even if they said they wouldn't cancel equity outright, they could still exercise power to freeze it out of tender offers, which would have a similar chilling effect. By "fixed" I meant to clarify that not only will they not cancel equity, there's no loophole where they'd specifically freeze it out of participating in tender offers.


> there's no loophole where they'd specifically freeze it out of participating in tender offers.

Again, who said anything about a 'specific loophole'? Needing permission to participate in a tender (which is the only way to sell) is not a 'loophole', and the threat is always there on the table. So again: how was that 'fixed'? Should I interpret your comment as implying that the tender threat of being frozen out is not fixed?

Certainly your fellow OA employee in the other comment seems to think it's still on the table, because he is arguing that the threat is fine and harmless since it has never been exercised, which would seem to imply that it's still there...


Also work at OpenAI. Every tender offer has made full payouts to previous employees. Sorry to ruin your witch hunt.


I think the fact that you consider that a defense is a good illustration of why I had to ask that question. ("Yes, the gun is on the table, but the trigger has never been pulled. Sorry to ruin your witch hunt.")


Here's what I think: while Altman was busy trying to convince the public that AGI was coming in the next two weeks, with vague tales that were equally ominous and utopian, he (and his fellow leaders) was extremely busy trying to turn OpenAI into a product company with some killer offerings, and from the article, it seems they were rather good and successful at that.

Considering the high stakes, money, and undoubtedly the egos involved, the writer might have acquired a few bruises along the way, or might have lost some political infights (remember how they mentioned building multiple Codex prototypes; it must've sucked to see someone else's version chosen instead of your own).

Another possible explanation is that the writer just had enough: enough money to last a lifetime, a newly started family, a mark made on the world, and no longer the compulsion (or ability) to keep up with methed-up fresh college grads.


> remember how they mentioned they built multiple Codex prototypes, it must've sucked to see some other people's version chosen instead of your own

Well it depends on people’s mindset. It’s like doing a hackathon and not winning. Most people still leave inspired by what they have seen other people building, and can’t wait to do it again.

…but of course not everybody likes to go to hackathons


> OpenAI is perhaps the most frighteningly ambitious org I've ever seen.

That kind of ambition feels like the result of Bill Gates pushing Altman to the limit and Altman rising to the challenge. The famous "Gates demo" during the GPT‑2 days comes to mind.

Having said that, the entire article reads more like a puff piece than an honest reflection.


> There's no Bond villain at the helm

We're talking about Sam Altman here, right, the dude behind Worldcoin? A literal Bond-villainesque biological data harvesting scheme?


It might be one of the cover stories for a Bond villain, but they have lots of mundane cover stories. Which isn't to say you're wrong, I've learned not to trust my gut in the category (rich business leaders) to which he belongs.

I'd be more worried about the guy who tweeted “If this works, I’m treating myself to a volcano lair. It’s time.” and more recently wore a custom T-shirt that implies he's like Vito Corleone.


> I'd be more worried about the guy

Or you could realize what those guys all have in common and be worried about the systems that enable them because the problem isn't a guy but a system enabling those guys to become everyone's problem.

I don't mind "Vito Corleone" joking about a volcano lair. I mind him interfering in my country's elections and politics. I shouldn't have to worry about the antics of a guy building rockets that explode and cars that can chop off your fingers because I live in a country that can protect me from those things becoming my problem, but because we have the same underlying systems I do have to worry about him because his political power is easily transferrable to any other country including mine.

This would still be true if it were a different guy. Heck, Thiel is landing contracts with his surveillance tech in my country despite the foreign politics of the US making it an obvious national and economic security risk and don't get me started on Bezos - there's plenty of "guys" already.


Sure, but "the systems" were built by such people and are mere evolutions of the previous "I have a bigger stick" of power politics from prior to the industrial revolution.

Not that you're wrong about the systems, just that if it was as easy as changing these systems because we can tell they're bad and allow corruption, the Enlightenment wouldn't have managed to mess up with both Smith and Marx.


I don't think it's accurate to say that anyone messed up with Smith or Marx. Smith didn't anticipate modern finance capitalism and his musings apply perfectly well to earlier iterations of capitalism - though he'd probably have had a stroke if you showed him the finance economy. Marx didn't anticipate capitalism's resilience but he had very little to do with the ideologies built on his work let alone their implementations (or attempts thereof).

That said, "I have a bigger stick" wasn't all we had before the present systems. I'm not a primitivist but I think it's a thought-terminating cliché to just look at what we have now and what came immediately before and decide that the local plateau is the best we can have.

Humanity invests a ton of resources in enforcing the status quo of power dynamics - both through overt force (be it military violence or the mere threat of violence necessary to assert contracts and claims to private property) and through more subtle means (e.g. narrative framing in education, news media and entertainment). Maintaining these systems takes immense resources and effort. But in moments of crisis our cooperative human nature can shine through until order is restored and we are ushered back into learned helplessness and mutual distrust as "the authorities" take over.

The problem isn't that the systems "allow corruption". The systems are inherently bad and corrupting. We build hierarchies of absolute power and then try to come up with solutions for the problems those hierarchies cause in the first place.

The problem isn't who rules. The problem is having rulers. Quoth Bakunin: "the people will feel no better if the stick with which they are being beaten is labelled the 'peoples stick'. [..] not even the reddest republic - can ever give the people what they really want."


There is lots of rationalizing going on in his article.

> I returned early from my paternity leave to help participate in the Codex launch.

10 years from now, the significance of having participated in that launch will seem ridiculously small (unless you tell yourself it was a pivotal moment of your life, even if it objectively wasn't), whereas those first weeks with your newborn will never come back. Kudos to your partner, though.


The very fact that he did this exemplifies everything that is wrong about the tech industry and our current society. He's praising himself for this instead of showing remorse for his failure as a parent.


Failure as a parent and a partner. Pregnancy and childbirth is traumatic both physically and emotionally. Basically abandoning your partner to deal with that alone is diabolical.


Odd take. OpenAI gives 5 months of paternity leave and the author is independently wealthy. What difference does it make between spending more time with a 4-month-old vs a 4-year-old? Or is your prescription that people should just retire once they have children?


> It is fairly rare to see an ex-employee put a positive spin on their work experience.

The opposite is true: Most ex-employee stories are overly positive and avoid anything negative. They’re just not shared widely because they’re not interesting most of the time.

I was at a company that turned into the most toxic place I had ever worked due to a CEO who decided to randomly get involved with projects, yell at people, and even fire some people on the spot.

Yet a lot of people wrote glowing stories about their time at the company on blogs or LinkedIn because it was beneficial for their future job search.

> It's just a good reminder that the overwhelming majority of "why I left" posts are basically trying to justify why a person wasn't a good fit for an organization by blaming it squarely on the organization.

For the posts that make HN I rarely see it that way. The recent trend is for passionate employees who really wanted to make a company work to lament how sad it was that the company or department was failing.


> The opposite is true: Most ex-employee stories are overly positive and avoid anything negative. They’re just not shared widely because they’re not interesting most of the time.

Yeah I had to re-read the sentence.

The positive "Farewell" post is indeed the norm. Especially so from well known, top level people in a company.


> It is fairly rare to see an ex-employee put a positive spin on their work experience.

Sure, but this bit really makes me wonder if I'd like to see what the writer is prepared to do to other people to get to his payday:

"Nabeel Quereshi has an amazing post called Reflections on Palantir, where he ruminates on what made Palantir special. I wanted to do the same for OpenAI"


Well, as a reminder, OpenAI has a non-disparagement clause in their contracts, so the only thing you'll ever see from former employees is positive feedback.


I’m not saying this about OpenAI, because I just don’t know. But Bond villains exist.

Usually the level 1 people are just motivated by power and money to an unhealthy degree. The worst are true believers in something. Even something seemingly mild.


Allow me to propose a different rationalization: "yes I know X might damage some people/society, but it was not me who decided, and I get lots of money to do it, which someone else would do if not me."

I don't think people who work on products that spy on people, create addiction or worse are as naïve as you portrayed them.


> It is fairly rare to see an ex-employee put a positive spin on their work experience.

FWIW, I have positive experiences about many of my former employers. Not all of them, but many of them.


Same here. If I wrote an honest piece about my last employer, it would sound very similar in tone to what was written in this article


> "everyone I met there is actually trying to do the right thing" - yes! That's true at almost every company that ends up making morally questionable decisions!

The operative word is “trying”. You can “try” to do the right thing but find yourself restricted by various constraints. If an employee actually did the right thing (e.g. publish the weights of all their models, or shed light on how they were trained and on what), they get fired. If the CEO or similarly high-ranking exec actually did the right thing, the company would lose out on profits. So, rationalization is all they can do. “I'm trying to do the right thing, but.” “People don't see the big picture because they're not CEOs and don't understand the constraints.”


> It goes like this: we're the good guys. If we were evil, we could be doing things so much worse than X! Sure, some might object to X, but they miss the big picture: X is going to indirectly benefit the society because we're going to put the resulting money and power to good use. Without us, you could have the bad guys doing X instead!

This is a great insight. But if we think a bit deeper about why that happens, I land on the fact that nobody is forcing anyone to do the right thing. Our governments and laws are geared more towards preventing people from doing the wrong thing, which of course can only be identified once someone has done it and we can see the consequences and prove that it was indeed wrong. Sometimes we fail to do even that.


We already have bad guys doing X right now (literally, not the placeholder variable)


> It is fairly rare to see an ex-employee put a positive spin on their work experience.

I liked my jobs and bosses!


Most posts of the form "Reflections on [Former Employer]" on HN are positive.


I agree with your points here, but I feel the need to address the final bit. This is not aimed personally at you, but at the pattern you described - specifically, at how it's all too often abused:

> Sure, some might object to X, but they miss the big picture: X is going to indirectly benefit the society because we're going to put the resulting money and power to good use. Without us, you could have the bad guys doing X instead!

Those are the easy cases, and correspondingly, you don't see much of those - or at least few are paying attention to companies talking like that. This is distinct from saying "X is going to directly benefit the society, and we're merely charging for it as fair compensation of our efforts, much like a baker charges you for the bread" or variants of it.

This is much closer to what most tech companies try to argue, and the distinction seems to escape a lot of otherwise seemingly sharp people. In threads like this, I surprisingly often end up defending tech companies against such strawmen - because come on, if we want positive change, then making up a simpler but baseless problem, calling it out, and declaring victory, isn't helping to improve anything (but it sure does drive engagement on-line, making advertisers happy; a big part of why press does this too on a routine basis).

And yes, this applies to this specific case of OpenAI as well. They're not claiming "LLMs are going to indirectly benefit the society because we're going to get rich off them, and then use that money to fund lots of nice things". They're just saying, "here, look at ChatGPT, we believe you'll find it useful, and we want to keep doing R&D in this direction, because we think it'll directly benefit society". They may be wrong about it, or they may even knowingly lie about those benefits - but this is not trickle-down economics v2.0, SaaS edition.


> That's true at almost every company that ends up making morally questionable decisions! There's no Bond villain at the helm. It's good people rationalizing things

I mean, that's a leap. There could be a Bond villain who sets up incentives such that the people who rationalize the way they want are the ones who get promoted / have their voices amplified. Just because individual workers generally seem like they're trying to do the best thing doesn't mean the organization isn't set up specifically and intentionally to make certain kinds of "shady" decisions.


  > It's just a good reminder that the overwhelming majority of "why I left" posts are basically trying to justify why a person wasn't a good fit for an organization by blaming it squarely on the organization.
It's also performance art to acquire attention


> All the 'box office records' since then are the result of charging way more to a continually plummeting audience size.

I don't think that going to the movies has gotten more expensive in real terms. It's just that the records are usually not adjusted for inflation, so a film with the same audience and the same inflation-adjusted admission price will appear to make 80% more at the box office compared to 2002.
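A quick back-of-envelope sketch of that effect (the ticket price and inflation factor here are rough assumptions for illustration, not exact figures):

```python
# Hypothetical numbers: same audience, same real (inflation-adjusted) ticket
# price; only the general price level changes between 2002 and today.
tickets_sold = 50_000_000    # assumed constant audience
price_2002 = 5.81            # approx. US average ticket price in 2002 (USD)
cpi_factor = 1.8             # assumed cumulative inflation since 2002

gross_2002 = tickets_sold * price_2002
gross_today = tickets_sold * (price_2002 * cpi_factor)

# The nominal gross is ~80% higher despite identical real performance,
# which is enough to set a "record" without adjusting for inflation.
ratio = gross_today / gross_2002  # equals cpi_factor
```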


In fact... it looks like they've slightly dropped.

https://www.reddit.com/r/boxoffice/comments/14kznfv/movie_ti...


Dropped? You've produced a graph showing they've been on the increase for the past 30 years.


And where the heck can you get a movie ticket for $11? A discount matinee viewing at my local theaters is from $17 to $20. $20-$23 if you go in the evening. The lowest price ticket, a Tuesday noon showing, is $12.

I don't recall the last time I went to the movies with my wife and spent less than $60 (tickets, a shared soda, two snacks).


> And where the heck can you get a movie ticket for $11?

Places where real estate is cheaper than wherever you live.


My local Cinemark has tickets for $5.50, $8.50... you're probably in a premium market.


$11 sounds about right to me. It's an average so some areas will be higher and others lower but $23 sounds awful.


It's about EDTA. It can be legitimately used to treat heavy metal poisoning, plus some other things. Some people (who are probably misguided) want to self-medicate. The FDA won't let you. Hence, drama.


yeah, because unless you legitimately have heavy metal poisoning, the side effects DEFINITELY aren't worth it


Probably, but the process doesn't work that way. The default is that you can't sell medication to people, period. Some pharmaceutical company applied to have a specific form of EDTA approved as a prescription drug, and that was that.

Separately from this, substances that meet the criteria of being "natural" can be sold as supplements as long as you don't claim they cure anything. EDTA is naturally-occurring and you can buy it as a supplement in the US, although the FDA has some beef with this, which I think is what the original remark might be alluding to.

EDTA is also a common food additive and a laboratory reagent, so people who want to use it can buy it easily, which makes the whole debate basically performance art.


So in summary, the FDA prevents you from marketing something as a medicine unless you have gone through the approval process and developed all the regulatory apparatus around a medicine (e.g. packaging, suppliers, prescription guidelines, etc)?


Yes. Look, I'm not arguing this is bad, I'm just trying to respond to the original question and capture the essence of the debate.

There are three pertinent points: (1) it's EDTA; (2) it's not that EDTA is safe or not safe, it's that no one applied to have it approved as an OTC medication; (3) you can still (probably) sell EDTA as a supplement in the US, but the FDA grumbled about it, which angered various chelation cranks.


Iron, copper, zinc, cobalt, manganese and selenium are "heavy metals."


EDTA removes all metals. It's simply a compound that forms water-soluble complexes with metal ions, removing them from the body.

The way idiots kill their children with it is that among other metals, it removes calcium ions, and those are necessary for life, with low enough concentration in blood eventually resulting in cardiac arrest.

So said idiots have an autistic child, read junk online that tells them that "toxins" caused this, find the compound that is legitimately used to remove toxins, and administer enough to end the autism. By stopping their child's heart.

I don't particularly like the FDA, but restricting the availability of EDTA is not something I'd criticize.


If you have such parents, you basically lost the game of life without much chance to participate. The only real solution would be to forcibly and permanently take children away from such people, not something I see flying in the US unless physical abuse or pedophilia is involved.

I feel like the basic value of human life has decreased recently. Be it ongoing brutal wars, news pushing doom and gloom 24/7 (covid certainly didn't help), or something similar. A bit like a reversal to medieval times, when cruel public executions were a spectacle for the whole town and the life of an individual was truly worthless.

If that's the case, let the dumb die, including their offspring; just don't let their bills be picked up by society. Extremely cruel, but it seems we are heading that way, and we have this little thing called overpopulation. Extreme freedom with extreme consequences.


That was my first reaction to this article too: "Ok, gutting the FDA is bad, but the destruction of some other agencies that are not even mentioned in this article actually has worse consequences. If someone believes the quacks, takes Ivermectin or EDTA and dies, that's fine for me - I hope they at least get a Darwin Award for their effort!". But when you think about people doing this to other people (including but not only children) who can't decide for themselves, it gets much more complicated...


Yeah this is one of those situations where people freak out about their neighbor's behavior and try to change who they are with administrative policy. It's really just counter productive.

I think better would be for people to be more personally picky who they share spaces with.


"Heavy metal" in general is a bad term, but especially when used as a proxy for toxin. There is no universal definition of heavy metal and there is no inherent connection to toxicity in any specific organism.

Then again, pretty much every metal is toxic at some relatively low body-mass concentration, even iron (which actually can and does kill people, especially when children eat adult iron supplements).

Even lovely unreactive gold does have compounds that are toxic.


Wow, that's an interesting rabbit hole: https://en.wikipedia.org/wiki/Heavy_metals.

> Even in applications other than toxicity, no widely agreed criterion-based definition of a heavy metal exists. Reviews have recommended that it not be used. Different meanings may be attached to the term, depending on the context.


Not allowing self medication was probably a mistake.


Because what's the headline you're going to get out of it?

If the headline is "Mark Zuckerberg is amassing your data and you know it's for evil", it's an easy sell. If it's "there's an ecosystem of little-known companies that sell transaction, location and lifestyle data to marketers, journalists, PIs, and police departments alike", it's not exactly the kind of a message that spurs people to action. And yeah, the newspaper that would be breaking the news is a customer too.


I think it's fairly common for technologies to get really good just as they're becoming obsolete. Vacuum tubes, CRTs, optical disks, photographic film... in fact, they're often in some respects better than the early generations of the technology that replaces them.

But OLEDs just have too many advantages where it actually matters. Much lower power consumption, physically more compact (no need for backlight layers), etc.


For me, OLEDs fall into a category exemplified by Anton Gudim's "YES, BUT" comic series.

YES, OLEDs consume less power, offer truer color reproduction, and are physically more compact.

BUT, they are prone to CRT-like burn-in.

SSDs, the same thing.

YES, SSDs are much faster and immune to mechanical failure.

BUT, they tend not to last as long as HDDs due to limited write cycles, and their price per GiB is still much higher.


You might add ICE cars to that list. All kinds of cool stuff being developed around small turbocharged engines and other efficiency gains, excellent transmissions, etc.


That slo-mo video is somewhat misleading, though. The phosphor glows for a good while, so there is a reasonable chunk of the image that's visible at any given time.

The problem in that video is that the exact location the beam is hitting is momentarily very bright, so they calibrated the exposure to that and everything else looks really dark.


The phosphor still drops off very quickly [0][1][2], roughly within a millisecond. That’s why you would need a 1000 Hz LCD/OLED screen with really high brightness (and strobing logic) to approximate CRT motion clarity. On a traditional NTSC/PAL CRT, 1 ms is just under 16 lines, but the latest line is already much brighter than the rest. The slow-motion recording showing roughly one line at a time therefore seems accurate.

[0] https://blurbusters.com/wp-content/uploads/2018/01/crt-phosp...

[1] https://www.researchgate.net/figure/Phosphor-persistence-of-...

[2] https://www.researchgate.net/figure/Stimulus-succession-on-C...
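For what it's worth, the "just under 16 lines" figure checks out against standard NTSC timing (a rough sanity check with textbook numbers, not taken from the linked sources):

```python
# NTSC: 525 total scanlines per interlaced frame at ~29.97 frames/s,
# giving the familiar ~15.734 kHz horizontal line rate.
lines_per_frame = 525
frames_per_second = 30_000 / 1001                    # ~29.97 Hz
line_rate_hz = lines_per_frame * frames_per_second   # ~15_734 lines/s

# Lines scanned in 1 ms of phosphor persistence: ~15.7,
# i.e. "just under 16 lines".
lines_per_ms = line_rate_hz / 1000
```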


I'm not quite sure what you're saying here. My assertion is that a visible image persists on the screen longer than it appears in the slo-mo clip. You can just point a camera with an adjustable shutter speed at a CRT and see it for yourself. Here's an example (might need to copy the URL and open in a new tab, they don't like hotlinking):

https://i.sstatic.net/5K61i.png

The brightly-lit band is the part of the frame scanned by the beam while the shutter was open. The part above is the afterimage, which, while not as bright, is definitely there.


That link shows an error with Access Denied to me. I didn’t deny that an afterimage is there. I meant to point out that the brightest part by far, which is what is most prominently perceived by the eye, isn’t much more than one scanline, in SD.


> The part above is the afterimage, which, while not as bright, is definitely there.

Yes, it's there, but it's much less bright than the scanned area, so it will be hardly perceptible relative to the bright part. The receptors in the eye will hardly respond to it after being excited so strongly by the bright part.


I'm not sure about this calculation, though. Phosphor decays exponentially with a time constant of roughly 5 ms (according to HP [1]). This means that when a new frame comes at a 60 Hz refresh rate, 10-15% of the previous frame's excitation is still present. This means there is a considerable amount of nonlinearity, hence the performance is even worse than that of 10 ms LCD/OLED displays.

Genuine question: why do you think CRTs are better?

[1] https://hpmemoryproject.org/an/pdf/an_115.pdf


That HP reference is from 1970; CRTs did improve over time. The references I gave show that the intensity drops to below 10-15% within about a millisecond. The difference with LCD/OLED displays is that the latter are sample-and-hold, meaning that they show the image at full brightness for the duration of the whole frame. Their pixel response time may be faster than CRT phosphor persistence, but that is less relevant. The problem with LCD/OLED is that they hold the picture for the duration of the frame, which means that a depicted moving object that is supposed to move smoothly during the duration of a frame, is shown as not moving for that duration, which the eye perceives as motion blur. That motion blur is significantly reduced on CRTs, because they show the object only for a fraction of the frame duration at high brightness, as if under a stroboscope, which makes it easier for the eye (or brain) to interpolate the intervening positions of the object.

> Genuine question: why do you think CRTs are better?

CRTs are worse in most aspects than modern displays, but they are better in motion clarity. As to why I think that: I used both in parallel for many years. The experience for moving objects is very different. It is a well-known drawback of sample-and-hold display technologies. And it is supported by the more systematic analyses done by the likes of Blur Busters.
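The sample-and-hold blur is easy to estimate: during smooth pursuit the eye sweeps across the screen while the displayed object stays put for the hold time, so the smear width is roughly tracking speed times persistence. A sketch with assumed numbers:

```python
def blur_px(speed_px_per_s: float, persistence_ms: float) -> float:
    """Approximate retinal smear width (pixels) for a tracked moving object."""
    return speed_px_per_s * persistence_ms / 1000.0

# Assumed example: an object crossing a 1920-px-wide screen in 2 seconds.
speed = 960.0  # px/s

hold_60hz = blur_px(speed, 1000.0 / 60)  # full-frame hold: ~16 px of smear
crt_flash = blur_px(speed, 1.0)          # ~1 ms phosphor flash: ~1 px
```

This is why a short, bright flash per frame (CRT-style, or strobed backlights) looks so much sharper in motion than the same refresh rate with full-frame hold.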


>The problem with LCD/OLED is that they hold the picture for the duration of the frame

Not necessarily. For example on VR headsets the LCD/OLED will only hold the picture for 10% of the frame.


Yeah, they do backlight strobing (LCD) or black frame insertion (OLED), to reduce blurring during smooth eye movements, at the cost of overall screen brightness. I actually think small CRTs would be perfect for VR headsets in this regard, as they naturally have very short frame persistence.

One likely problem for battery powered headsets is the (I believe) relatively high CRT power draw. Another is probably the fact that they aren't used for anything else anymore, meaning CRT development has stopped a long time ago. There were quite small CRTs in the past for special applications, but probably not as small as is optimal for modern VR headsets. Both for optics and weight and space reasons.


> Genuine question: why do you think CRTs are better?

They have many disadvantages, but an advantage is that CRTs mostly remove the "persistence blur" induced by smooth pursuit eye movements on sample-and-hold displays like LCD and OLED. Here is an explanation:

https://news.ycombinator.com/item?id=42604613


> The phosphor still drops off very quickly [0][1][2], roughly within a millisecond.

It's phosphor chemistry dependent. Different color patches on the same glass would decay at different rates even. But yeah, 1 ms is a good lower bound, although when I last researched this, it was definitely the best-case scenario for CRTs. I'm fairly sure the ~500 Hz OLEDs that are already floating around are beating the more typical CRTs of old.

> That’s why you would need a 1000 Hz LCD/OLED screen with really high brightness (and strobing logic) to approximate CRT motion clarity.

At 1000 Hz you wouldn't need the strobing anymore (I believe?), that's the whole point of going that fast. We're kinda getting there btw! Hopefully with HDMI 2.2 out, we'll see something cool.

> On a traditional NTSC/PAL CRT, 1 ms is just under 16 lines, but the latest line is already much brighter than the rest.

That doesn't really math for me. NTSC would be 480 visible lines at 60 Hz, and so 480 lines / ~16.6 ms = 28.8 lines/ms (6% of the screen). Note that of course PAL works out to the same number: 576 lines / 20 ms = 28.8 lines/ms (just 5% of the screen here though!).


I definitely like my new 240Hz 4K OLED HDR monitor, though. They're getting there! The data rate it's pushing through the DisplayPort cable for uncompressed 4K HDR is something like 80 Gb/s. Absolutely mind-boggling. Huge upgrade from my 1440p 165Hz IPS monitor that had huge amounts of smearing when playing games.


What model is your new monitor?


The ASUS PG27UCDM 26.5" 4K UHD (3840 x 2160) 240Hz gaming monitor [0], paired with an RTX 5090 for my home desktop. I got a USB switcher (for peripherals) and keep it on my standing desk, where I also plug in my work laptop with a USB-C to DisplayPort cable. Only 60Hz on the work laptop, but I really like having a quad-monitor setup in a T shape (three 27” monitors, plus the laptop with its screen open below the central monitor, which is the OLED). It’s great for both productivity and gaming. I turned off HDR for work, though.

The only annoying thing is every couple hours it asks me to run a 7 minute pixel refresh cycle to avoid burn in, but according to the dashboard I run it every 2.5 hours or so when I go on breaks, so I think I’m good.

Overall the monitor is just fantastic, my LAN party buddies and I dreamed about OLEDs like this back in 2003 and kept saying it was “just around the corner”. The biggest thing is in dark scenes in games there’s absolutely zero noticeable smearing.

[0] https://www.microcenter.com/product/689939/asus-pg27ucdm-265...


I hear OLEDs aren’t ideal for text display (and, ergo, productivity uses). Does this not match your experience with this monitor?


And still it was possible, as a side-channel attack, to recover a perfect image just by looking at the reflected brightness of a screen.


The Chinese medicinal herb you're thinking about is sweet wormwood, from which we isolated artemisinin. Artemisinin isn't a suppressed secret. It's one of the major treatments for malaria and it netted its discoverer a Nobel prize.

The drug the article is talking about contains artemisinin in combination with another substance.

We know how to deal with malaria, so this isn't some story of Big Pharma hiding the truth. The disease used to be endemic in the US and in Europe. Better treatments save lives, but eradication hinges on economic and political factors... which are in turn not helped by malaria.


> "We know how to deal with malaria, so this isn't some story of Big Pharma hiding the truth. The disease used to be endemic in the US and in Europe"

Malaria is still a public health concern in Africa. And that's precisely where Bill Gates does not want to see that plant exported, because, unlike Big Pharma products, it is a cheap solution.

I do not think we are talking about the same plant, because a variant of "sweet wormwood" exists in Africa. Unfortunately I can't find the documentary I saw some years ago about this. It was made by a few French investigative journalists.


Having lived in several African countries and studied Sino-African relations, I can say with certainty that both the Chinese and Africans couldn’t give less of a shit about what Bill Gates wants if there is a product that Chinese people want to sell and Africans want to buy.


No, that's not how trade functions. You can't sell what you want, where you want.

Trade is subject to lobbying (to say the least).


Sure, but the US or US-based businessmen are hardly the only actors lobbying, and if you look at the trade balances of basically all African countries, Chinese interests are far more successful in exploiting African markets than US concerns. Add to this that several African countries have basically no diplomatic relations with the US and/or are openly hostile to it, and it makes no sense at all that they would forego a cheap and self-sufficient remedy for malaria (if it existed) just to please Big Pharma.


TFA is literally about a wormwood-related drug being used to treat babies in Africa.


Similarly to how most web dev isn't exactly on the frontiers of computer science, a lot of day-to-day PCB design isn't about cutting-edge analog or radio stuff. It's just putting the same MCU or SoC on differently-shaped boards over and over again.

If you can reliably automate that, it's still a pretty big deal.


But this is more copy+paste than automating a design process.


While I think that AI tools can be quite useful for coding, PCB design, and other tasks like that, the setup of this experiment makes it really hard for the LLM to fail.

The author's prompt is basically already a meticulous specification of the PCB, even proactively telling the LLM to avoid certain pitfalls ("GPIO19 and GPIO20 on the ESP32-S3 module are USB D- and D+ respectively. Make sure these nets are labeled correctly so that differential routing works"). If you had no prior experience building that exact thing, writing that spec would be 95% of the work.

Anyway, I don't think the experiment is wrong, but it's also not exactly vibe-PCBing!


> If you had no prior experience building that exact thing, writing that spec would be 95% of the work.

Nowadays most mainstream LLMs support pre-bundled prompts. GitHub Copilot even made it a major feature and tools like Visual Studio Code have integrated support for prompt files.

https://docs.github.com/en/github-models/use-github-models/s...

Also, LLMs can generate prompt files too. I recommend you set aside 10 minutes of your time to vibe-code a prompt file for PCB generation, and then try to recreate the same project as OP. You'd be surprised.

> Anyway, I don't think the experiment is wrong, but it's also not exactly vibe-PCBing!

I don't agree. Vibecoding doesn't exactly mean a naive approach to implementation. It just means you provide higher-level inputs to generate whatever you're creating.


> Also, LLMs can generate prompt files too.

Sure, but the utility of that for PCB design wasn't demonstrated in the article. This is an expert going out of his way to give the LLM a task it can't fumble (and still does, a bit).


> Sure, but the utility of that for PCB design wasn't demonstrated in the article.

Forget about the article. Try it yourself. Set aside 5 or 10 minutes to ask any LLM of your choice to generate a LLM prompt to generate PCBs. Iterate over your prompt before using it to generate your PCB. See the result for yourself.


Yeah, it's trash. Just as one would expect.


It's the era of jack of no trades, master of all.


It seems that "vibe X" just means "using LLMs" now, regardless of what the original intent of the term was.


I do think it's an SFBA / generational bubble. We have plenty of boring, expensive software projects that someone will always bring up in a HN thread. For example, every time there's a thread on PCB design, you have some folks talking about Cadence. What's there to say about Cadence? Well, first and foremost, it costs a lot. Otherwise, it lets you design PCBs. But there are people here who pay for it, use it, and want to talk about it.


Right but having access to a Cadence license is considered "elite" (it means you are a Real Engineer), while having to use mssql server means you're kind of a schlub (who probably has to work for a real business, that makes money but is super boring, with no equity, among people who don't understand any of this status hierarchy at all).


I work with charities and non-profits, they tend to use Microsoft stack and things like Salesforce due to the large charity discounts and readily available support. I get to work with nice people doing meaningful things.


Sorry, parody probably doesn't come across well. I was trying to ridicule the kind of elitism that causes mssql to be "invisible" in many internet bubbles.


haha sorry it just seemed such a plausible hn comment

