Alifatisk's comments | Hacker News

Can't believe the applet API has existed till now. I thought they phased it out a long time ago.

That's sad news, I used it plenty of times when working on group projects. It was quite useful when we were all working from home and someone got stuck on something and wanted to discuss it. We quickly jumped on a call and shared the "Code With Me" invite. Do they have an alternative to this now? Or are we forced to switch to VS Code Live Share in such instances?

The title has space for more description, maybe add "macOS desktop automation with Lua"?

Oh, so I wasn't the only one to lose my pro subscription all of a sudden. I thought it was weird because it was supposed to expire this summer, not now.

You cut out something that changed the message entirely

I thought the edit window was 15 minutes, but it seems it is an hour, so I edited to restore the "pilot"

The video is an hour and a half long. It's a whole documentary. Very detailed and well thought out, but too long for me at the moment. I'll see if it's possible to get a summary somehow.

I haven't watched the whole thing either, but basically

https://en.wikipedia.org/wiki/DRAM_price_fixing_scandal

+ showing that the people responsible have only been promoted within those companies

+ pointing out that the 3 companies at the heart of it (Micron, Samsung, SK Hynix) now have 95% of the market share

+ the hypothesis that they're doing it again (or rather, have continued doing it, "business as usual")


You can use Gemini or NotebookLM to summarize it

I haven't watched the video, but I went way too far into the weeds of the RAM crisis.

I am not sure what the video suggests. This is my own understanding of things, after I got way too invested in why OpenAI needs all of this RAM all of a sudden. (On a random Tuesday)

My understanding, TLDR: The Stargate project had OpenAI, Oracle, SoftBank, etc.

SoftBank got the money from a Japanese bank loan [0] at low interest rates and actually scrambled to find the $20 billion (combined with Oracle, they committed around $500 billion)

(Btw, the datacenter financing is being done in a similar fashion by Oracle)

Almost all of that money, when given to OpenAI, was used/(will be used?) to commit 20% of the world's RAM supply, at a more expensive price, because these companies just package RAM in a different order to get "AI RAM". And then Micron shuts down its consumer brand (Crucial).

This has now caused RAM prices to spike to 5 times the cost within a couple of months. The inflation is also happening in hard drives and NAND in general.

The largest impact I can see is that even companies like Google were scrambling to find RAM. I find this to be one of the larger reasons why they might need so much RAM all of a sudden. I mean, Google and Anthropic needed RAM, but not 20% of it, and not committed in such a way, and I am not sure if datacenters are even being built for the RAM to be stored in [1]

OpenAI's datacenter in Argentina, for example, is operated by a shady company that appeared like 1-2 years ago IIRC. So a $500 billion project is just picking any random companies ... Yeah, no. I believe they don't trust it themselves, especially when a company is scrambling for money.

All of this feels very cartel/monopoly-ish to me: push the competitors, or the people running open source models, out of the market. Another benefit of it for OpenAI was that we normal everyday people get impacted too, and I am sure that when they made such a large decision, they must have thought about that internally. But we all know the morality of OpenAI now, after the DoD deal.

But it seems Google and the other companies aren't that impacted by it all. Only the average consumer and hosting providers (thus seeing OVH and Hetzner raise prices, for example). The average AWS/GCP/Azure makes enough money that they might not even raise prices for some time, and they'll be fine, with the additional benefit that more people worried about rising prices will move to Azure/GCP/AWS even more.

Edit: Gamers are being pushed out of consoles and everything too, and some people, seeing the cloud connection and AWS coming out and saying "we want gamers on the cloud" (paraphrasing), read it as meaning it's all being done to move everything to the cloud.

I do believe that this might be only half the story, as OpenAI does benefit from everything moving to the cloud (somewhat), but it's done even more to prevent competition across the whole genre as well.

I believe they thought about it and treated it as a plus, but above all it helped them maintain their flimsy lead in AI models, as more and more catch up, by gaining a more monopolistic position and stifling competition through 5x price rises. Gamers and normal people were just the largest casualty in this crossfire.

I was thinking this past month, when I found all this, that damn, OpenAI's morality sucks and they did all of it on purpose.

And then they had the Department of Defense deal and the whole controversy surrounding it, so yeah, that too.

OpenAI doesn't want your benefit. It wants its profit, and when these are in conflict, OpenAI doesn't care a cent about you, no more than the cent that you give it.

[0]: https://www.bloomberg.com/news/articles/2026-03-06/softbank-...

[1]: https://www.shacknews.com/article/148208/oracle-openai-texas...


>Almost all of that money, when given to OpenAI, was used/(will be used?) to commit 20% of the world's RAM supply, at a more expensive price, because these companies just package RAM in a different order to get "AI RAM". And then Micron shuts down its consumer brand (Crucial).

>[...]

>All of this feels very cartel/monopoly-ish to me: push the competitors, or the people running open source models, out of the market. Another benefit of it for OpenAI

Nothing you described is actually "cartel/monopoly-ish" beyond "big players have more money to splash around". It's fine to look at that and go "grr, I hate big tech companies", but the claim that "It's not a shortage, it's a cartel" isn't substantiated. The latter implies some sort of malice beyond what could be explained by standard scarcity thinking, e.g. "there isn't enough RAM to go around. We need RAM, so let's stock up".


My point is that there is enough RAM to go around in an ideal world, even with LLMs, but rather that stocking up on RAM gives you so much benefit and leverage over your enemies within this space that you have no reason not to.

So it isn't that there isn't enough RAM to go around (period), but rather an ideology similar to "this town ain't big enough for the two of us" (OpenAI vs Anthropic/Google/Chinese-open-weights-models).

At least that's my understanding of the situation, and I can be wrong about it too, for what it's worth.


Incredible digging. I remember reading comments saying the reason for the price hike was that Sam Altman secretly secured a deal with the few RAM producers, where they promised to reserve a large portion of their production for OpenAI for the next years (I don't remember how long). Supposedly Sam will just put it in a warehouse to collect dust.

> Incredible digging

Thanks. I appreciate your kind words. I was thinking of writing some piece/blog about it, but procrastination is definitely a thing :) But I am just happy that I finally wrote a comment at least explaining all/most of my understanding. That's more than fine for me.

> Incredible digging. I remember reading comments saying the reason for the price hike was that Sam Altman secretly secured a deal with the few RAM producers, where they promised to reserve a large portion of their production for OpenAI for the next years (I don't remember how long). Supposedly Sam will just put it in a warehouse to collect dust.

I do believe that's going to be the case as well. Most of the RAM is probably not needed currently (that's what it feels like), so it's going to sit collecting dust. That, or Oracle/Microsoft will use it in their datacenters as old RAM breaks down, to have some more monopoly, given their close ties to OpenAI.

Even if OpenAI internally sells it at half the market price to Microsoft/Oracle, they still technically turn a profit.

I actually felt too conspiratorial thinking about it when I first discovered it, because I myself was under the assumption that OpenAI actually needed the RAM. But seeing OpenAI's recent dealings with the Department of Defense, I definitely think they did this on purpose.


I would say, just post these kinds of rants on Substack and X or something. Don't LLM-format them, just kind of lay it all out, fix whatever typos, and let loose.

It'd be interesting to just hear some thoughts and opinions from someone who has done some research on the topic in a light way, vs a huge article/documentary.


Aside: YT has an AI summary option (unless the creator opts out). Look for the sparkle button. Personally, it gets me 80% there most days.

Couldn't find the summary button anywhere, but when searching around I found out that you can apparently paste YouTube links into Google AI Studio and summarize them.

That sounds pretty ironic, given the topic.

This is for local models, right? I can't use it on, say, my GLM-5 subscription connected to opencode?

Correct, local models only.

So let me get this straight: OpenAI previously had an issue with LOTS of different models and versions being available. Then they solved this by introducing GPT-5, which was more like a router that put all these models under the hood, so you only had to prompt GPT-5 and it would route to the best suitable model. This worked great, I assume, and made the UI comprehensible for the user. But now they are starting to introduce more different models again?

We got:

- GPT-5.1

- GPT-5.2 Thinking

- GPT-5.3 (codex)

- GPT-5.3 Instant

- GPT-5.4 Thinking

- GPT-5.4 Pro

Who’s to blame for this ridiculous path they are taking? I’m so glad I am not a Chat user, because this adds so much unnecessary cognitive load.

The good news here is the support for a 1M context window; finally it has caught up to Gemini.


The real problem that OpenAI had was that their model naming was completely incomprehensible. 4.5, o3, 4o, 4.1 which is newer than 4.5. It was a complete clusterfuck. The blowback on that issue seems to have led them to misidentify the issue, but nobody was really asking for a single router model. Having a number of sequentially numbered and clearly labelled models is not actually a problem.

Having both o4 and 4o. Really. What the fuck?

There was no o4.

There was o4 mini and 4o mini at least

I just don't understand how this happens. Either there's literally no product management at a cross-product level or there is and they had a meeting where this plan was discussed and someone approved it.

I'm not sure which would be more shocking, especially considering it's a decade old multi-billion dollar company paying top salaries.


There was o4-mini and 4o-mini

> Who’s to blame for this ridiculous path they are taking?

Variability, different pressures and fast progress. What's your concrete idea for how to solve this, without the power of hindsight?

For example, with the codex model: Say you realize at some point in the past that this could be a thing, a model specifically post-trained for coding, which makes coding better, but not other things. What are they supposed to do? Not release it, to satisfy a cleaner naming scheme?

And if then, at a later point, they realize they don't need that distinction anymore, that the techniques that went into the separate coding model are somehow obsolete, what option do you have other than dropping the name again?

As someone else pointed out, the previous problems were around a very silly naming pattern. This seems about as descriptive as you can get, given what you have.


> I’m so glad I am not a Chat user, because this adds so much unnecessary cognitive load.

Yeah having Auto selected is really destroying my cognitive load...


If you find that auto is doing a good job, your expectations must be so low and you must be so uncritical

I don't use ChatGPT for anything serious; it mostly just replaces Google for me.

For anything serious I'm using the API directly or working in Claude Code

Did you really create an account just to make this stupid comment?


You can't keep asking for 100b every 6 months if you don't give the impression of progress

I much prefer this, we can choose based on our use-cases, and people who don’t care can still use Auto.

Well, they have older ones of course. But the current options actual users see are "Auto", "Instant (5.3)", or "Thinking (5.4)". Not that complicated really.

> Who's to blame for this ridiculous path they are taking? I'm so glad I am not a Chat user, because this adds so much unnecessary cognitive load.

Most people have it on auto-select, I'm assuming, so this is a non-issue. They keep older models active likely because some people prefer certain models until they try the new one, or because they can't completely switch all the compute to the new models in an instant.

I guess you still have "Auto" as an option to route your request.

> Then they solved this by introducing GPT-5 which was more like a router that put all these models under the hood so you only had to prompt to GPT-5, and it would route to the best suitable model.

Was this ever explicitly confirmed by OpenAI? I've only ever seen it in the form of a rumor.


It's not a rumor; you can just test it.

Ask the router "What model are you". It will yap on and on about being a GPT-5.3 model (OpenAI's non-thinking models are insufferable yappers that don't know when to shut up).

Ask it now "What model are you. Think carefully". It concisely replies "GPT-5.4 Thinking".

https://openai.com/index/introducing-gpt-5/

> GPT‑5 is a unified system with a smart, efficient model that answers most questions, a deeper reasoning model (GPT‑5 thinking) for harder problems, and a real‑time router that quickly decides which to use based on conversation type, complexity, tool needs, and your explicit intent (for example, if you say “think hard about this” in the prompt)


Thanks.

GPT-5 itself might have solved the problem of having too many different models somewhere in the backend.

This is astonishing; I can't believe it's not receiving more attention. I wonder which state is behind it; it looks too sophisticated for a private entity.

I created my first Outlook account when I was young. Now, 30 years later, it's still my primary account. I can't imagine how I would migrate to another email address if Microslop were to begin ruining Outlook with a forced subscription or something. My digital life is in M$ hands at the moment.

I would start migrating to an email domain that you control. It will come in useful at one point or another.

Yep this. I migrated from Gmail to my own domain years ago. It was painful. Weirdly enough, I think the longest holdouts were my parents, who were still sending email to my Gmail account a decade after I stopped using the address.

I moved my email to Fastmail, and I’ve been very happy ever since. But now that I own the domain, moving to a different provider - if I ever need to - would be trivial.


I moved to Fastmail, set it up with Gmail so I received forwarded emails. Years later there’s still a long tail of senders using my Gmail, but I get the emails forwarded, and only actually log in to Gmail every six months or so.

Act now.

- create a new email address somewhere else, preferably with your own domain

- redirect all your emails to your new account

- send an auto reply: "I don't use this email address anymore, and I may not see this email. My new address is XXX"

The third point is a lie that nudges people into updating their address book a lot faster. If you just silently redirect, they might not even notice. But you can explain in a sentence why you are doing this.

This redirect+auto reply can be left in place forever.


Adding that ideally one would only auto-reply to people who were in their address book at the time, so one is not replying to spammers.
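That gating check is simple to sketch. A minimal version, assuming a plain snapshot of known addresses (all names and addresses below are hypothetical):

```python
# Hypothetical sketch: auto-reply only to senders already in the address
# book at migration time, so spammers probing the old account never
# learn the new address.

KNOWN_CONTACTS = {"mom@example.com", "oldfriend@example.com"}  # assumed snapshot

def should_auto_reply(sender: str) -> bool:
    """Reply only if the sender was a known contact; stay silent otherwise."""
    return sender.strip().lower() in KNOWN_CONTACTS
```

The normalization matters: email addresses arrive with varying case and stray whitespace, so compare a canonical form rather than the raw header value.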

I learned this lesson the hard way with OneDrive.

Now I only use Windows for legacy software that my customers force on me.

Fedora has not just been liberating, but jaw dropping. I actually felt offended that I had wasted so much time on debian-family/ubuntu/mint and windows.


OneDrive was born enshittified.

The concept, way back when, was great. I tried to use it, by a previous name, for replicating / distributing data backups and it always worked great... for a few days, maybe weeks. And then something unrecoverable went wrong, and I had to re-set it up essentially from scratch and it worked great... for a few days, maybe weeks. And then something unrecoverable went wrong.

In the intervening 15+ years, OneDrive has never made my experience of computing better. It has only ever nagged, slowed, and failed. And that was before Microslop went down the x% AI coding path.


I personally like how, when you open any Office doc and do nothing to it before closing, you get the scary warning asking if you want to save your document (to OneDrive), implying all is lost if you select no. I am sure millions of tech-unsavvy people have been conned into sending their data to Bill Gates.

> I can't imagine how I would migrate to another email address

Imapsync is your best friend for this, as far as syncing the new account with the old one.

https://github.com/imapsync/imapsync

https://imapsync.lamiral.info/
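As a rough sketch, an imapsync run might look like this; the hosts, accounts, and passwords below are placeholders, so check your providers' actual IMAP settings first:

```shell
# Dry run first: copies nothing, just reports what would be synced.
imapsync --dry \
  --host1 outlook.office365.com --user1 old@outlook.com --password1 'OLD_PASS' \
  --host2 imap.newprovider.example --user2 new@example.com --password2 'NEW_PASS' \
  --automap

# Rerun without --dry to actually copy. imapsync only transfers messages
# missing on the destination, so it is safe to rerun to pick up stragglers.
```

`--automap` tries to match differently named special folders (Sent, Trash, etc.) between the two providers.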


You could start the process now, before the ruin?

