Hacker News | Skunkleton's comments

The title is misleading and not in the article. This change is for business/enterprise accounts. Also, these are still credit-based. The change is that credits now operate on tokens, like the API, rather than on messages as they used to.

> Customers on existing Plus, Pro and Enterprise/Edu plans should continue to use the legacy rate card. We’ll migrate you to the new rates in the upcoming weeks.

Nope, they buried the lede a bit, but this is coming for _all_ users, even Pro/Plus subscription plans. So you get ChatGPT Pro/Plus benefits, and then effectively $20/$200 in credits for Codex.

> effectively $20/$200 in credits for codex

That's not true.

First of all, there's no dollar amount tied to how many credits you get for a subscription.

Second, if you look at the prices for bundles of _extra_ credits and then do some math on the Codex rate card, you'll see that there's no way they would work out to be the same or similar.


> First of all, there's no dollar amount tied to how many credits you get for a subscription.

I don't understand what you mean here; their official comms say:

     Customers on existing Plus, Pro and Enterprise/Edu plans should continue to use the legacy rate card. We’ll migrate you to the new rates in the upcoming weeks.
To me, anyway, that means that GP was exactly right - they'll give the $20 subscriptions $20 worth of credits, and the $200 subscriptions $200 worth of credits. That is what the "new rates" are!

I think it would be more rational to discount a subscription vs. PAYG (the standard is about 10% in most industries), and I agree in principle with your assertion - they haven't specified what the discount is on credits bought through a subscription plan - but there is no indication that they are going to continue allowing thousands of dollars of credits on a $200/mo plan.

My guess would be a 10% (or similar) discount if you buy a subscription.


1. Look at the new rate card for how many credits are used for each category (that's what the discussion is about).

2. Look at some of your typical sessions for token counts and calculate how many credits that would have been.

3. Look at the rates for extra credits (that's the only place credits have a price).

4. See that you are getting more than $200/mo worth of credits where we have evidence of the value of a credit.

If that doesn't clear it up, then I can't help, sorry.
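As a worked sketch of those four steps (every constant below is a made-up placeholder, not OpenAI's actual rate card or credit price - the point is to plug in your own numbers):

```python
# Hypothetical worked example of steps 1-4; all constants are
# illustrative placeholders, not OpenAI's actual rate card.
CREDITS_PER_M_INPUT = 5        # assumed rate-card entry
CREDITS_PER_M_OUTPUT = 25      # assumed rate-card entry
USD_PER_EXTRA_CREDIT = 0.10    # assumed extra-credit bundle price

def session_cost_usd(input_tokens, output_tokens):
    """Credits a session would burn, priced at the extra-credit rate."""
    credits = (input_tokens / 1e6 * CREDITS_PER_M_INPUT
               + output_tokens / 1e6 * CREDITS_PER_M_OUTPUT)
    return credits * USD_PER_EXTRA_CREDIT

# A month of typical daily sessions, compared against the $200 plan.
monthly_usage = [(2_000_000, 400_000)] * 30   # (input, output) tokens/day
total = sum(session_cost_usd(i, o) for i, o in monthly_usage)
print(f"implied monthly value: ${total:.2f}")
```

With these placeholder rates the implied value is $60/mo; substitute the real rate card and your own session token counts to see which side of $200 you land on.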


> effectively $20/$200 in credits for codex

So, 1.3ish million tokens for Codex? Going by the token pricing here: https://openai.com/api/pricing/
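The arithmetic behind that estimate, as a one-liner (the per-million-token rate is an assumed blended figure for illustration, not a number from that page):

```python
# Back-of-envelope check of the "1.3ish million tokens" figure.
usd_per_m_tokens = 15.0   # assumed blended $/1M tokens
budget = 20.0             # the Plus subscription price
tokens = budget / usd_per_m_tokens * 1e6
print(f"{tokens / 1e6:.2f}M tokens")   # → 1.33M tokens
```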


The TurboQuant paper is from April 2025. I'm sure the major labs knew about it on, or even before, the day it was published. Any impact it had would have been a year ago. Yet I keep seeing these posts and discussions completely ignoring this.

Can we please start talking about this in that context? We already know what TurboQuant will do to DRAM demand. We already know what it will do to context windows. There is no need to speculate. There is no need to panic sell stocks.


It could also be that master's degrees concentrate in fields with lower compensation. Teachers are in high demand, yet they still tend to have something beyond an undergrad degree.


I don't think it's odd. Sacrificing deep understanding, and delegating that responsibility to others, is risky. In more concrete terms, if your livelihood depends on application development, you have concrete dependencies on platforms, frameworks, compilers, operating systems, and other abstractions without which you might not be able to perform your job.

Fewer abstractions, deeper understanding, fewer dependencies on others. These concepts show up over and over and not just in software. It's about safety.


My experience is that "asking staff for ideas" does not lead to successful products. Sometimes, sure, but in general it does not.


I've never seen a roadmap planning process that didn't involve some component of asking departments and teams what needs to be done.

To the extent you have successful products, it's because you have product managers and engineers and data scientists and depending on the product, integration/forward deployed staff. These should be the people with a view to how the product needs to meet the needs of future customers, the challenges faced by existing customers, and the technical components needed to get there. I'm not saying you encourage them to just spitball ideas from ignorance, I'm saying you solicit their expertise on the limits and needs of your products, systems, tools, processes, messaging etc.


This depends on your goals. If your goal is to drive efficiency into your processes, pay down tech debt, or fix pain points for customers of your existing products, sure. Most people at your company will have thoughts, and lots of them will have good ideas.

If your goal is to pivot the company into new verticals, or to develop an entirely new product, then "asking staff for ideas" isn't a likely way to succeed.


I didn't add a why. Here is why.

Most of the staff doesn't have the visibility into the business to understand what may or may not make money. You can have a great idea, even one that could be a successful product, and it could still be a bad fit for the business.


It's not just fork. The operating system overcommits memory all over the place. For example, when you map memory, that can/will succeed without actually mapping physical pages. Even "available" memory is put to some use and freed in an asynchronous way behind the scenes, a process that is not always successful.

Honestly, I think overcommit is a good thing. If you want to give a process an isolated address space, then you have to allow that process to lay out memory as it sees fit, without having to worry too much about what else happens to be on the system. If you immediately "charge" the process for this, you will end up nit-picking every process on the system, even though with overcommit you would have been fine.
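A quick way to see this from userspace (assuming Linux with the default overcommit heuristic): an anonymous mapping succeeds immediately, and physical pages are only committed as they are touched.

```python
# Sketch: the mapping below reserves 1 GiB of address space, but no
# physical pages are committed until a page is first written.
# Assumes Linux with default overcommit settings.
import mmap

SIZE = 1 << 30  # 1 GiB of address space, not 1 GiB of RAM
m = mmap.mmap(-1, SIZE, flags=mmap.MAP_PRIVATE | mmap.MAP_ANONYMOUS)
assert len(m) == SIZE   # the whole region is addressable...
m[0] = 1                # ...but only this write faults in one page
m.close()
```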


If you make structural changes to your filesystem without a journal, and you fail mid way, there is a 100% chance your filesystem is not in a known state, and a very good chance it is in a non-self-consistent state that will lead to some interesting surprises down the line.


FAT has two allocation tables, the main one and a backup. So if you shut it off while manipulating the first one you have the backup. You are expected to run a filesystem check after a power failure.


No, it is very well known what will happen: you can get lost cluster chains, which are easily cleaned up. As long as the order of writes is known, there is no problem.


Better hope you didn't have a rename in progress with the old name removed without the new name in place. Or a directory entry written pointing to a FAT chain not yet committed to the FAT.

Yes, soft updates style write ordering can help with some of the issues, but the Linux driver doesn't do that. And some of the issues are essentially unavoidable, requiring a full fsck on each unclean shutdown.


I don't know how the Linux driver updates FAT, but if it doesn't do it the way DOS did, then that's a bug that puts data at risk.

1) Allocate space in FAT#2, 2) write the data to the file, 3) allocate space in FAT#1, 4) update the directory entry (file size), 5) update the free space count.
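That ordering argument can be sketched as a toy simulation (not real driver code): crash after any prefix of those writes and check what a recovery pass would see.

```python
# Toy model of the five-step write ordering above. A "crash" after n
# completed steps leaves only benign damage: at worst a lost cluster
# chain or a stale free-space count, never a directory entry pointing
# at unallocated data.
STEPS = ["alloc FAT#2", "write data", "alloc FAT#1",
         "update dir entry", "update free count"]

def state_after_crash(completed):
    done = set(STEPS[:completed])
    if "update dir entry" in done:
        return "file visible; free count fixable by fsck"
    if "alloc FAT#1" in done:
        return "lost cluster chain; no dir entry yet, fsck reclaims it"
    return "no visible change; FAT#2/data writes are harmless"

for n in range(len(STEPS) + 1):
    print(f"crash after {n} steps: {state_after_crash(n)}")
```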

Rename in FAT is an atomic operation. Overwrite old name with new name in the directory entry, which is just 1 sector write (or 2 if it has a long file name too).


No, the VFAT driver doesn't do anything even slightly resembling that.

In general "what DOS did" doesn't cut for a modern system with page and dentry caches and multiple tasks accessing the filesystem without completely horrible performance. I would be really surprised if Windows handled all those cases right with disk caching enabled.

While rename can be atomic in some cases, it cannot be in the case of cross directory renames or when the new filename doesn't fit in the existing directory sector.


> No, the VFAT driver doesn't do anything even slightly resembling that.

Which driver? DOS? FreeDOS? Linux? Did you study any of them?

> While rename can be atomic in some cases, it cannot be in the case of cross directory renames or when the new filename doesn't fit in the existing directory sector.

That's a "move". Yes, you would need to write 2-6 sectors in that case.

For short filenames, the new name always fits in the directory cluster, because short file names are fixed 8.3-character entries, pre-allocated. A long file name can occupy up to 20 consecutive directory entries (13 UTF-16 characters each, 255-character maximum) in addition to the one 8.3 short entry, against the 16 fixed entries each 512-byte directory sector has. So an in-place rename of a maximal LFN can write up to 3 sectors (1.5 KB).

Considering that all current drives use at least 4KB physical sectors (a lot larger if you consider the erase block of an SSD), the rename operation is still atomic in 99% of cases: only one physical sector is written.

The most complicated rename operation would be if the LFN needs an extra cluster for the directory, or is shorter and one cluster is freed. In that case, there are usually 2 more 1-sector writes to the FAT tables.

Edit: I corrected some sector vs. cluster confusion.


Yes, that is really all it is.


In the context of the kernel, it’s hard to say when that’s true. It’s very easy to fix some bug that resulted in a kernel crash without considering that it could possibly be part of some complex exploit chain. Basically any bug could be considered a security bug.


plainly, crash = DoS = security issue = CVE.

QED.


BRB, raising a CVE complaining the OOM killer exists.


Memory leaks are usually (accurately) treated as DoS. The OOM killer is a mitigation to contain them and avoid DoSing the entire OS.


I could be wrong. But operation by design isn't considered a bug.


It is if some other condition is violated that is more important. Then the design might have to be reconsidered.


If it is faulty, then it's not a bug, it's a flaw.


It is possible to design a security vulnerability.


Oh, now that is an exciting area.


You either get OOMed or the next malloc fails, and that's also going to wreak havoc.


> The spontaneous explosions become so common and normalized that just about everyone knows someone who got caught up in one, a dead friend of a friend, at least

That’s an extraordinary claim.


This is a metaphor; do you think it’s an extraordinary claim to make for traffic accidents or even traffic deaths? To me it isn’t, at all.

