"Intermediate users could get around in the hierarchy, but often just barely, and usually saved all of their documents in the default directory for the program they were using."
What a good faith reply. If you sincerely believe this, that's a good insight into how dumb the masses are. Although I would expect a higher quality of reply on HN.
You found the most expensive 8-pack of water on Walmart. Anyone can put a listing on Walmart; it's the same marketplace model as Amazon. There's also a listing right below it for bottles twice the size, and a 32-pack for a dollar less.
It costs $0.001 per gallon out of your tap, and you know this.
I'm in South Australia, the driest state on the driest continent. We have a backup desalination plant, and water security is a regular item on the political agenda; water here is probably as expensive as anywhere in the world.
"The 2025-26 water use price for commercial customers is now $3.365/kL (or $0.003365 per litre)"
My household water comes from a 500 ft well on my property. It requires a $5,000 submersible pump that gets replaced every 10-15 years, with a rig and service that cost another $10k. Call it $1,000/year. It also requires a giant water softener, in my case a commercial one that amortizes out to another $1,000/year, plus $70/month for salt (admittedly I have exceptionally hard water).
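The amortization above works out roughly as follows (a sketch using the midpoint of the stated 10-15 year replacement cycle, which lands a bit above the author's rounded $1,000/year figure):

```python
# Rough amortization of the well costs described above.
pump = 5000            # submersible pump
service = 10000        # rig and service for replacement
cycle_years = 12.5     # assumed midpoint of the stated 10-15 years
well_per_year = (pump + service) / cycle_years   # $1,200/year

softener_per_year = 1000   # commercial softener, amortized as stated
salt_per_year = 70 * 12    # $70/month for salt

total = well_per_year + softener_per_year + salt_per_year
print(round(total))        # roughly $3,040/year all-in
```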
And of course, neither I nor your municipality (usually) pays any royalties to the "owners" of the water we extract.
Water is, rightly, expensive, and not even expensive enough.
You have a great source of water which, unfortunately for you, costs you more than average; but because everyone else also has water, that precious resource of yours isn't really worth anything if you tried to sell it. It makes sense why you'd want it to be more expensive, and that dangerous attitude can also be extrapolated to AI compute access. I think there are going to be a lot of people who won't want everyone to have plentiful access to the highest-quality LLMs for next to nothing, for this reason.
If everyone has easy access to the same powerful LLMs, that just drives down the value you can contribute to the economy to next to nothing. For this reason I don't even think powerful and efficient open source models (usually the next counterargument people make) are necessarily a good thing. It strips people of the opportunity for social mobility through meritocratic systems. Just like how your water well isn't going to make you rich or let you climb a social ladder, because everyone already has water.
I think the technology of LLMs/AI is probably a bad thing for society in general. Even in a full post-scarcity AGI world where machines do everything for us, I don't know that the outcome is good, outside of maybe some beneficial medical advances; and can't we get those advances without making everyone's existence obsolete?
I agree water should probably be priced more in general, and it's certainly more expensive in some places than others, but neither of your examples is particularly representative of the sourcing relevant for data centers (scale and potability being different, for starters).
Just for completeness, it's about $0.023/gal in Pittsburgh (1): still perfectly affordable, but 23x more than the $0.001 figure, and still 50x less than Brent crude.
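Those ratios check out against the thread's own numbers (a quick sketch; the implied Brent price uses the standard 42 gallons per barrel):

```python
water = 0.023            # Pittsburgh water, $/gallon
tap_estimate = 0.001     # figure quoted upthread, $/gallon
print(round(water / tap_estimate))   # 23x

# What Brent price does "50x water" imply? (42 gal per barrel)
implied_brent = water * 50 * 42
print(round(implied_brent, 1))       # roughly $48/barrel
```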
Postel’s Law would put the onus on Google to be forgiving in what it receives. Unsure how you could safely use a sender-created Message-Id for anything anyway.
Following Postel's law results in the normalisation and proliferation of defective implementations. The actual standard becomes irrelevant, and new implementations have to be coded against the defective ones.
My opinion is that Postel's law should be approached in the same way that Linus Torvalds did CVS when designing Git. If in doubt about an implementation decision, consider what Postel's law would recommend, and then do the exact opposite.
Yep. And even in a world of perfect good faith, "forgiving in what you receive" has both costs and scaling problems: from researching what "spec" you'll actually need to design to, to customer service when the added complexity and permissiveness cause interesting stuff to happen.
You sound like the confident techie character in a Michael Crichton novel pronouncing "We've thought of everything; there's no way for the demon to escape" shortly before the demon escapes.
The bigger question is why Time Machine continues to use a network file system for backups. It's so fragile you can't rely on it. It's gotten better in recent years, possibly due to APFS, but that just means somewhat longer intervals between disasters (wiping out and reinitializing, losing all your backups). A Time Machine that used a custom protocol to save and restore blocks would fail sometimes too, but it wouldn't ruin all your existing backups.
edit: I use Arq for daily backups, but T.M. for hourly. When T.M. eventually craters its storage, I have robust dailies in the cloud, so no worries.
Mounting a file system on a network share tightly couples the client to the server. It's synchronous, and it's easy to leave the file system in an inconsistent state. It's much more robust to build an asynchronous protocol with your own logic, as RDBMSs do. You don't see an RDBMS mounting remote file systems, do you?
Outsider perspective here (never used Time Machine), but my first thought is that rsync works amazingly well both locally and over the network. I can't imagine why being over the network would be a problem: if it can resume a partial transfer and compare checksums to ensure a match, what's the issue?
When I first used my own telescope to view Saturn I had a thought, "they can't fake that!" Photos in a magazine or on television could have been faked. And the moon landing truthers became unhinged about it. I had not appreciated that I subtly and unconsciously held a reservation about the truth of it all.
This is the modern epistemic crisis. And wait till Elon implants a brain-computer interface in you. You won't even fully trust your own eye looking through a telescope.
You should see my Documents folder.