I think you're wrong, and you're underestimating the transformational impact of AdWords.
Free internet existed before paid internet, true, but mostly because people did things for other motives (like fun). Altavista was a tech demo for DEC. Good information was found on personal web pages, most often on .edu sites.
Banner ads existed, but they were confined to the sketchy corners of the Internet. Think today's spam selling Viagra. No one credible wanted to be associated with them.
What Google figured out was:
1) Design. Discreet ad-words didn't make them look sketchy. This discovery came about by accident, but that's a longer story.
2) Targeting. Search terms let them know which ads to show.
I can't overstate the impact of #2. Profits went up many-fold over prior ad models. This was Google's great -- and ultra-secret -- discovery. For many years, they were making $$$, while cultivating a public image of (probably) bleeding $$$ or (at best) making $. People were doing math on how much revenue Google was getting based on traditional web advertising models, while Google knew precisely what you were shopping for.
By the time people found out how much money Google's ad model was making, Google already had market lock-in.
John describes exactly what I'd like someone to build:
"To make something really different, and not get drawn into the gravity well of existing solutions, you practically need an isolated monastic order of computer engineers."
As a thought experiment:
* Pick a place where cost-of-living is $200/month
* Set up a village which is very livable. Fresh air. Healthy food. Good schools. More-or-less for the cost that someone rich can sponsor without too much sweat.
* Drop a load of computers with little to no software, and little to no internet
* Try reinventing the computing universe from scratch.
Love this idea and wondering where that low cost of living place would be. But genuinely asking:
What problem are we trying to solve that is not possible right now? Do we start from hardware at the CPU?
I remember an ex-Intel engineer once saying that you could learn all the decisions that go into modern ISA and CPU
uArch design, along with GPUs and how it all works together, but by the time you have done all that and could implement a truly better version from a clean sheet, you are already close to retiring.
And that is assuming you have the professional opportunity to learn about all of it, implement, fail, make mistakes, relearn, etc.
> Love this idea and wondering where that low cost of living place would be
Parts of Africa and India are very much like that. I would guess other places too. I'd pick a hill station in India, or maybe some place higher up in sub-Saharan Africa (above the insects).
> What problem are we trying to solve that is not possible right now?
The point is more about identifying the problem, actually. An independent tech tree will have vastly different capabilities and limitations than the existing one.
Continuing the thought experiment -- to be much more abstract now -- if we placed an independent colony of humans on Venus 150 years ago, it's likely computing would be very different. If the transistor weren't invented, we might have optical, mechanical, or fluidic computation, or perhaps some extended version of vacuum tubes. Everything would be different.
Sharing technology back-and-forth a century later would be amazing.
Even when universities were more isolated, something like 1995-era MIT computing infrastructure was largely homebrew, with fascinating social dynamics around things like Zephyr, interesting distributed file systems (AFS), etc. The X Window System came out of it too, more-or-less, which in turn allowed for various types of work with remote access unlike those we have with the cloud.
And there were tech trees built around Lisp-based computers / operating systems, Smalltalk, and systems where literally everything was modifiable.
More conservatively, even the interacting Chinese and non-Chinese tech trees are somewhat different (WeChat, Alipay, etc. versus WhatsApp, Venmo, etc.)
You can't predict the future, and having two independent futures seems like a great way to have progress.
Plus, it prevents a monoculture. Perhaps that's the problem I'm trying to solve.
> Do we start from hardware at the CPU?
For the actual thought experiment, too expensive. I'd probably offer monitors, keyboards, mice, and some kind of relatively simple, documented microcontroller to drive those. As well as things like ADCs, DACs, and similar.
Whatever expertise you need to prune a working system is less than the expertise you'll need to create a whole new one and then also prune it as it grows old.
Software is bloated in part because it's built in layers. People wrap things over, and over, and over. Stripping down layers is nigh-impossible later. Starting from scratch is easy.
Starting from scratch fails in practice because you don't get feature parity in time short enough for VC (or grant) funding cycles.
If we built a tech tree around 200MHz, 32MB machines, we'd have a tech tree which -- except for things like ML and video -- did everything existing machines do, only 10x more quickly and in 0.1% of the memory. Machines back then were fine for word processing, spreadsheets, all the web apps I use on a daily basis (not as web apps), etc.
Need would drive people to rebuild those, but with fewer layers.
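(The 0.1% figure is just 32MB measured against an assumed 32GB baseline for a typical modern machine; a trivial back-of-envelope check in Python:)

    # Back-of-envelope behind the "0.1% of the memory" claim.
    # The 32 GB baseline for a "typical modern machine" is an assumption.
    old_ram_mb = 32
    new_ram_mb = 32 * 1024   # 32 GB expressed in MB

    print(f"32 MB is {old_ram_mb / new_ram_mb:.1%} of 32 GB")   # -> 0.1%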
Continuing the thought experiment: There's an interesting sort-of contradiction in this desire: I, being dissatisfied with some aspect of the existing software solutions on the market, want to create an isolated monastic order of software engineers to ignore all existing solutions and build something that solves my problems; presumably, without any contact from me.
It's a contradiction very much at the core of the idea: should I expect the operating system my monastic order produces to be able to play Overwatch or open .docx files? I suspect not; but why? Because they didn't collaborate with stakeholders. So, they might need to collaborate with stakeholders; yet that was the very thing we were trying to avoid by making this an isolated monastic order.
Sometimes you gotta take the good with the bad. Or, uh, maybe Microsoft should just stop using React for the Start menu, that might be a good start.
>maybe Microsoft should just stop using React for the Start menu, that might be a good start.
Agreed, but again, worth pointing out the obvious. I don't think anyone is actually against React per se, as long as M$ could ensure React renders all their screens at 120fps with no jank, 1-2% CPU usage, minimal GPU resources, and little memory usage. All that at least 99.99% of the time. Right now it isn't obvious to me that this is possible without significant investment.
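For scale, here's a rough sketch of what "120fps with no jank" means as a per-frame time budget; the stage breakdown below is purely illustrative, not a measured profile of the Start menu:

    # What a 120 fps, no-jank target implies as a per-frame budget.
    # The stage split is an invented illustration, not a real profile.
    frame_budget_ms = 1000 / 120   # ~8.3 ms per frame, every frame

    illustrative_split_ms = {
        "input handling + layout": 2.0,
        "render/diff (React-style reconciliation)": 2.0,
        "paint + composite": 3.0,
        "headroom for GC and OS jitter": 1.3,
    }

    print(f"frame budget: {frame_budget_ms:.1f} ms")
    for stage, ms in illustrative_split_ms.items():
        print(f"  {stage}: {ms} ms")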
Not saying these are perfect, but consider reviewing the work of groups like the Internet Society or even IEEE sectors. They put boots on the ground to some extent, such as by providing gear and training. Other efforts like One Laptop Per Child also leaned into this kind of thinking.
What could it mean for a "tech" town to be born, especially with the techniques and tools we have today? While the dream has not really borne out yet (especially at a village level), I would argue we could do even better in middle America with this thinking: small college towns. While it's a bit of an existing gravity well, you could make a focused effort to get a flywheel going (redoing mini Bell Labs around the USA to solve regional problems could be a start).
Yes, it takes decades. My only thought on that is that many (dare I say most) people don't even have short-term plans, much less long-term plans. It takes visionaries with nerves and a will of steel to stay on such paths and make things happen.
Pick a university, and give them $1B to never use Windows, macOS, Android, Linux, or anything other than homebrew?
To kick-start, give them machines with Plan 9, ITS, or an OS based on LISP / Smalltalk / similar? Or just microcontrollers? Or replicate 1970-era university computing infrastructure (where everything was homebrew)?
Build out coursework to bootstrap from there? Perhaps scholarships for kids from the developing world?
They will just face the same problems we solved decades ago and reinvent mostly the same solutions we also had decades ago.
In a few decades they will reach our current level, but the rest of the world won't have been idling during those decades, and we no longer need the old problems solved.
> The spend prices most of the developing world out -- a programmer earning $10k per year can't pay for a $200/month Claude Max subscription.
No, but a programmer earning $10k per year can probably afford a $200 used ThinkPad, install Linux on it, build code that helps someone, rent a cheap server from a good cloud provider, advertise their new SaaS on HN, and have it start pulling in enough revenue to pay for a $200 Claude Max subscription.
> It's the mainframe era all over again, where access to computing is gated by $$$.
It's still the internet era, where access to $$$ is gated by computing skill :)
I always consider different options when planning for the future, but I'll give the argument for exponential:
Progress has been exponential in the aggregate. We made approximately the same progress in the past 100 years as the prior 1000 as the prior 30,000, as the prior million, and so on, all the way back to multicellular life evolving over 2 billion years or so.
There's a question of the exponent, though. Living through that exponential growth circa 50AD felt at best linear, if not flat.
Consider theoretical physics, which hasn't significantly advanced since the advent of general relativity and quantum theory.
Or neurology, where we continue to have only the most basic understanding of how the human mind actually works (let alone the origin of consciousness).
Heck, let's look at good ol' Moore's Law, which started off exponential but has slowed down dramatically.
It's said that an S curve always starts out looking exponential, and I'd argue in all of those cases we're seeing exactly that. There's no reason to assume technological progress in general, whether via human or artificial intelligence, is necessarily any different.
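A minimal numeric sketch of that last point, with arbitrary parameters: in its early phase a logistic (S) curve is nearly indistinguishable from a pure exponential, and only later does it flatten toward its ceiling.

    import math

    # Logistic (S-curve) vs. pure exponential with the same early-phase growth rate.
    # L (ceiling), k (growth rate), and t0 (midpoint) are arbitrary illustrative values.
    L, k, t0 = 1000.0, 1.0, 10.0

    def logistic(t):
        return L / (1 + math.exp(-k * (t - t0)))

    def exponential(t):
        # The early-time limit of the logistic: L * exp(-k*t0) * exp(k*t)
        return L * math.exp(-k * t0) * math.exp(k * t)

    for t in range(0, 16, 3):
        print(f"t={t:2d}  logistic={logistic(t):8.2f}  exponential={exponential(t):10.2f}")
    # Early on (t << t0) the two track closely; past the midpoint the
    # logistic flattens toward L while the exponential keeps climbing.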
> We made approximately the same progress in the past 100 years as the prior 1000 as the prior 30,000
I hear this sort of argument all the time, but what is it even based on? There’s no clear definition of scientific and technological progress, much less something that’s measurable clearly enough to make claims like this.
As I understand it, the idea is simply “Ooo, look, it took ten thousand years to go from fire to wheel, but only a couple hundred to go from printing press to airplane!!!”, and I guess that’s true (at least if you have a very juvenile, Sid Meier’s Civilization-like understanding of what history even is) but it’s also nonsense to try and extrapolate actual numbers from it.
Plotting the highest observable assembly index over time will yield an exponential curve starting from the beginning of the universe. This is the closest I’m aware of to a mathematical model quantifying the distinct impression that local complexity has been increasing exponentially.
> Another possibility that has long been on my personal list of “future articles to write” is that the future of computing may look more like used cars. If there is little meaningful difference between a chip manufactured in 2035 and a chip from 2065, then buying a still-functional 30-year-old computer may be a much better deal than it is today. If there is less of a need to buy a new computer every few years, then investing a larger amount upfront may make sense – buying a $10,000 computer rather than a $1,000 computer, and just keeping it for much longer or reselling it later for an upgraded model.
This seems improbable.
50-year-old technology works because 50 years ago, transistors were micron-scale.
Nanometer-scale nodes wear out much more quickly. Modern GPUs have a rated lifespan in the 3-7 year range, depending on usage.
One of my concerns is we're reaching a point where the loss of a fab due to a crisis -- war, natural disaster, etc. -- may cause systemic collapse. You can plot lifespan of chips versus time to bring a new fab online. Those lines are just around the crossing point; modern electronics would start to fail before we could produce more.
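A toy version of that lifespan-vs-fab-lead-time comparison; every number here is an assumption I'm making for illustration, not a sourced figure:

    # Toy comparison of chip lifespan vs. the time to bring a replacement fab
    # online. Every number below is an assumption for illustration, not a
    # sourced figure.
    assumed_lifespan_years = {
        "micron-scale logic (1980s-90s, assumed)": 25,
        "leading-edge logic (assumed)": 5,
    }
    assumed_fab_lead_time_years = 4  # site, construction, tooling, qualification (assumed)

    for generation, lifespan in assumed_lifespan_years.items():
        margin = lifespan - assumed_fab_lead_time_years
        verdict = "comfortable buffer" if margin > 5 else "little to no buffer"
        print(f"{generation}: lifespan {lifespan}y vs. fab lead time "
              f"{assumed_fab_lead_time_years}y -> {verdict}")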
> Modern GPUs have a rated lifespan in the 3-7 year range, depending on usage.
That statement absolutely needs a source. Is "usage" 100% load 24/7? What is the failure rate after 7 years? Are the failures unrepairable, i.e. not just a broken fan?
I’ve never heard of this and I was an Ethereum miner. We pushed the cards as hard as they would go and they seemed fine after. As long as the fan was still going they were good.
So Intel used to claim a 100,000+ hour lifetime on their chips. They didn't actually test them to that, because that is 11.4 years. But it was basically saying: these things will last at full speed way beyond any reasonable lifetime. Many chips could probably go way beyond that.
I think it was about 15 years back that they stopped saying that. Once we passed the 28nm mark, it started to become apparent that they couldn't really state that anymore.
It makes sense: as parts get smaller, they get more fragile from general usage.
With your GPUs, yeah, they are probably still fine, but they could already be halfway through their lifetime; you wouldn't know it until the failure point. Add in the silicon lottery and it gets more complicated.
One thing to realize is that the lifetime is a statistical thing.
I design chips in modern tech nodes (currently using 2nm). What we get from the fab is a statistical model of device failure modes. Aging is one of them. When transistors gradually age, they get slower due to increased threshold voltage. This eventually causes failure at a point where timing is tight. When it will happen varies greatly due to initial conditions and the exact conditions the chip has been in (temp, vdd, number of on-off cycles, even the workload). After an aging failure, the chip will still work if the clock frequency is reduced. There are sometimes aging monitors on-chip which try to catch it early and scale down the clock.
There are catastrophic failures too, like gate insulator breakdown, electromigration or mechanical failures of IO interconnect. The last one is orders of magnitude more likely than anything else these days.
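A very rough software sketch of the aging-monitor idea described above (threshold-voltage drift slows the critical path, so the clock gets backed off when timing margin shrinks). The delay model, exponent, and all the numbers are invented for illustration; real monitors are on-die circuits, not Python.

    # Toy model of the on-chip aging-monitor idea: threshold-voltage drift slows
    # the critical path; when timing margin gets thin, back off the clock.
    # The delay model, exponent, and all numbers are invented for illustration.

    nominal_delay_ns = 0.80   # critical-path delay when the chip is new (~1 GHz part)
    guard_band = 0.10         # required slack as a fraction of the period

    def aged_delay_ns(years, duty_cycle):
        # Assumed power-law degradation, scaled by how hard the chip is driven.
        return nominal_delay_ns * (1 + 0.08 * duty_cycle * years ** 0.2)

    def max_safe_clock_ghz(delay_ns):
        # Slow the clock so the aged path still meets timing plus guard band.
        return 1.0 / (delay_ns * (1 + guard_band))

    for years in (0, 3, 7, 15):
        d = aged_delay_ns(years, duty_cycle=1.0)  # worst case: driven flat-out
        print(f"year {years:2d}: path delay {d:.3f} ns -> "
              f"max safe clock {max_safe_clock_ghz(d):.2f} GHz")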
For mining, if a GPU was failing in such a way that it was giving completely wrong output for functions during mining, that'd only be visible as a lower effective hash rate, which you might not even notice unless you did periodic testing of known-target hashes.
For graphics, the same defect could be severe enough to completely render the GPU useless.
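A minimal sketch of the kind of periodic known-answer check that would catch that silent corruption; gpu_hash() here is a stand-in I made up, simulated on the CPU with SHA-256, not a real mining kernel.

    import hashlib

    # Periodic known-answer test: hash inputs whose correct digests are known
    # and compare. gpu_hash() is a made-up placeholder for whatever kernel the
    # miner actually runs; here it is simulated on the CPU with SHA-256.

    KNOWN_ANSWERS = {
        b"block-header-sample-1": hashlib.sha256(b"block-header-sample-1").hexdigest(),
        b"block-header-sample-2": hashlib.sha256(b"block-header-sample-2").hexdigest(),
    }

    def gpu_hash(data: bytes) -> str:
        # Placeholder: a silently faulty card could return a wrong digest here
        # without any crash or driver error.
        return hashlib.sha256(data).hexdigest()

    def self_test() -> bool:
        return all(gpu_hash(data) == digest for data, digest in KNOWN_ANSWERS.items())

    print("known-answer test", "passed" if self_test() else "FAILED: silent bad hashes")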
Yeah, chip aging is rated at max temperature, max current, and the worst process corner. And it's nonlinear, so running at <10% duty cycle could reduce aging to almost nothing.
Every now and then, I get a heartfelt chuckle from HN.
By 'Modern' they must mean latest generation, so we'll have to wait and see. I was imagining an RTX 5090 going unused for 7 years and then finding it doesn't work, or one used 24x7 for 3 years and then failing.
> Nanometer-scale nodes wear out much more quickly. Modern GPUs have a rated lifespan in the 3-7 year range, depending on usage.
I recently bought a new MacBook, my previous one having lasted me for over 10 years. The big thing that pushed me to finally upgrade wasn’t hardware (which as far as I could tell had no major issues), it was the fact that it couldn’t run latest macOS, and software support for the old version it could run was increasingly going away.
The battery and keyboard had been replaced, but (AFAIK) the logic board was still the original
> it couldn’t run latest macOS, and software support for the old version it could run was increasingly going away.
Which is very annoying, as none of the newer OS versions has anything that warrants dumping working hardware and buying brand new machines to run them! With the exception of security upgrades, which I find dubious for a company to stop providing (they would need to create the same patches for their newer OS versions anyway, so the cost of maintaining security patches ought not to be much, if anything), it is more likely a dark pattern to force hardware upgrades.
That's not just a dark pattern, it's the logical conclusion to Apple's entire business model. It's what you get for relying on the proprietary OS supplied by a hardware manufacturer. It's why Asahi Linux is so important.
"regularly" is doing a lot of work here. When Linux drops hardware support, we are talking about ancient hardware. An example of a regular drop: Linux 6.15 just a month ago dropped support for 486 (from 1989)!
That's surprising. What is the 486 missing that Linux needs? Or is it that there are no volunteers to test and maintain Linux on a 486 (as often happens with older architectures)?
Open source software drops hardware support only when there are nobody left who volunteers to support that hardware. When does this happen? It happens when there are not enough users left of that hardware.
As long as there are enough users of some hardware, free software will support it, because the users of that hardware want it to.
Depending on how much has changed in the interval, backporting security fixes can be completely trivial, very difficult, or anywhere in between. There may not even be a fix to backport, as not all vulnerabilities are still present in the latest release.
You mean besides the fact that they completely transitioned to new processors, and some of the new features use hardware that is only available on their ARM chips?
Also, he said that third-party software doesn't support the older OS either, so even if Apple did provide security updates, he would still be in the same place.
I've got 3 Macbooks from 2008, 2012, and 2013. Apple dropped MacOS support years ago. They all run the latest Fedora Linux version with no problems.
The screen on the MacBookPro10,2 is 2560x1600, which is still higher resolution than a lot of brand new laptops. The latest version it will run is 10.15, from 2019. I know Apple switched to ARM, but most people don't need a new, faster computer. I stopped buying Apple computers because I want my computer supported for more than 6 years.
I do have 3 newer computers, but these old Macbooks are kept at various relatives' houses for when I visit and want my own Linux machine. They have no problems running a web browser and watching videos, so why replace them?
> Modern GPUs have a rated lifespan in the 3-7 year range, depending on usage.
I seriously doubt this is true. The venerable GTX 1060 came out 9 years ago, and still sees fairly widespread use based on the Steam hardware survey. According to you, many (most?) of those cards should have given out years ago.
This is just untrue, and you’ve provided no citation, either.
The silicon gates in GPUs just don’t wear out like that, not at that timescale. The only thing that sort of does is SSDs (and that’s a write limit, which has existed for decades, not a new thing).
Electromigration tends to get worse with small sizes but also higher voltage and temperatures. I could see a GPU wearing out that quickly if it were overclocked enough, but stock consumer GPUs will last much longer than that.
Since electromigration is basically a matter of long, high-current interconnect, I guess I have been assuming it's merely designed around -- by, for instance, having hundreds of power and ground pins, implying quite a robust on-chip distribution mesh, rather than a few high-current runs.
Wouldn't it depend on workloads? My GPU that kicks into high gear for maybe 2-3 hours a week will probably see decades of use before chip degradation kicks in. The power capacitors will give out long before the silicon does.
But if someone is running an LLM 24 hours a day, it might not go for as long.
We are flying blind here, both those claiming a short lifespan and those claiming otherwise.
> One of my concerns is we're reaching a point where the loss of a fab due to a crisis -- war, natural disaster, etc. -- may cause systemic collapse.
This is absolutely ridiculous. Even if Taiwan sank today, we really don't need those fabs for anything critical. I strongly suspect we could operate the entire supply chain actually necessary for human life with just Z80s or some equivalent.
Missing data: I don't make a codex PR if it's nonsense.
Poor data: If I make one, I either:
a) Merge it (success)
b) Modify it (sometimes success, sometimes not). In one case, Codex made the wrong changes in all the right places, but it was still easier to work from that by hand.
It isn't so much "poor" data as it is a fairly high bar for value generation. If it gets merged it is a fairly clear indicator that some value is created. If it doesn't get merged then it may be adding some value or it may not.
There may actually be no way to ever know. A baked-in bias could be well hidden at many levels. There is no auditing of any statements or products from any vendor. It may not be possible.
When I bought my car, TCO for the manual was higher than for the automatic. The manual's base purchase price was about $500 lower, but it required a pretty frequent maintenance schedule. The automatic was nearly maintenance-free (although the little maintenance it did require had higher unit costs).
I ran the numbers. Automatic won for cost.
For a cheap car, manual makes little sense for a rational consumer.
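The comparison was roughly of this shape; every price and interval below is a made-up placeholder rather than my actual numbers:

    # Toy total-cost-of-ownership comparison, manual vs. automatic gearbox.
    # Every price and interval is a made-up placeholder, not a real quote.
    YEARS_OWNED = 10
    KM_PER_YEAR = 15_000

    def tco(purchase_price, service_cost, service_interval_km):
        services = (YEARS_OWNED * KM_PER_YEAR) // service_interval_km
        return purchase_price + services * service_cost

    manual = tco(purchase_price=19_500, service_cost=250, service_interval_km=20_000)
    automatic = tco(purchase_price=20_000, service_cost=400, service_interval_km=100_000)

    print(f"manual:    {manual}")     # cheaper upfront, frequent gearbox service
    print(f"automatic: {automatic}")  # pricier upfront, nearly maintenance-free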
Manufacturers are notorious for understating maintenance items for automatic transmissions, like "lifetime" fluids and skipping filter changes.
A typical manual maintenance schedule will keep the gearbox running for a very long time. The typical automatic maintenance schedule will keep it alive for its "lifetime", but that lifetime ends up being a few hundred thousand miles, and instead of more maintenance at the end of that interval, you end up with a dead transmission.
Expensive cars usually have more power than cheap cars. You can enter a highway, safely overtake someone or climb a steep incline in a larger selection of gears. Also, fancy cars tend to have better automatic gearboxes that behave well in more situations, compounding the advantage.
Cheap fossil cars with shitty automatics can be quite stressful to drive. With a manual clutch and transmission you are in control, know how the car will behave and can relax. It might still be slow, but you know exactly how slow in every situation.
>Cheap fossil cars with shitty automatics can be quite stressful to drive
This is a self-inflicted problem. They're programmed for fuel economy (we're talking a small fraction of an MPG here) at the expense of drivability. They might even get worse fuel economy in practice because drivers learn you gotta floor them to tell the computer "no, I'm serious, give me the ponies".
> This is a self-inflicted problem. They're programmed for fuel economy (we're talking a small fraction of an MPG here) at the expense of drivability.
I find it hard to believe that the Smart car I rented once shifted terribly for fuel economy reasons. It just sucked. I’ve never been so worried that I’d get rear ended leaving a stop sign (during the unbelievably slow shift from first to second), and putting the pedal to the metal didn’t make any difference.
Maybe it sucked too but fuel economy absolutely is a large part of why modern cars all drive like mush.
If you ever have the opportunity to drive a Nissan from the "hurr durr Nissan CVT bad" era, like 2008-12ish, it'll feel like a sports car by comparison to just about any modern crossover. "Oh, you want revs? Let me give you revs."