To me, built and written are not the same. Built: OK, maybe that's an exaggeration. But could an early "this is pretty good at code" LLM have written DigitalOcean v1? I think it could, yes (no offense, Jeff). In terms of volume of code and size of architecture, yeah, it was big and complex, but it was literally a bunch of relatively simple cron, bash, and Perl, and the whole thing was very... sloppy (because we were moving very quickly). DigitalOcean, as I last knew it (a very long time ago), had transformed into a very well-written modern Go shop. (Source: I was part of the "founding team" or whatever.)
To reinforce that point: we've got the world's most prominent AI-promoting company (MSFT), which has finally realized that Windows Explorer is too slow to start.
And this company, with all the formidable powers of AI behind it, can find no way to optimize that other than preloading the app in memory. And that's for an app that's basically a GUI for `ls`.
I think this reflects one of the biggest fallacies behind LLM adoption: the idea that reducing costs for producers improves the state of affairs for consumers too. I've seen someone compare it to the steam engine.
With the steam engine, though, consumers made a trade-off: you pay less and get (in most cases, I presume) a worse product. With LLMs and other machine learning technologies, maybe there's a trade-off if you're paying for the software (assuming the software actually gets cheaper), but otherwise it doesn't exist. It costs you the same amount of money to read an LLM-generated article as to read a real one; your internet bill doesn't go down. Likewise for gratis software. It's just worse, with no benefit.
Hacker News is full of producers, in this sense, who often benefit from cutting corners, and LLMs allow them to cut corners, so obviously there are plenty of evangelists here. I saw someone else in this comment section mention that gamers who are not in the tech industry don't like "AI". That's to be expected; they're not the producers, so they're not the ones who benefit.
Yeah, it seems pretty obvious that we're in the mainframe era of transformer models and we'll soon transition to the personal computer era, where these all run on your device, which Apple stands to benefit from the most. Their FoundationModels framework is actually pretty good at certain tasks.
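For context on that last point, the on-device FoundationModels API is a very small surface to call. Here's a minimal sketch, assuming the LanguageModelSession/respond calls from Apple's published docs; the availability annotations, error type, and prompt are illustrative, not a definitive implementation:

```swift
import FoundationModels

struct ModelUnavailable: Error {}

// Minimal sketch: ask Apple's on-device model for a one-sentence summary.
// Assumes the LanguageModelSession API as documented by Apple; OS version
// requirements below are illustrative.
@available(iOS 26.0, macOS 26.0, *)
func oneSentenceSummary(of text: String) async throws -> String {
    // Make sure the system model is actually available on this device.
    guard case .available = SystemLanguageModel.default.availability else {
        throw ModelUnavailable()
    }
    let session = LanguageModelSession()
    let response = try await session.respond(to: "Summarize in one sentence: \(text)")
    return response.content
}
```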
I don't think that's obvious. For the vast majority of applications, the marginal return on additional units of compute falls off pretty quickly, so the benefits of decentralization outweigh the cost of having less compute on the device. It isn't clear the same is true of intelligence.
In a society that's built on the foundation of perpetual profit growth, it is. Sometimes you just can't innovate, so instead of improving the product you cut costs and enshittify. We're in an enshittification regime right now.
Why are there alternating cycles of innovation and enshittification? I think it's because investors are always trying to pull profit forward, but because they only have about a 10-year horizon on investment strategy, they tend to create cycles of roughly that same period. If there were less investment, innovation would be slower, but the reactionary enshittification would be lessened too.
So if someone is squatting on purpleunicorn.com and I register a trademark for purpleunicorn, can I force them to give me the domain? Or does the trademark have to exist prior to the domain purchase?
> That seems entirely inferable from that wikipedia link.
I did read the link, and it wasn't clear to me, which is why I asked.
> How would you have a bad faith intent to profit from a trademark that didn't yet exist?
Isn’t that what most squatters are doing when they purchase something like “zero.ai” or “spectra.ai”? Even if the trademark doesn’t exist yet, they have no intent to use the domain for a business and are assuming someone will want to create a business or trademark with that name one day.