Indeed, considering that much of the cost in the end consists of carrying costs, litigation, and year-of-expenditure overruns caused by the delay.
Sure, if it's truly planned. I think the tricky part tends to be that it's hard to distinguish between "planned obsolescence" and "incidental obsolescence".
Reddit alone contains about the same quantity of text (~10 billion posts * 10 words per post, vs 1 million books * 100k words per book). Messaging and document platforms (google docs, slack, discord, telegram, etc.) probably each have 1-3 orders of magnitude more than reddit. To your/GP's point though, those private platforms probably haven't been slurped up by LLMs yet.
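The arithmetic behind that comparison, as a quick sanity check using the comment's own rough figures (all counts are the assumed estimates above, not measured data):

```python
# Back-of-envelope check: Reddit text volume vs. a corpus of books.
# All figures are the rough estimates from the comment above.
reddit_posts = 10_000_000_000      # ~10 billion posts (assumed)
words_per_post = 10                # rough average (assumed)
books = 1_000_000                  # ~1 million books (assumed)
words_per_book = 100_000           # rough average (assumed)

reddit_words = reddit_posts * words_per_post   # 100 billion words
book_words = books * words_per_book            # 100 billion words

print(reddit_words, book_words)  # 100000000000 100000000000
```

Both come out to roughly 10^11 words, which is why the comment calls the quantities "about the same"; the 1-3 orders of magnitude for messaging platforms would put them at 10^12 to 10^14 words under the same style of estimate.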
> The idea that new code is better than old is patently absurd. Old code has been used. It has been tested. Lots of bugs have been found, and they’ve been fixed.
This quote is completely and totally irrelevant. Nobody is saying they should code a new Outlook. If they did code something, it would be significantly smaller in scope and rigorously tested like spacebound programs in the past were. "New space-engineering-grade code created with actual engineering practices" is absolutely going to be more reliable than "old bloated commercial shitware". But I guess software engineering is a lost art, so it can't be helped.
It's also going to take a hell of a lot longer and cost more than buying an Outlook license. If I were lead on that project, you'd have an uphill battle convincing me to spend $100k+ on a custom email solution unless you could point to specific, serious deficiencies in the existing off-the-shelf options.
Software Engineering is far from a lost art: part of the practice is intelligently making cost-benefit decisions.
The current solution is literally causing problems in space. Space-grade engineering is expensive, but having things go wrong on your already very expensive mission is even more expensive.
Sure, but people who didn't know better until this particular incident do not deserve the title "engineer". Being able to classify and manage risks before they happen is engineering 101.
Engineering requires working around constraints as well - and a major constraint of any project I've worked on was budget. If they wrote a new email client and it had some bug, we'd be laughing about why they didn't use one of the COTS email clients.
I feel like it's a little disingenuous to compare against full-precision models. Anyone concerned about model size and memory usage is surely already using at least an 8 bit quantization.
Their main contribution seems to be hyperparameter tuning, and they don't compare against other quantization techniques of any sort.
I'm not convinced. I wouldn't be surprised if GPT-2 to ChatGPT is the biggest single jump in "machine intelligence" we will ever see. I'd bet all gains in the future will be more incremental, at least until machines surpass humans by a large enough margin that it's difficult to qualify—let alone quantify—how big any given jump is.
Without a big jump, we're just going to boil the frog (ourselves).