>> The most effective response is rarely to argue the general case. Instead, acknowledge the concern, offer a brief reframe, and propose one concrete demonstration on the person’s own code. Most concerns are resolved by a single successful experience.
First of all, Google never had to write this kind of stuff about Kubernetes, suggesting psychological tricks and magic demonstrations to cajole people into agreeing with you. The Kubernetes folks were happy to discuss the general case - I don't like k8s and don't think they had a bulletproof argument, but they offered a pretty good one. What Anthropic is doing here is very, very weird. I said "Scientology" earlier and I was not kidding.
Part of the reason LLMs have led me to tear out so much of my own hair is how many people seem to have made it through four years of STEM college without developing any scientific thinking ability whatsoever. A truly stunning number of people have been wowed by "a single successful experience." Actually that section is full of horrible logic:
>> Concern: "I am faster without it."
>> Suggested response: That is likely true for code the person writes routinely. Suggest trying it on the work they tend to avoid: legacy files, unfamiliar services, or test scaffolding, where the leverage is highest.
>> Evidence to offer: Time one tedious task both ways and compare.
This isn't just unscientific and manipulative: it's really goddamn annoying! If someone times me at 1.5 hours reading about and learning an unfamiliar service, and smugly says Claude learned it in 12 seconds of "thinking," either my laptop or a certain Claude Champion is getting thrown out the window.
This Scientology-ass blog aligns startlingly well with my hypothesis that certain tech workers (including CEOs like Dario Amodei and Satya Nadella) are excessively enamored with LLMs because of a fundamental spiritual emptiness and ignorance.
Imagine calling yourself a "Champion" and dispensing nuggets of wisdom like this:
>> When a colleague asks how you accomplished something, the most useful response is the prompt you actually used. They will learn more from running that prompt against their own problem than from any description you could write, and it gives them something they can act on immediately.
Colleague: How did you get it to find that race condition?
Champion: I asked, "The test in @tests/scheduler.test.ts is flaky, figure out why," and it traced two unjoined promises in the scheduler. Try the same phrasing on your test.
People quickly became too embarrassed to call themselves "prompt engineers." I don't think anyone is champing at the bit to be the office Claude Champion.
I am a musician and deeply morally opposed to any form of generative AI that has unauthorized training data. That means I am morally opposed to any useful system. This seems to be the majority opinion among creatives: these are plagiarism machines built on stolen data. Using them for anything is always unacceptable.
>> Notepad++ for macOS is maintained by Andrey Letov, who wrote the Objective-C++ Cocoa UI that replaces Notepad++'s Win32 front-end. The app is available to download from the Notepad++ website.
That is not the Notepad++ website! It's some other website. I understand that this is a fairly legitimate and professional port. But this framing is unacceptable. It's especially grating considering "Notepad++" is trademarked in France: https://data.inpi.fr/marques/FR5133202 [1]. The software is GPL but that doesn't mean you can slap the trademark on any derived codebase - legally problematic in France, but it's disrespectful worldwide. The Mac port really should have been released under a similar but clearly distinct name, and MacRumors should have been way more responsible about framing the story.
FWIW I also think an underappreciated advantage is Windows Server (last I checked that was still rock-solid) and Active Directory. Lots of CIOs / CTOs would correctly veto a move off of these, absent a specific technical problem. This is really more of a "hard knocks" lesson than anything fundamental to operating system design or implementation, but: the two Linux shops I worked at got at least a little sloppy about the sudoers list, or got frustrated and gave too much access to a "shared" folder, etc etc, largely because the admins got fed up with all the Mother May-I-ing. It just seemed to inevitably turn into a mess; sometimes that mess is fun and even productive, sometimes it's actually unacceptable.
Even the research hospital I worked at had a proper SELinux setup on the Red Hat installations, but by quantity most servers were CentOS, and it was way more of a free-for-all than it should have been; e.g. I was the fed-up admin when I was really not qualified! I screwed up a lot. Not that big of a deal: this was research-related computing and deidentified data. All the clinical computing was Windows Server. That is not a coincidence; it is really a market difference.
As someone who hates Windows 11... I do like the core Windows kernel, and would much rather do IT on Windows machines than Linux machines. Windows NT is very fussy and a bit bloated, but a huge part of that is an admirable commitment to backwards compatibility; a lot of XP applications run fine on Windows 11, apart from some DPI wonkiness. And Windows' drivers advantage isn't just commercial support; the kernel is fundamentally leaner and faster than Linux at real-time IO, and better about cleanly isolating driver processes across privilege levels. Very broadly, compared to Linux I find administering Windows easier to navigate and harder to screw up, especially when handling user permissions. Surely part of this is what I grew up with, but there's also a values difference: a lot of Linux users like how low-friction it can be since the OS doesn't get in your way. I kind of like that Windows makes you turn an excessive number of disarming keys... even when I am frustrated by it.
It does make me quite sad that the only real general-use OS options are the apex of a 20th-century operating system family, Apple's version of that, and a truly 21st-century monolith-microkernel hybrid whose specific design is a mystery to public science.
They're referring to the Windows kernel (see the preceding paragraph); the three general-purpose OS families are Linux, macOS, and Windows.
Personally I think macOS doesn't get enough credit here; Apple's Mach/XNU has been microkernel-flavored since the NeXT days, and many subsystems run in userspace, as on Windows.
Last year's CrowdStrike outage never hit any of the macOS computers with CS installed, because on macOS the CrowdStrike agent runs entirely in userspace thanks to the Endpoint Security framework.
Really, macOS security is probably the best of all the desktop OSes, as annoying as it can be.
"The" future of software engineering is a silly thing to predict. I might predict one substantial change is that we get our house a little more in order about universities and the private sector distinguishing between computer science, software engineering, and software development. Obviously they are not cleanly separated[1], but LLMs will affect each subfield very differently.
- The impact on computer science seems almost entirely negative so far: mostly the burden of academic wordslop, though an additional negative impact is AI sucking all the air out of the room. What's worse is how little interesting computer science has come out of the biggest technological development in computing in many years: in fact there has been a terrible and very sudden regression of scientific methodology and integrity, with people rationalizing unscientific thinking and unprofessional behavior by pointing to economic success. I think it'll take decades to undo the damage; it's ideological.
- The impact on software development actually does seem a bit positive. I am not really a software developer at all. It always felt too frustrating :) However the easing of frustration might be offset by widespread devastation of new FOSS projects. I don't want to put my code online, even though I'm not monetizing it. I'm certainly not alone. That makes me really sad. But I watched ChatGPT copy-paste about 200 lines of F# straight from my own GitHub, without attribution. I'm not letting OpenAI steal my code again.
- Software engineering... it does not seem like any of these systems are actually capable of real software engineering, but we are also being adversely affected by an epidemic of unscientific thinking. Speaking of: I would like to see Mythos autonomously attempt a task as complex and serious as a C compiler. Opus 4.6 totally failed (even if popular coverage didn't portray it as such):
>> The resulting compiler has nearly reached the limits of Opus’s abilities. I tried (hard!) to fix several of the above limitations but wasn’t fully successful. New features and bugfixes frequently broke existing functionality.
"Future of software engineering" folks should stuff like this in mind. What model is going to undo Mythos's mess? What if that mess is your company's product? Hope you know some very patient humans!
[1] They should have different educational tracks. There is no reason why a big fancy school like MIT can't have computer scientists do something like SICP and software engineers do the applied Python class. Forcing every computing professional into "computer science" is just silly; half the students gripe about how useless the theory is, the other half gripe about how grubby the practice is. What really sucks here is that I think Big Tech would support the idea; we're just stuck in a weird social rut.
I feel like LLMs[1] are going to cause a kind of "divorce" between those who love making software and those who love selling software. It was difficult for these two groups to communicate and coordinate before, and now it is _excruciating_. What little mutual tolerance and slack there was, is practically gone.
Open source was always[2] a fragile arrangement based on the kind of trust that involves looking at things through one's fingers (turning a blind eye may be more idiomatic in English), and we are at the point where you just have to either shut your eyes, or otherwise stop pretending that the situation can be salvaged at all.
Just a thought I had: some people think that LLM-shaming is déclassé, and maybe it is, but I think that perhaps we _should_ LLM-shame, at least until the AI companies train their LLMs to actually give attribution, if nothing else (I mean, if a model can memorize entire blocks of code, why can't it memorize where it saw that code? Wouldn't this potentially _improve_ the attribution situation, to levels better than even the pre-LLM era? Oh right, because plagiarism might actually be the product).
[1]: Not blaming the tech itself, but rather the people who choose to use it recklessly, and an industry that is based almost entirely on getting mega-corporations to buy startups that, against the odds, have acquired a decent number of happy-ish customers who can now be relentlessly locked in and up-sold to.
Yep. It's not the best intro to C, for sure. I learned C when I was a teenager, from K&R C second edition, and several Amiga-specific books ("Amiga C for Beginners", for one.)
I'm not reading that whole thing right now (that is a lot of text), but I read as far as "What Is a Variable?" and nothing stands out as bad or AI-written. What problems do you see?
Indeed! I read through a couple of paragraphs. Each begins with a bloated introduction in which every sentence repeats the same idea in different words. Lots of bullets repeating the same statement. That's exactly what an LLM scam looks like. The whole book is full of filler; it could be cut down by a factor of 5.
Being LLM slop in particular, this book cost way more of my time than it should have. It really does look superficially competent, until you realize that competence is paragraph-to-paragraph, not section-to-section.
The scam is in not stating up front "this was written by an LLM and I haven't read it." The dishonesty is claiming this book will teach you such-and-such when the author actually has no idea. It really is a scam. Even if he's not making anything directly, he's already earned 150 stars in the GitHub pseudoeconomy, plus good word-of-mouth from people who are lazy and thoughtless like he is, people who assumed that 4,500 pages about FreeBSD from a lead FreeBSD maintainer must be worth something and didn't bother to check if it was written by an LLM... even though it's 2026.
This book has negative value. It is actively destructive to FreeBSD, even if in the short term it boosts the author's public profile.
> This book has negative value. It is actively destructive to FreeBSD, even if in the short term it boosts the author's public profile.
I won't be that radical; the book still has value. There are many useful code samples with descriptions, and explanations of concepts I did not know before. But to get to them one has to dig through a forest of useless tokens. Someone has to pass it through an LLM and publish a distilled edition. :-)
First of all, this is an entire book; it's 76,000 words. But look at the first nontrivial example of C after "hello world," under "Bonus learning point about C return values".
This teaches nobody anything. I am sorry but this project is completely useless and there's no way Brandi read a single word of it. This entire book is a dishonest AI scam. I hate LLMs. It is hard to think of another computer technology that has done so much damage for so little good.
Edit: I mean look at the intro to for loops. This is supposed to be for total beginners. Example 1:
for (int i = 0; i < 10; i++) {
printf("%d\n", i);
}
>> Start at i = 0
>> Repeat while i < 10
>> Increment i each time by 1 (i++)
Example 2:
for (i = 0; n > 0 && i < IFLIB_MAX_RX_REFRESH; n--, i++) {
struct netmap_slot *slot = &ring->slot[nm_i];
uint64_t paddr;
void *addr = PNMB(na, slot, &paddr);
/* ... work per buffer ... */
nm_i = nm_next(nm_i, lim);
nic_i = nm_next(nic_i, lim);
}
>> What this loop does
>> * The driver is refilling receive buffers so the NIC can keep receiving packets.
>> * It processes buffers in batches: up to IFLIB_MAX_RX_REFRESH each time.
>> * i counts how many buffers we've handled in this batch.
>> * n is the total remaining buffers to refill; it decrements every iteration.
>> * For each buffer, the code grabs its slot, figures out the physical address, readies it for DMA, then advances the ring indices (nm_i, nic_i).
>> * The loop stops when either the batch is full (i hits the max) or there's nothing left to do (n == 0). The batch is then "published" to the NIC by the code right after the loop.
>> In essence, a for loop is the go-to choice when you have a clear limit on how many times something should run. It packages initialisation, condition checking, and iteration updates into a single, compact header, making the flow easy to follow.
Total garbage. This has literally zero educational value. I assume Brandi is just trying to make a quick buck; he truly has not even glanced at the output. He should be ashamed of himself.
The jump in complexity from "example 1" to "example 2" is quite high, introducing unexplained concepts (like pointers...). It is unsuitable for someone just learning the language; a gentler second example might look like the sketch below.
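Here is that sketch, my own (not from the book) and only a rough suggestion: still a plain counted for loop, but introducing exactly one new idea, walking an array with a pointer.

#include <stdio.h>

int main(void) {
    int values[5] = {3, 1, 4, 1, 5};
    int *p = values;            /* p points at the first element of values */

    for (int i = 0; i < 5; i++) {
        printf("%d\n", *p);     /* *p reads the element p currently points at */
        p++;                    /* advance the pointer to the next element */
    }
    return 0;
}

One new concept per example, each line explained, nothing driver-specific.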
It's FREE. No ads, nothing.
You are really anti-LLM and that's fine, but don't let yourself be totally blinded.
You yourself chose to spend time on something to your own frustration, even though at that point you already knew it wasn't for you, frustrating yourself further trying to find examples to help frustrate others too.
Look at how you are behaving, and then realise you are saying someone else should be ashamed of themselves.
If you disagree with the book, a simple excerpt and note would suffice. If it's "super clearly bad", it does not need to contain a load of emotion to transmit that message.
That's not what the parent comment meant. They meant checking the Lean-language definitions actually match the mathematical English ones, and that the Lean theorems match the ones in the paper. If that's true then you don't actually need to check the proofs. But you absolutely need to check the definitions, and you can't really do that without sufficient mathematical maturity.
Yes, and the child comment’s point is that formalizing the problem is likely easier than having the LLM verify that each step of a long deduction is correct, which is why Lean might be helpful.
But both of you are ignoring the parent comment! Actually you're ignoring the context of the thread.
Originally someone said "I wish I was math smart to know if [this vibe-mathematics proof] worked or not." They did NOT say "I'd like to check but I am too lazy." Suggesting "ask it to formalize it in Lean" is useless if you're not mathematically mature enough to understand the proof, since that means you're not mathematically mature enough to understand how to formalize the problem.
Then "likely easier" is a moot point. A Lean program you're not knowledgeable enough to sanity-check is precisely as useless as a math proof you're not knowledgeable enough to read.
It’s not useless, because you can, for example, ask multiple frontier models to do the formalization and see if they agree. And if they have surface-level differences in formalization, you can also ask them whether apparently-different definitions are equivalent.
This isn’t perfect of course - perhaps every single model is wrong. But you are too quick to declare that something isn’t useful for arriving at an answer. Reducing the surface area of what needs to be checked is good regardless.
The point of the article is that sometimes the "old ways" really means "not particularly profitable or necessary in the short term" but the bill comes due in a crisis. The reason US/EU manufacturing was "the old ways" is that people could make easier money with financial engineering, an insight that extended all the way to Raytheon.
COBOL is a bad example, but higher-level languages vs. assembly is not. If you write a lot of C you really don't need to know assembly.... until you stumble across a weird gcc bug and have no clue where to look. If you write a lot of C# you don't really need to know anything about C... until your app is unusably slow because you were fuzzy on the whole stack / heap concept. Likewise with high-level SSGs and design frameworks when you don't know HTML/CSS fundamentals.
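To illustrate the stack/heap point with my own rough sketch (not from the comment): the same loop can be cheap or slow depending on whether each element is a separate heap allocation. In C the difference is explicit, and the second pattern is roughly what accidental per-element boxing or allocation looks like in a managed language.

#include <stdio.h>
#include <stdlib.h>

#define N 1000000

int main(void) {
    /* Contiguous storage: one allocation, cache-friendly traversal. */
    int *flat = malloc(N * sizeof *flat);
    long sum_flat = 0;
    for (int i = 0; i < N; i++) flat[i] = i;
    for (int i = 0; i < N; i++) sum_flat += flat[i];

    /* One heap allocation per element: same arithmetic, but every read
       chases a pointer and every element paid for its own malloc. */
    int **boxed = malloc(N * sizeof *boxed);
    long sum_boxed = 0;
    for (int i = 0; i < N; i++) { boxed[i] = malloc(sizeof **boxed); *boxed[i] = i; }
    for (int i = 0; i < N; i++) sum_boxed += *boxed[i];

    printf("%ld %ld\n", sum_flat, sum_boxed);

    for (int i = 0; i < N; i++) free(boxed[i]);
    free(boxed);
    free(flat);
    return 0;
}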
As the author says maybe AI is different. But with manufacturing we were absolutely confusing "comfortable development" with "progress." In Ukraine the bill came due, and the EU was not actually able to manufacture weapons on schedule. So people really should have read to the end of "building a C compiler with a team of Claudes":
>> The resulting compiler has nearly reached the limits of Opus’s abilities. I tried (hard!) to fix several of the above limitations but wasn’t fully successful. New features and bugfixes frequently broke existing functionality.
> sometimes the "old ways" really means "not particularly profitable or necessary in the short term" but the bill comes due in a crisis.
yes of course. that's why I said
> If you REALLY need something long-forgotten, then you have to lazy-load it back into being at significant cost.
This is all known, because it's always been this way. You can't just hire a blacksmith; you need to first REMAKE the blacksmith if you really need one. It's always been this way, and it will continue. There is a cost to resurrecting old processes. This cost is a fact of life and needs to be planned for.
It cannot be avoided except by maintaining some kind of "strategic reserve" of thousands or millions people who sit around building things nobody wants on the off chance they might be needed again -- which a democracy will not long have the patience to continue paying for.
Many things we must know to do our jobs are themselves artifacts of historical decisions made in a time and place that no longer make sense, but we have to know them anyway.
Claude has allowed me to jettison many useless (IMO) skills I've developed over the years. I'm quite happy to let my bank of CSS and regex trivia expire from the cache, never to be reloaded again. I will never have to write another webpack.config.js as long as I live. So much time in programming is spent looking up SDK operations that I basically know, I just can't remember whether the dang method is called acquire_data() or load_data() .... etc
But these are hard IT things a human programmer really struggles with as well. What % of software written is that? Very, very low. Most software is dull and requires business vagueness to be translated into deterministic logic and interfaces; LLMs are pretty great at that as it is. If humans use their old ways to fix complex problems and LLMs do the rest, we still only need a handful of those humans. For now.
"For now" is sort of the entire point of the article :)
Even in the Before Times, it was much cognitively cheaper to write code than to read someone else's code closely, or to manage lots of independent code across a team, or to make a serious change to existing code. It's so much easier to just let everyone slap some slop on the pile and check off their user stories. I think it will take years to figure out exactly what the impact of LLMs on software is. But my hunch is that it'll do a lot of damage for incremental benefit.
With the sole exception of "LLMs are good at identifying C footguns," I have yet to see AI solve any real problems I've personally identified with the long-term development and maintenance of software. I only see them making things far worse in exchange for convenience. And I am not even slightly reassured by how often I've seen a GitHub project advertise thousands of test cases, then I read a sample of those test cases and 98% of them are either redundant or useless. Or the studies which suggest software engineers consistently overestimate the productivity benefits of AI, and psychologically are increasingly unable to handle manual programming. Or the chardet maintainer seemingly vibe-benchmarking his vibe-coded 7.0 rewrite when it was in reality a lot slower than the 6.0, and he's still digging through regression bugs. It feels like dozens of alarms are going off.
These are good points, and I am not overestimating; we are simply seeing the productivity boost in our company and the rise in profitability. We practice TDD, but only at the integration level, so we have tests up front for the API and frontend, and the AI writes until it works. SOTA models are simply good enough not to take a spec and test like
function add(a, b)   // spec: adds two numbers
test: add(1, 2) == 3

and implement it as

function add(a, b) { return 3; }
So when you have enough tests (and we do), it will deliver quality. Having AI write the tests is mostly useless. But me writing the code is not necessarily better and certainly not faster for most cases our clients bring us.