
AI will naturally draw people who are lazy and not interested in learning.

It's like flipping through a math book and nodding to yourself when you look at the answers and thinking you're learning. But really you aren't because the real learning requires actually doing it and solving and struggling through the problems yourself.


This is just completely inaccurate. There is more to learn now than ever before, and I find myself spending more and more time teaching myself things that I never before would have been able to find time to understand.

This is just completely inaccurate. There's the same amount of information available as before. It's not like LLMs provide you with information that isn't available anywhere else.

But I agree that it can serve as a tool for a person who is interested in learning, though I bet that for every such person there are ten times as many who are happy to outsource all their thinking to the machine.

We already have reports from basically every school in the world struggling with this exact problem. Students are just copy-pasting LLM output and not really learning.


As a principal engineer I feel completely let down. I've spent decades building up and accumulating expert knowledge and now that has been massively devalued. Any idiot can now prompt their way to the same software. I feel depressed and very unmotivated and expect to retire soon. Talk about a rug pull!

My experience is that people who weren't very good at writing software are the ones now "most excited" to "create" with a LLM.


Nah man. I understand the frustration, but this is a glass is half empty view.

You have decades of expert knowledge, which you can use to drive the LLMs in an expert way. That's where the value is. The industry or narrative might not have figured that out yet, but it's inevitable.

Garbage in, garbage out still very much applies in this new world.

And just to add, the key metric of good software hasn't changed, and won't change. It's not even about writing the code, the language, the style, the clever tricks. What really matters is how well the code performs one month after it goes live, then six months, then five years. This game is a long game. And not just how well the computer runs the code, but how well humans can work with the code.

Use your experience to generate value from the LLMs, because they aren't going to generate anything by themselves.


Glass half empty view? Their whole skill set, built up over decades, has been digitized, and now they have to shift everything they do, and who knows whether humans will even stay in the loop if they're not C-suite or brown nosers. Their whole magic and skill is now capable of being done by a PM in 5 minutes with some tokens. How is that supposed to make skillful coders feel?

Massive job cuts, bad job market, AI tools everywhere, probable bubble, it seems naive to be optimistic at this juncture.


The world changes. Time marches on, and the very skills you spend your time developing will inevitably expire in their usefulness. Things that were once marvelous talents are now campfire stories or punchlines.

LLMs may be accelerating the process, but they are definitely not the cause.

If you want a durable career in technology, you learn to adapt. Your primary skill is NOT mastery of a given technology, it is the ability to master any given technology. This is a university that has no graduation!


Is it though? If it were that universal, we'd employ the best programmers as plumbers, since they'd have the best ability to master plumbing technology. There are limits, and I think scoping the skill to mastering programming technologies is a reasonable limit.

If you're a great programmer, can you stop using Angular and master React? Yes. Can you stop telling the computer what to do, and master formal proof assistants? Maybe. Can you stop using the computer except as a tool and go master agricultural technology? Probably not. (Which is not to say you can't be a good programmer at an agritech company.)


The “this wrecked my industry” sob story is especially rich when the vast majority of tech workers’ ability to demand premium salaries comes directly from creating software that makes existing jobs obsolete.

Let’s talk about the industries the computer killed: travel agents, musicians, the entire film development industry, local newspapers built on classified ads, the encyclopedia industry, phone operators, projectionists, physical media industries, and a few dozen other random industries.

We aren’t special because we are coders. Creativity and engineering thoughtfulness will still exist even with LLMs, it will just take a different form.


Since I love programming, I feel pretty lucky I got to live and work in the only few decades in which it's economically viable to work as a computer programmer. At least "musician" had a longer run, but I guess we had it coming.

What exactly would people retrain into? The future these companies explicitly want is AI taking ALL the jobs. It's not like PMs are going to be any safer, or any other knowledge work. I see little evidence that AI is going to create new jobs, other than a breathless assurance that it "always happens".

No, retraining has been tested and found to be unfeasible. Even if you throw money at it.

> Their whole skill set

This is the fundamental problem with how so many people think about LLMs. By the time you get to Principal, you've usually developed a range of skills where actual coding represents like 10% of what you need to do to get your job done.

People very often underestimate the sheer amount of "soft" skills required to perform well at Staff+ levels that would require true AGI to automate.


Yeah well. That's what we've been doing to other industries over and over.

I remember a cinema projectionist telling me exactly that while I was wiring up the software controlling a digital projector that was replacing the 35mm ones.


If a principal doesn't have the skills to mentor juniors, plan and define architecture, review work and follow a good process, they really shouldn't be considered a principal. A domain expert? Perhaps. A domain expert should fear for their job but a principal should be well rounded, flexible, and more than capable of guiding AI tooling to a good outcome.

> Their whole magic and skill is now capable of being done by a PM in 5 minutes with some tokens.

[citation needed]

It has just merely moved from "almost, but not entirely useless" to "sometimes useful". The models themselves may perhaps be capable already, but they will need much better tooling than what's available today to get more useful than that, and since the people building these tools are AI enthusiasts who will happily let LLMs write the code for them, it will still take a while to get there :)


> It has just merely moved from "almost, but not entirely useless" to "sometimes useful"

[citation needed]

:P

This thing has changed the way I work. I barely touch my editor to actually edit anymore, because speaking into the chat field the changes I want it to make is more efficient.

The tooling does need to get better, yes, but anecdotally, I do a fundamentally different job (more thinking, less typing, less sifting through docs, less wiring up) than 3 months ago.

So much of my career was spent rummaging in docs, googling, and wiring things up. I believe that's the same for most of us.


I'm optimistic about people being able to build the things they always wanted to build but either didn't have the skills or resources to hire somebody who did.

If we truly value human creativity, then things that decrease the rote mechanical aspects of the job are enablers, not impediments.


If we truly value human creativity we should stop building technology that decreases human value in the eyes of the rich and powerful

Or stop measuring ourselves by our reflection in their eyes.

Society can interpret sociopathy as damage and route around it, if we do the work to make it happen. It will not happen by itself without effort.


> What really matters is how well does the code performs 1 month after it goes live, 6 months, 5 years.

After 40 years in this industry—I started at 10 and hit 50 this year—I’ve developed a low tolerance for architectural decay.

Last night, I used Claude to spin up a website editor. My baseline for this project was a minimal JavaScript UI I’ve been running that clocks in at a lean 2.7KB (https://ponder.joeldare.com). It’s fast, it’s stable, and I understand every line. But for this session, I opted for Node and neglected to include my usual "zero-framework" constraint in the prompt.

The result is a functional, working piece of software that is also a total disaster. It’s a 48KB bundle with 5 direct dependencies—which exploded into 89 total dependencies. In a world where we prioritize "velocity" over maintenance, this is the status quo. For me, it’s unacceptable.

If a simple editor requires 89 third-party packages to exist, it won't survive the 5-year test. I'm going back to basics.

I'll try again, but we NEED to expertly drive these tools, at least right now.
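Incidentally, the 89 figure is easy to reproduce from the lockfile itself: since npm v7, package-lock.json carries a "packages" map with one entry per installed package. A minimal Node sketch, using an illustrative lockfile fragment rather than the commenter's actual one:

```javascript
// Count installed packages from an npm v7+ package-lock.json.
// The "packages" map lists one entry per installed package; the "" key
// is the root project itself, so it is excluded from the count.
// (sampleLock is an illustrative fragment, not a real project's lockfile.)
const sampleLock = {
  name: "site-editor",
  lockfileVersion: 3,
  packages: {
    "": { dependencies: { express: "^4.18.0" } },
    "node_modules/express": { version: "4.18.2" },
    "node_modules/body-parser": { version: "1.20.1" },
    "node_modules/qs": { version: "6.11.0" },
  },
};

function countInstalledPackages(lock) {
  return Object.keys(lock.packages || {}).filter((key) => key !== "").length;
}

console.log(countInstalledPackages(sampleLock)); // 3 for this sample
```

Running the same count against a generated project's package-lock.json is one quick way to watch the dependency tree grow between sessions.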


I don't understand. You specifically:

> neglected to include my usual "zero-framework" constraint in the prompt

And then your complaint is that it included a bunch of dependencies?

AIs do what you tell them. I don't understand how you conclude:

> If a simple editor requires 89 third-party packages to exist

It obviously doesn't. Why even bother complaining about an AI's default choices when it's so trivial to change them just by asking?


My main point is that we need to expertly drive these tools. I forgot the trivial instruction and ended up with something that more closely resembles modern software instead of what I personally value. AI still requires our expertise to guide it. I'm not sure if that will be the case in a year, but it is today.

You seem intelligent, so it is probably confusing to many why you are posting this.

You call it a trivial instruction, but it is not trivial. It was a core requirement for your own design that you neglected to specify. This is no different from leaving out any other core requirement in an engineering specification.

Most people would NOT want this requirement. Meaning most people wouldn't care whether there are package dependencies or not, so the agent 100% did the right thing.


I always tell Claude, choose your own stack but no node_modules.

What's missing is another LLM dialog between you and Claude. One that figures out your priorities, your non-functional requirements, and instructs Claude appropriately.

We'll get there.
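In the meantime, a project-level instruction file can carry those non-functional requirements so they survive across prompts; Claude Code, for example, reads a CLAUDE.md from the project root. A hypothetical sketch (the specific constraints are illustrative):

```markdown
# Project constraints (illustrative CLAUDE.md)

## Non-functional requirements
- No runtime dependencies: vanilla JS only, no node_modules
- Keep the delivered bundle small (a few KB, not tens of KB)
- Optimize for a maintainer reading this code in 5 years, not for cleverness
```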


> What's missing is another LLM dialog between you and Claude. One that figures out your priorities, your non-functional requirements, and instructs Claude appropriately.

There are already spec frameworks that do precisely this. I've been using BMAD for planning and speccing out something fairly elaborate, and it's been a blast.


Yes, I think this is reasonable.

I have been consistently skeptical of LLM coding but the latest batch of models seems to have crossed some threshold. Just like everyone, I've been reading lots of news about LLMs. A week ago I decided to give Claude a serious try - use it as the main tool for my current work, with a thought out context file, planning etc. The results are impressive, it took about four hours to do a non-trivial refactor I had wanted but would have needed a few days to complete myself. A simpler feature where I'd need an hour of mostly mechanical work got completed in ten minutes by Claude.

But, I was keeping a close eye on Claude's plan and gradual changes. On several occasions I corrected the model because it was going to do something too complicated, or neglected a corner case that might occur, or other such issues that need actual technical skill to spot.

Sure, now a PM whose only skills are PowerPoint and office politics can create a product demo, change the output formatting in a real program and so on. But the PM has no technical understanding and can't even prompt well, let alone guide the LLM as it makes a wrong choice.

Technical experts should be in as much demand as ever, once the delirious "nobody will need to touch code ever again" phase gives way to a realistic understanding that LLMs, like every other tool, work much better in expert hands. The bigger question to me is how new experts are going to appear. If nobody's hiring junior devs because LLMs can do junior work faster and cheaper, how is anyone going to become an expert?


> I have been consistently skeptical of LLM coding but the latest batch of models seems to have crossed some threshold.

It’s refreshing to hear I’m not the only one who feels this way. I went from using almost none of my Copilot quota to burning through half of it in 3 days after switching to Sonnet 4.6. I’m about to have to start lobbying for more tokens or buy my own subscription, because it’s just that much more useful now.


Yes, it's Sonnet 4.6 for me as well as the most impressive inflection point. I've found Anthropic's models to be the best; even before this, Sonnet 3.7 was the only model that produced reasonable results for me, and now Sonnet 4.6 is genuinely useful. It seems to have resolved Claude's tendency to "fix" test failures by changing tests to expect the current output, it does a good job planning features, and I've been impressed by this model also telling me not to do things. For example, it would say: we can save 50 lines of code in this module, but the resulting code would be much harder to read, so it's better not to. Previous models in my experience all suffered from constantly wanting to make more changes, and more, and more.

I'm still not ready to sing praises about how awesome LLMs are, but after two years of incremental improvements since the first ChatGPT release, I feel these late-2025 models are the first substantial qualitative improvement.


^ Big this. If we take a pessimistic attitude, we're done for.

I think the key metric of good software has really changed: the bar has noticeably dropped.

I see unreliable software like openclaw explode in popularity while a Director of Alignment at Meta publicly shares how it shredded her inbox while continuing to use openclaw [1], because that's still good enough innit? I see much buggier releases from macOS & Windows. The biggest military in the world is insisting on getting rid of any existing safeguards and limitations on its AI use and is reportedly using Claude to pick bombing targets [2] in a bombing campaign that we know has made mistakes hitting hospitals [3] and a school [4]. AI-generated slop now floods social networks with high popularity and engagement.

It's a known effect that economies of scale lower average quality but create massive abundance. There never really was a fundamental quality bar to software or creative work; it just has to be barely better than not existing, and that bar is lower than you might imagine.

[1] https://x.com/summeryue0/status/2025774069124399363

[2] https://archive.ph/bDTxE

[3] https://www.reuters.com/world/middle-east/who-says-has-it-ha...

[4] https://www.nbcnews.com/world/iran/iran-school-strike-us-mil...


[flagged]


Is this a bot? I feel like HN is dying (for me at least) with all the em-dashes and the "it's not just X, it's Z".

This is correct. Had lunch with a senior staff engineer going for a promo to principal soon. He explained he was early to CC, became way more productive than his peers, and got the staff promo. Now he’s not sharing how he uses the agent so he maintains his lead over his peers.

This is so clearly a losing strategy. So clearly not even staff level performance let alone principal level.


Why the downvotes? It is the defining characteristic of the staff+ level to empower others. Individual contributions don’t matter at this level.

Hi Grok, nice comment!

> Any idiot can now prompt their way to the same software.

I must say I find this idea, and this wording, elitist in a negative way.

I don't see any fundamental problem with democratization of abilities and removal of gatekeeping.

Chances are, you were able to accumulate your expert knowledge only because:

- book writing and authorship was democratized away from the church and academia

- web content publication and production were democratized away from academia and corporations

- OSes/software/software libraries were all democratized away from corporations through open-source projects

- computer hardware was democratized away from corporations and universities

Each of the above must have cost some gatekeepers some revenue and opportunities. You were not really an idiot just because you benefited from any of them. Analogously, when someone else benefits at some cost to you, that doesn't make them an idiot either.


This is technically true in a lot of ways, but it's also intellectualizing and not engaging with what the comment was expressing. It's legitimately very frustrating to have something you enjoy democratized and to feel like things are changing.

It would be like if you put in all this time to get fit and skilled on mountain bikes and there was a whole community of people, quiet nature, yada yada, and then suddenly they just changed the rules and anyone with a dirt bike could go on the same trails.

It's double damage for anyone who isn't close to retirement and built their career and invested time (i.e. opportunity cost) into something that might become a lot less valuable and then they are fearful for future economic issues.

I enjoy using LLMs and have stopped writing code, but I also don't pretend that change isn't painful.


The change is indeed painful to many of us, including me. I, too, am a software engineer. LLMs and vibe coding create some insecurity in my mind as well.

However, our personal emotions need not turn into disparaging others' use of the same skills for their satisfaction / welfare / security.

Additionally, our personal emotions need not color the objective analysis of a social phenomenon.

Those two principles are the rationales behind my reply.


I appreciate that rationale, I also see the importance of those two principles and I think there's a lot of value there.

I suppose I see "any idiot" as a more general phrase, like "idiot proof", not directly meaning that anyone who uses a LLM is an idiot. However I can also see how it would be seen as disparaging.

Also, while there are a lot of examples of people entrenching in a certain behavior or status and causing problems, I also think society is a bit harsh on people who struggle with change. For people who are less predisposed to be OK with change, it feels like a lot of the time the response is "just deal with it and don't be selfish, this new XYZ is better for society overall".

Society is pretty much made up of personal emotions on some level. I don't think we should go around attacking people, but very few things can be considered truly objective in the world of societal analysis.


> I don't see any fundamental problem with democratization of abilities and removal of gatekeeping.

This parroted argument is getting really tired. It signals either astroturfing or someone who just accepts what they are sold without thinking.

LLMs aren’t “democratising” anything. There’s no democracy in being mostly beholden to a few companies which own the largest and most powerful models, who can cut you off at any time, jack up the prices to inaccessibility, or unilaterally change the terms of the deal.

You know what’s truly “democratic” and without “gatekeeping”? Exactly what we had before, an internet run by collaboration filled with free resources for anyone keen enough to learn.


Dismissing someone with a different opinion as astroturfing is not productive.

There are loads of high performance open source LLMs on the market that compete with the big 3. I have not seen this level of community engagement and collaboration since the open-source boom 20 years ago.


If I believed it was a different opinion I wouldn’t even have written the first paragraph, or maybe the whole reply.

The issue arises from it not being that person’s opinion but a talking point. People didn’t all individually arrive at this “democratisation” argument by themselves, they were sold what to say by the big players with vested interest in succeeding.

I’m very much for discussing thoughts one has come up with themselves, especially if they disagree with mine. But what is not productive is arguing with a proxy.

> I have not seen this level of community engagement and collaboration

Nor this level of spam and bad submissions.


> It signals either astroturfing or someone who just accepts what they are sold without thinking.

> Nor this level of spam and bad submissions.

Your comments seem pretty aggressive for what you’re replying to. Maybe take a beat to assess your biases? I thought the main comment was pretty fair and sensible, yet somehow you landed on calling them a spammer/bad submitter/astroturfer/non-thinker. Maybe they are? I could be wrong, but that's quite a strong reaction for what they asserted at face value. Not really trying to police anything here, I just thought the initial comment had merit and this devolved quite quickly.


You misunderstood. Spamming and bad submissions has nothing to do with the original comment.

You're overthinking it.

Programming is a tricky skill and takes a long time to get good at. Lots of people aren't good at it. AI helps them program anyway, and allows them to sometimes produce useful programs. That's it.

It's not a talking point. It's just the reality of what the technology enables, and it's a simple enough observation that millions of people can independently arrive at that conclusion, and some of them might even refer to it as "democratization".


> Programming is a tricky skill and takes a long time to get good at. Lots of people aren't good at it.

This is a good thing. It's a filter for the careless, lazy, and incompetent. LLMs are to programming what a microwave is to food. I'm not a chef because I can nuke a hot pocket. "Vibe coders" (not AI-assisted coding) are the programming equivalent of the people on Kitchen Nightmares. Go figure, it's a community rife with narcissism, too.


It is a fair note when there are a lot of people with a monetary incentive to hype up a certain piece of technology. And as gp correctly points out: "democratizing" is most commonly used in a very hostile and underhanded manner.

It is what we are talking about, hence not "counterproductive".


> LLMs aren’t “democratising” anything.

They absolutely are. Anytime new knowledge or skills become widely available to everyone, that's a term used for it.

> There’s no democracy in being mostly beholden to a few companies which own the largest and most powerful models, who can cut you off at any time, jack up the prices to inaccessibility, or unilaterally change the terms of the deal.

None of that has anything to do with anything. There's competition between companies to keep prices low and accessibility high.

I think you are simply misunderstanding the word "democratic". It isn't just political. From MW:

> 3 : relating, appealing, or available to the broad masses of the people : designed for or liked by most people

Here, it's specifically about making things available to the broad masses of the people that wasn't before.

This isn't a matter of opinion. It's just the meaning of the word.


Those things were already available. That’s the point. What do you think LLMs are trained on?

> here’s no democracy in being mostly beholden to a few companies which own the largest and most powerful models, who can cut you off at any time, jack up the prices to inaccessibility, or unilaterally change the terms of the deal.

That would not happen, simply because those companies' interests will never be entirely aligned. There are at least three state-of-the-art models at the moment, plus many open-weight models. Anthropic vs. the Pentagon is exactly what would play out.

And what precedent is there? Don't say Google, because search is alive and well.

> You know what’s truly “democratic” and without “gatekeeping”? Exactly what we had before, an internet run by collaboration filled with free resources for anyone keen enough to learn.

We have way more free resources at the moment. Name anything you'd like to learn, someone will be able to point you to a relevant resource. There are also better ways of surfacing that resource.

> This parroted argument

Most arguments here on HN have been discussed ad nauseam, for or against AI. It's only "parroted" (or biased) if it's against your own beliefs.


I agree completely: the "democratizing programming" angle is being overplayed by AI vendors as if they are doing community service, and HN commenters use it like a trump card in an argument.

Everyone already had the option to write any code, fork any open source project, publish any of their code, and run any of their code, but suddenly AI appears and THAT is what makes it democratic? What was undemocratic about it? Is a world where idiots run AI agents that publish smear campaigns or harass maintainers for not accepting their slop really the democratic future you wish for?

How many job positions do you see today that want a backend developer? A frontend developer? Not many, because now everyone is expected to be at least full stack, if not devops as well. The exact same thing is playing out right now with AI: people are expected to produce 5x the amount of code as before, and if you don't, someone else who is willing to will take your job.

Already bloated programs will bloat further. They will require even more resources to run, you will have to pay even more for hardware, they will be slower and less responsive, and you will have to pay yet another monthly fee to big tech for their AIs. And people will happily do it and pat themselves on the back that we democratized programming, while running towards a future where nobody will be able to own hardware capable of general computing.


> ...I haven't yet tried the big local ones, because how would that be better? I'm still paying to big tech to run it, just in a different way

Why blame big tech when they're just providing a service at a fair cost (3rd party inference is incredibly cheap)? I'm not sure how that makes sense.


I removed this line because people will get hung up on it and not see the forest for the trees.

> There’s no democracy in being mostly beholden to a few companies which own the largest and most powerful models, who can cut you off at any time, jack up the prices to inaccessibility, or unilaterally change the terms of the deal.

LOL. Maybe you are referring to OpenAI and Anthropic? Yes, they have Codex and Opus. But about 1-2 months behind them are Grok and Gemini, and 2-3 months behind them are all the other models available in Cursor, from Chinese open source models to Composer, etc.

How can you possibly use this "big company takes everything away" narrative, when you can probably use models for free that are about 2 months behind the best models? This is probably the most decentralised tech boom ever.

(I mean, OpenAI is in such a bad state that I wouldn't be surprised if they lose almost their entire lead and user base within 6-12 months and end up basically at the level of small Chinese LLM developers.)


> I don't see any fundamental problem with democratization of abilities and removal of gatekeeping.

It was very democratized before, almost anyone could pick up a book or learn these skills on the internet.

Opportunity was democratized for a very long time, all that was needed was the desire to put in the work.

OP sounds frustrated, but at the same time the societal promise that was working for the longest time (spend personal time specializing and be rewarded) has been broken, so I can understand that frustration.


I'm mad about Ozempic. For years I toiled, eating healthy foods while other people stuffed their faces with pizza and cheese burgers. Everybody had the opportunity to be thin like me, but they didn't take that and earn it like me. So now instead of being happy about their new good fortune and salvaged health, I'm bitter and think society has somehow betrayed me and canceled promises.

/s, obviously I would hope except I've actually seen this sentiment expressed seriously.


I would rather see regulations fixing the incentives that create this problem (why does healthy food cost so much more than processed food?) than a bandaid like Ozempic that 2/3 of people can't quit (hello, another hidden subscription service) without regaining their weight.

The produce aisle has the cheapest food in the whole store. Inb4 you cite the price of some fancy imported vegetable as your excuse for eating pizza every night.

I can only speak from my own experience but if you want to have a healthy diet (enough protein and calories) where I'm from it costs a lot more than just buying cheap junk food. Well, the proteins cost.

People are obese because they eat at restaurants, eat junk food, and drink sugary or high carb liquids.

They are not obese because they cannot afford the necessary amounts of protein and calories from healthy sources in the grocery store.


You have a point and I agree now that my point on things being expensive was the wrong one. The problem is that junk food is so much easier to get than healthy food.

Healthy food costs time if you want tasty food.

If you train yourself to expect the highs of unhealthy food with excess carbs, sat fats, and salt, which is what restaurants, junk food, and high carb liquids have, then no healthy food is going to be tasty enough.

If you eschew those highs and settle for some sprouted moong bean salad with a little bit of salt/lime/black pepper, or hummus and veggies, or eggs with some smashed avocado on toast, tofu and some broccoli, etc, then it does not cost much time.

There is no baking involved, just cutting, mixing, blending, and maybe soaking. Sautéing or pan frying in a little bit of olive oil or canola oil is also quick.


And no lol, I eat very healthy and mainly cook my own vegetarian food, had junk food last time maybe a month ago.

> why does healthy food cost so much more than processed food?

It doesn’t.


> why does healthy food cost so much more than processed food?

It doesn't. Carbs like rice, potatoes, etc. are incredibly cheap. Protein like ground beef and basic cuts of chicken is not expensive. And broccoli, carrots, green peppers, apples -- these are not exactly breaking the bank. Produce is seasonal, so you vary what you buy according to what is cheapest this week.

Meanwhile, stuff like breakfast cereal and potato chips and Oreo cookies actually are surprisingly expensive.


> Carbs like rice, potatoes, etc. are incredibly cheap.

Eating too many carbs is not a healthy diet, dude.


It's the regulations and subsidies that created this very situation in the first place (in the USA, at least). Twinkies are cheap because we literally pay farmers to grow cheap carbs and sugar. It was designed this way. Well, lobbied this way.

I can believe that unfortunately. Good regulation is hard to do without lobbyists getting what they want at the expense of people.

> why does healthy food cost so much more than processed food?

It does not. Legumes, whole grains, vegetables, and yogurt have always been cheaper than processed food.

People prefer eating carbohydrates and saturated fats.


Is the result the only thing that matters, or does the journey have its place as well?

Is there a price to be paid for getting any desired result imaginable, without effort, at the press of a button?


Yeah, exactly. For the longest time those of us who were self-taught and/or started late were looked down upon. Before that, same with corporate vs. open source. This is the same elitist and gatekeeping mentality. If LLM coding tools help people finally get ideas out of their heads, then more power to them! If others want to yak shave and do more serious, intellectual kinds of programming and exploration, more power to them!

It goes past software though. That's just the common ground we share on here. A lifetime ago I was a sound engineer, and knew how to mic up a rock band. I've since forgotten it all, but I was at a buddy's practice space and the opportunity came up to mic their setup. So I dredged up decades-old memories, then took a photo and sent it to ChatGPT, which has read every book on sound engineering and mic placement, and every web forum open to the public where someone dropped some knowledge out there on the Internet for free. So, damned if it didn't come up with some good suggestions! I wish I could say it only made wrong and stupid suggestions. A lot about mic placement is subjective, but in telling it the kind of sound we were after, it was able to tell us which direction to go to get warmer or harsher.

So it's not just software that's coming to an end, everything else is as well. But billionaires' wives will still need haircuts (women billionaires will also need haircuts), so hairdresser will be the last profession.


I remember the cosmetology department on the other side of the tech school I went to was a common target of mockery on the "tech" side. Life as a hairdresser isn't always easy, but it's a real skill. And unlike computer touching, it requires certification.

> "removal of gatekeeping"

Gates were put in place for lawyers, doctors, and engineers (real ones, not software "engineers") because the cost of their negligence and malpractice was ruined lives and death. Gatekeeping has value.

Software quality, reliability, and security was already lousy before the advent of LLMs, making it increasingly clear that the gate needed to be kept. Gripes about "gatekeeping" are a dogwhistle for "I would personally benefit from the bar being lowered even further".


The argument for lawyers, doctors and "real" engineers seems like a strawman here.

This discussion is specifically about lowering the barriers of programming and creating using software.

I haven't said anything at all about other professions nor do I think my arguments for democratizing software creation apply to law, medicine, or "real" engineering.

There's also a false equivalence in the software part of your comment. It equates lowering of barriers for recreational/hobby coding with software engineering for serious purposes.

Since you dismiss me as a dogwhistle, I hope my terming your argument as elitist, strawmanish, and full of false equivalences is only seen as fair.


So you put these all in the same category: gaining knowledge, gaining abilities, and just obtaining things.

I gatekeep my bike, I keep it behind a gate. If you break the gate open and democratize my bike, you're an idiot.


I'm not sure how you're getting that from their post? None of the four things mentioned (book publishing, web publishing, open-source software, computer hardware) involve stealing someone's property, he's saying that the ability to produce those things widened and the cost went down massively, so more people were able to gain access to them. Nobody stole your bike, but the bike patents expired and a bunch of bike factories popped up, so now everyone can get a cheap bike.

I did have misgivings about saying that because I'm from the old "information wants to be free" school. But the subject was idiocy, and the point isn't to say that the bike was stolen, but that the bike-taker didn't do anything clever, or have much of a learning experience.

Maybe it's of value that any idiot can do this, but we're still idiots.


it is more like:

You gatekeep your bike, you keep it behind a gate, you don't let anyone else ride it.

Your neighbor got a nicer bike for Christmas, rode it by your house and now you are sad because you aren't the special kid with the bike any more, you are just regular kid like your neighbor.


No, both bikes are owned by a $trillion corporation who collects a monthly rent.

Yeah, if you studied and mastered all of the various disciplines required for fabricating a bicycle, and then fabricated your own by hand and offered to do likewise for others, sometimes in exchange for compensation, sometimes for free (provided others could use the bike), only for some machine that mass produces bikes to (informal) spec that was built by studying all of the designs you used for the bikes you made to suddenly become widely and cheaply available.

Yeah, and that machine that replaces you was built by engineers.

We can dish it out, but we can’t take it.


Jesus that's brutal. Accurate. But I feel attacked ;p

Using physical analogs for virtual things is not the best choice. For example: would you give a copy of your bike, or a copy of your food, to your poor neighbor kid if you could copy it as easily and as cheaply as digital products?

Actually he would be very wise, for he then has a bike and can ride it or sell it for money. You have to learn capitalist thinking to succeed in this economy.

While I can see your point, I also think it is not directly relevant to OP. Firstly, I don't think OP meant that people are idiots for using LLMs; it was just a way of saying that skill is no longer required, so even idiots can do something that used to require high skill.

As for the comparisons - some are partly comparable to the current situation, but there are some differences as well. Sure, books and online content enabled others to join, thereby reducing the "moat" for those who built careers on esoteric knowledge. But it didn't make things _that_ easy - it still required years of invested time to become a good developer. Also, it happened very gradually, and while the developer pie was growing and the range of tech was growing, developers who kept on top of technology (like OP did) could still be valuable. Of course, no one knows fully how it will play out this time around; maybe the pie will get even bigger, maybe there's still room for lots of developers and the only difference is that the tedious work is done. Sure, then it is comparable. But let's be honest, this has a very real chance of being different (humans inventing AI surely is something special!) and could result in skill-sets collapsing in value in record time. And perhaps worse, without opening new doors. Sure, new types of jobs may appear, but they may be so different that they are essentially completely different careers. It is not like in the past, when you just needed to learn a new programming language.


The real litmus test is whether one would allow LLMs to determine a medical procedure without human check. As of 2026, I wouldn’t. In the same sense I prefer to work with engineers with tons of experience rather than fresh graduates using LLMs

People actually value the effort and dedication required to master a craft. Imagine we invent a drug that allows everyone to achieve olympic level athletic performance, would you say that it "democratises" sports? No, that would be ridiculous.

It does technically democratize the exhilarating experiences of that level of performance. Likely also democratizes negative aspects like injuries, extreme dieting, jealousy, neglecting relationships.

That said, if we zoom out and review such paradigm shifts over history, we find that they usually result in some new social contracts and value systems.

Both good expert writers and poor novice writers have been able to publish non-fiction books for a few centuries now. But society still doesn't perceive them as the same at all. A value system is still prevalent, estimated primarily from the writing itself. This is regardless of any other qualifications/disqualifications of authors based on education / experience / nationality / profession etc.

At the individual level too, just because book publishing is easy doesn't mean most people want to spend their time doing that. After some initial excitement, people will go do whatever are their main interests. Some may integrate these democratized skills into their main interests.

In my opinion, this historical pattern will turn out to be true with the superdrug as well as vibe coding.

Some new value will be seen in the swimming or running itself - maybe technique or additional training over and above the drug's benefits.

Some new value will be discovered in the code itself - maybe conceptual clarity, algorithmic novelty, structural cleanliness, readability, succinctness, etc. Those values will become the new foundations for future gatekeeping.


>Some new value will be discovered in the code itself - maybe conceptual clarity, algorithmic novelty, structural cleanliness, readability, succinctness, etc. Those values will become the new foundations for future gatekeeping.

It's a nice idea, but I feel like that's only going to be the case for very small companies or open source projects. Or places that pride themselves on not using AI. Artisan code I call it.

At my company the prevailing thought is that code will only be written by AI in the future. Even if today that's not the case, they feel it's inevitable. I'm skeptical of this given the performance of AI currently. But their main point is, if the code solves the business requirements, passes tests and performs at an adequate level, it's as good as any hand written code. So the value of readable, succinct, novel code is completely lost on them. And I fear this will be the case all over the tech sector.

I'm hopeful for a bit of an anti-AI movement where people do value human created things more than AI created things. I'll never buy AI art, music, TV or film.


The exhilarating experience is a byproduct of the effort it took to obtain. Replace drug with exoskeleton or machine, my point is the same. The way you democratise stuff like this is removing barriers to skill development so that everyone can learn a craft, skill, train their bodies etc.

But I do agree, if everyone can build software then the allure of it along with the value will be lost. Vibe coding is only a superpower as long as you're one of the select few doing it. Although I imagine it will continue to become a niche thing, anyone who thinks everyone and their grandma will be vibing bespoke software is out to lunch.

Personally I think there is a certain je ne sais quoi about creating software that cannot be distilled to some mechanical construct, in the same way it exists for art, music, etc. So beyond assembly line programming, there will always be a human involved in the loop and that will be a differentiating factor.


It would democratize sports, while making sports worthless and unremarkable. It would collapse the market for sports.

> would you say that it "democratises" sports

Given how I've seen a lot of AI "artists" describe themselves and "their" works, yeah, probably a lot of them would.


Coding is one of the least gatekept things in history. Literally the only obstacle is "do I want to put in the time to learn it". All Claude is doing is remixing all the free stuff that was already a Google search away.

I suggest trying to find out how the things you're taking for granted - time, literacy, books - remain barriers for many people in many parts of the world.

LOL, you sound like people "democratizing" finance with crypto.

I'm sure that's why they're investing 1 trillion in AI, for the poor illiterates.


Elitism is good. Elitism is just. There is absolutely nothing wrong with elitism.

A skill-based one, of course.


Democratizing? A handful of companies harvesting data and building products on top of it is democratizing?

Open research papers that everyone can access democratize knowledge. Accessible worldwide courses, maybe (like open universities).

But LLMs are not quite the same. This is taking knowledge from everyone and, in the best case, paywalling it.

I agree in spirit that the original comment was classist, but in this context your statements are also out of place, in my opinion.


This is a good response. Progress has always been resisted by incumbents

Exactly. How ridiculous. The world doesn’t owe ‘principal engineers’ shit. I hate to work with people like this.

—- from a ‘principal engineer’


how are 2-3 centralized providers of this new technology "democratization"?

It's _relatively_ democratic when compared to these counterfactual gatekeeping scenarios:

- What if these centralized providers had restricted their LLMs to a small set of corporations / nations / qualified individuals?

- What if Google that invented the core transformer architecture had kept the research paper to themselves instead of openly publishing it?

- What if the universities / corporations, who had worked on concepts like the attention mechanism so essential for Google's paper, had instead gatekept it to themselves?

- What if the base models, recipes, datasets, and frameworks for training our own LLMs had never been open-sourced and published by Meta/Alibaba/DeepSeek/Mistral/many more?


> - What if Google that invented the core transformer architecture had kept the research paper to themselves instead of openly publishing it?

I'm pretty sure that someone else would have come around the corner with a similar idea some time later, because the fundamentals of this stuff were already discussed decades before the "Attention Is All You Need" paper; the novel thing they did was combining existing know-how into a new idea and making it public. A couple of the ingredients of the base research are decades old (interestingly, back then some European universities were leading the field).


> I'm pretty sure that someone else would have come around the corner with a similar idea some time later, because the fundamentals of these stuff were already discussed decases before

I am not trying to be dismissive, but this could apply to all research ever


That's true! I meant not "accidentally at some point in the future" but more "relatively close together on the timeline".

There are lots of open weight models

You're right! And cars when they were invented didn't give increased mobility to millions of people, because they came from just a few manufacturers.

Cell phones made communication easier for exactly zero people even though billions have been sold. Why? Because they come from just a few different companies.


Cars are a great analogy because they made mobility significantly worse for people who can’t afford them or refuse to use them for ethical reasons.

Those people are the exception that proves the rule.

And cars now have become privacy nightmares, which we are now beholden to

I will tell that to the billions of people who are walking and biking around at this very moment. Just give me some time.

I said worse, not impossible. Because of cars, I have to take longer routes and put myself in danger while walking.

cars, up to relatively recently, have been pure hardware machines that, when a consumer buys one, they can own. Now that's starting to change. Let's see how "democratized" cars are when the manufacturers can hard-lock "owners" out of them.

Similar story to cell phones.

LLMs are in this state right out the gate.


> elitist in a negative way.

It's funny you say that, because I've seen plenty of the reverse elitism from "AI bros" on HN, saying things like:

> Now that I no longer write code, I can focus on the engineering

or

> In my experience, it's the mediocre developers that are more attached to the physical act of writing code, instead of focusing on the engineering

As if getting further and further away from the instructions that the CPU or GPU actually execute is more, not less, a form of engineering, instead of something else, maybe respectable in its own way, but still different, like architecture.

It's akin to someone claiming that they're not only still a legitimate novelist for using ChatGPT or a legitimate illustrator for using stable diffusion, but that delegating the actual details of the arrangement of words into sentences or layers and shapes of pigment in an image, actually makes them more of a novelist or artist, than those who don't.


Yes, both are forms of elitism.

Yeah, and one is at least plausibly justifiable (though still potentially unfounded), while the other is absurd on its face.

> My experience is that people who weren't very good at writing software are the ones now "most excited" to "create" with a LLM.

I've been a tech lead for years and have written business critical code many times. I don't ever want to go back to writing code. I am feeling supremely empowered to go 100x faster. My contribution is still judgement, taste, architecture, etc. And the models will keep getting better. And as a result, I'll want to (and be able to) do even more.

I also absolutely LOVE that non-programmers have access to this stuff now too. I am always in favor of tools that democratize abilities.

Any "idiot" can build their own software tailored to how their brains think, without having to assemble gobs of money to hire expensive software people. Most of them were never going to hire a programmer anyway. Those ideas would've died in their heads.


> I also absolutely LOVE that non-programmers have access to this stuff now too. I am always in favor of tools that democratize abilities.

Programming was already “democratized” in the sense that anyone could learn to program for free, using only open-source software. Making everyone reliant on a few evil megacorporations is the opposite of democratization.


You know what they mean by that term, it's about building things without needing to put in the learning effort. I have bosses building small POCs via vibe coding, something they would not have done via learning to code and typing it manually.

It's the same sort of argument artists use when it comes to AI generated media, there obviously is a qualitative difference in the people now able to generate whatever they want versus needing to draw something by hand, so saying "they could've just learned to draw themselves" is not very convincing. People don't want to do that yet still get an output, and I see nothing wrong with that, and if you do, it's just another sort of gatekeeping, that the "proper" way is to learn it by hand.

Lastly, many, many open weight models exist.


> I have bosses building small POCs via vibe coding

They are not building anything; they are paying Anthropic to do it.

> Lastly, many, many open weight models exist.

Good luck running them when the megacorporations have bought out all hardware.


Okay, they are paying Anthropic, I don't really care about the semantics when at the end of the day they get an output they didn't get before.

Open weight models run on consumer hardware, if you think corporations will make every single piece of hardware unaffordable then I'm not sure what to tell you.


What you bring to the table might be fine, but how long do you think you'll find employers willing to still pay for this?

One thing is for sure: LLMs will bring down the cost of software per some unit and increase the volume.

But... cost = revenue. What is a cost to one party is revenue to another party. That revenue is what pays salaries.

So when software costs go down, the revenues will go down too. When revenues go down, layoffs will happen, salary cuts will happen.

This is not fictional. Markets already reacted to this and many software service companies took a hit.


If AI completely erases the profession of software developer, I'll find something else to do. Like I can't in good faith ever oppose a technology just because it's going to make my job redundant, that would be insane.

Take that to its extreme. Suppose there was a technology that you do not own that would make everyone's job redundant. Everyone out of a job. There is no need for education, for skills to be mastered, for expertise. Would it still be insane to complain?

Then society needs to collectively decide how to allocate resources. Uh oh!

The owners of the AI companies will collectively decide how to allocate resources, rather.

the resources go to the guys with the AI duh

Isn't that what old-school software did for many years? It used to take jobs, just not from developers. If you implement software that takes accounting from 10 people to 2, 8 just got fired. If you have a support solution helping one support rep answer 100 requests instead of 20, you just optimised the support force at a rate of 1 to 5.

I'm in the SaaS boat myself, but I sense a bit of dishonesty from senior devs complaining about technology stealing jobs. When it was them doing the stealing, it was fine. Now that the tables have turned, suddenly technology is bad.


Jevons' paradox still exists. Making X cheaper (usually by needing fewer people to do one unit of X) can and often does lead to more people being needed for X.

You still need education, skills to be mastered and expertise even in a world without jobs. How would you play any game or sport without skills?

A world where there is no need for work? Oh no, my steak is too juicy and my lobster is too buttery.

A world where there is no need for workers--not at all the same thing.

You may not end up with a seat at the table.


There are bigger issues if everyone is out of a job.

take that to the absolute extreme. Why do we even need jobs? If all our physical needs are met, maybe humanity can finally focus on real problems (spiritual, mental, interpersonal) that no amount of "jobs" can solve...

Because greedy capitalists control the world which means that most people's most basic needs aren't met if they don't have a job.

I believe that if situation gets that bad, then we will actually do some new kind of revolution, even in the West.

There may not be a job for you in an office setting. What would you do?

That's when the problem shifts from individual to systemic, and only systemic solutions fix systemic problems.

I think that a what a lot of anti-AI folks are trying to argue without saying it explicitly is that it already is a systemic problem. They're not necessarily against the technology on its own, but against the systemic problems it would introduce if society doesn't take a stance against it.

I'd buy some good gloves and steel-toed boots.

I don't have an answer for this, and won't pretend to.

But my take on this is that accountability will still be a purely human factor. It still is. I recently let go of a contractor who was hired to run our projects as a Scrum/PM, and his tickets were so bad (there were tickets with three words in them; one ticket in the current sprint was blocked by a ticket deep in the backlog; basic stuff). When I confronted him about them, he said the AI generated them.

So I told him that:

1. That's not an excuse, his job is to verify what it generated and ensure it's still good.

2. That actually makes it look WORSE: not only did he do nearly zero work, he didn't even check the most basic outputs. And I'm not anti-AI; I expressly said that we should absolutely use AI tools to accelerate our work. But that's not what happened here.

So you won't get to say (at least I think for another few years) "my AI was at fault" – you are ultimately responsible, not your tools. So people will still want to delegate those things down the chain. But ultimately they'll have to delegate to fewer people.


In general I agree. But it’s somehow very unlikely for the AI to generate a three word ticket. That’s what humans do. AI might generate an overly verbose and specific ticket instead.

What drives that behavior is what I like to call human slop :)

>What you bring to the table might be fine, but how long do you think you'll find employers willing to still pay for this?

I'm assuming that the software factory of the future is going to need Millwrights https://en.wikipedia.org/wiki/Millwright

But, builders are builders. These tools turn ideas into things, a builders dream.


Just sold a house/moved out after being laid off in mid-January from a govt IT contractor (I was there for 8 great years, mostly remote). I started my UX research, design, and front-end web coding career in 2009, but now I think it's almost a stupid, go-nowhere, vanishing career, thanks to AI.

I think much like you that AI is destroying and will just continue to destroy the economy! At least I got to sell a house and make a profit -- I'll stash it away for when the big AI market crash happens (hopefully not a 2030 great depression, though). In a down market, buying stocks, bitcoin, and houses is always cheaper.


Any given system will still need people around to steer the AI and ensure the thing gets built and maintained responsibly. I'm working on a small team of in-house devs at a financial company, and not worried about my future at all. As an IC I'm providing more value than ever, and the backlog of potential projects is still basically endless- why would anyone want to fire me?

Why would it need people to steer the AI? I can easily see a future where companies that don't rely on the physical world (like manufacturing) are completely autonomous, just machines making money for their owner.

It's easy to imagine but there's still a vast amount of innovation and development that has to happen before something like that becomes realistic. At that point the whole system of capitalism would need to be reconsidered. Not going to happen in the foreseeable future.

Yours is a naive view. Learn a bit about engineering and feedback control and you realize that the world is too complex for that.

> why would anyone want to fire me?

Because they can hire some "prompt engineer" to "steer the AI" for $30-50k instead of $150-$250k.


The difference between having a non-technical person and someone who is capable of understanding the code being generated and the systems running it is immense, and will continue to be so over the foreseeable future.

Just because somebody has a bunch of power tools doesn't mean I'd ask them to build my house.

Anyone that only costs $30k-50k would either be doing this part-time, or have some limit that prevented them from earning $150k-250k.

Or not living in US?

"One thing is for sure LLMs will bring down down the cost of software per some unit and increase the volume.

But..cost = revenue."

That is Karl Marx's Labor theory of value that has been completely disproven.

You don't charge what it costs to build something, you charge the maximum the customer is willing to pay.


The price is determined by SUPPLY and demand, and being able to write software quickly using LLMS would move the supply curve.

Congrats - you caused me to create an account to reply, due to the sheer density of your incorrectness.

- First, the LTV was not Marx's idea. Adam Smith held the same view, as did many many others during this era. Marx refined this idea, but there's nothing about your point that is unique to his version of it.

- Second, while LTV is not widely used today, this is not because it was "completely disproven" (can you cite anything to back this claim up?). It is because economics shifted to a different paradigm based on marginal utility. These two frameworks operate at different levels of abstraction and address different aspects of the price of goods. There is actually empirical evidence of a correlation between the cost of a good and the cost of the labour, at an aggregate level.

- Third, Marx explicitly differentiated between _value_ and _price_. LTV deals with value exclusively (in other words, what happens when externalities impacting price are accounted for). He would have had no issue accepting that externalities impacting supply and demand would impact price.

The final irony of your comment is that the commenter's claim that you are incorrectly analysing is actually also fully defensible under your (presumably) neoclassical view of economics. In competitive markets, reduced production costs lead to reduced equilibrium prices as competitors undercut each other. The proposition that in the long run, under competition, price tends toward cost is a standard result in microeconomics. The idea that "you charge the maximum the customer is willing to pay" only holds without qualification in monopoly or monopolistic competition with strong differentiation, which are precisely the conditions that increased software supply would erode.


Efficient markets barely exist anywhere, especially in tech, it's all monopolistic competition that's bad for the consumer and increases inequality.

> I also absolutely LOVE that non-programmers have access to this stuff now too. I am always in favor of tools that democratize abilities.

Here's the other edge of that sword. A couple back-end devs in my department vibe-coded up a standard AI-tailwind front-end of their vision of revamping our entire platform at once, which is completely at odds with the modular approach that most of the team wants to take, and would involve building out a whole system based around one concrete app and 4 vaporware future maybe apps.

And of course the higher-ups are like “But this is halfway done! With AI we can build things in 2 weeks that used to six months! Let’s just build everything now!” Nevermind that we don’t even have the requirements now, and nailing those down is the hardest part of the whole project. But the higher-ups never live through that grind.


This scenario is not new with AI at all, though. 14 years ago I watched a group of 3 front-end devs spin up a proof of concept in ember.js that had a flashy front end, all fake data, and demo it to execs. They wowed the execs, and every time the execs asked "how long would it take to fix (blank) to actually show (blank)?" the devs hit F12, inspected the element, typed in what was asked for, and said "already done!"

It was missing years of backend work and had maybe 1/20th feature parity with what we already had, and it would have, in hindsight, been literally impossible to implement some of the things we would need in the future if we had gone down that path. But the execs were amazed by this flashy new thing that devs made in a weekend, something that looked great but was actually a disaster.

I fail to see how this is any different than what people are complaining about with vibe coded LLM stuff a decade and a half later now? This was always being done and will continue to be done; it's not a new problem.


The difference is now anyone can spin up a vibe-coded site that wows execs.

It re-emphasizes the question of importance. Would a user accept their data needing an AI implementation of a ("manual") migration and their flow completely changing? Does reliability for existing users even matter in the company's plans?

If it isn't a product that needs to solve problems reliably over time, then it was kind of silly to use a DBA who cost twice what the backend engineer did and only handled the data niche. We progressed from there or regressed from there, depending on why we are developing software.


The models will not keep getting better. We have passed "peak LLM" already, by my estimate. Some of the parlour tricks that are wrapped around the models will make some incremental improvements, but the underlying models are done. More data and more parameters are no longer going to do anything.

AI will have to take a different direction.


This is really interesting to me; I have the opposite belief.

My worry is that any idiot can prompt themselves to _bad_ software, and the differentiator is in having the right experience to prompt to _good_ software (which I believe is also possible!). As a very seasoned engineer, I don't feel personally rugpulled by LLM generated code in any way; I feel that it's a huge force multiplier for me.

Where my concern about LLM generated software comes in is much more existential: how do we train people who know the difference between bad software and good software in the future? What I've seen is a pattern where experienced engineers are excellent at steering AI to make themselves multiples more effective, and junior engineers are replacing their previous sloppy output with ten times their previous sloppy output.

For short-sighted management, this is all desirable since the sloppy output looks nice in the short term, and overall, many organizations strategically think they are pointed in the right direction doing this and are happy to downsize while blaming "AI." And, for places where this never really mattered (like "make my small business landing page"), this is a complete upheaval, without a doubt.

My concern is basically: what will we do long term to get people from one end to another without the organic learning process that comes from having sloppy output curated and improved with a human touch by more senior engineers, and without an economic structure which allows "junior" engineers to subsidize themselves with low-end work while they learn? I worry greatly that in 5-10 years many organizations will end up with 10x larger balls of "legacy" garbage and 10x fewer knowledgeable people to fix it. For an experienced engineer I actually think this is a great career outlook and I can't understand the rug pull take at all; I think that today's strong and experienced engineer will command a great deal of money and prestige in five years as the bottom drops out of software. From a "global outcomes" perspective this seems terrible, though, and I'm not quite sure what the solution is.


>For short-sighted management, this is all desirable since the sloppy output looks nice in the short term

It was a sobering moment for me when I sat down to look at the places I have worked for over my career of 20-odd years. The correlation between high-quality code and economic performance was not just non-existent, it was almost negative. As in: whenever I have worked at a place where engineering felt like a true priority, tech debt was well managed, principles followed, that place was not making any money.

I am not saying that this is a general rule, of course there are many places that perform well and have solid engineering. But what I am saying is that this short-sighted management might not be acting as irrationally as we prefer to think.


I generally agree; for most organizations the product is the value and as long as the product gives some semblance of functionality, improving along any technical axis is a cost. Organizations that spend too much on engineering principles usually aren’t as successful since the investment just isn’t worth it.

But, I have definitely seen failure due to persistent technical mistakes, as well, especially when combined with human factors. There’s a particularly deep spiral that comes from “our technical leadership made poor choices or left, we don’t know what to invest in strategically so we keep spending money on attempted refactors, reorgs, or rewrites that don’t add more value, and now nobody can fix or maintain the core product and customers are noticing;” I think that at least two companies I’ve worked at have had this spiral materially affect their stock price.

I think that generative coding can both help and hurt along this axis, but by and large I have not seen LLMs be promising at this kind of executive function (ie - “our aging codebase is getting hard to maintain, what do we need to do to ensure that it doesn’t erode our ability to compete”).


My guesses are

1. We'll train the LLMs not to make sloppy code.

2. We'll come up with better techniques for building guardrails to help

Making up examples:

* right now, lots of people code with no tests. LLMs do better with tests. So, training LLMs to make new and better tests.

* right now, many things are left untested because it's work to build the infrastructure to test them. Now we have LLMs to help us build that infrastructure, so we can use it to make better tests for LLMs.

* ...?
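To make the "tests as guardrails" point concrete, here is a minimal sketch (the function and names are hypothetical, not from any real codebase): a pinned-down unit test means an LLM-generated refactor cannot silently change behavior without failing CI.

```rust
// Hypothetical example: a small function whose behavior is pinned down
// by a test, so a generated refactor that changes the semantics fails
// the test suite instead of shipping.
pub fn normalize_email(raw: &str) -> String {
    // Strip surrounding whitespace and lowercase the address.
    raw.trim().to_lowercase()
}

#[cfg(test)]
mod tests {
    use super::*;

    // The guardrail: any rewrite of normalize_email must still pass this.
    #[test]
    fn normalization_is_stable() {
        assert_eq!(normalize_email("  Alice@Example.COM "), "alice@example.com");
    }
}
```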


* better languages and formal verification. If an LLM codes in Rust, there’s a class of bugs that just can’t happen. I imagine we can develop languages with built-in guardrails that would’ve been too tedious for humans to use.

ChatGPT came out a little over 3 years ago. After 5-10 more years of similar progress I doubt any humans will be required to clean up the messes created by today’s agents.
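One concrete (and assumed, not from the comment) illustration of the "class of bugs that just can't happen" point: Rust's `Option` makes a missing value part of the type, so "forgot the null check" is a compile error rather than a runtime crash.

```rust
// A lookup that can fail returns Option; absence is visible in the type.
fn find_user(id: u32) -> Option<&'static str> {
    match id {
        1 => Some("alice"),
        2 => Some("bob"),
        _ => None,
    }
}

// The compiler forces both cases to be handled before the value is used;
// there is no way to dereference a "null" user by accident.
fn describe_user(id: u32) -> String {
    match find_user(id) {
        Some(name) => format!("found {name}"),
        None => "no such user".to_string(),
    }
}
```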

Good software, bad software, and working software.

> Any idiot can now prompt their way to the same software.

No, they can't. I use Claude Code and AMP a lot, and yet, unless I pay attention, they easily generate bad code, introduce regressions while trying to fix bugs, and get stuck on suboptimal ideas. Modularity is usually terrible; 50-year-old ideas like cohesion and coupling are, by the very nature of it, mostly ignored except in the most formal, rigid ways of mimicry introduced by post-training.

Coding agents are wonderful tools, but people who think they can create and maintain complex systems by themselves are not using them in an optimal way. They are being lazy, or they lack software engineering knowledge and can't see the issues, in which case they should be using the time saved by coding agents to read hard stuff and elevate their technique.


> Any idiot can now prompt their way to the same software.

It may look the same, but it isn't the same.

In fact if you took the time to truly learn how to do pure agentic coding (not vibe coding) you would realize as a principal engineer you have an advantage over engineers with less experience.

The more war stories, the more generalist experience, the more you can help shape the llm to make really good code and while retaining control of every line.

This is an unprecedented opportunity for experienced devs to use their hard-won experience to level themselves up to the equivalent of a full team of Google devs.


> while retaining control of every line

What I want when I'm coding, especially on open source side projects, is to retain copyright licensing over every line (cleanly, without lying about anything).

Whoops!


Hmm. TIL: The real exposure isn't Anthropic or OpenAI claiming your code, it's you unknowingly distributing someone else's GPL code because the model silently reproduced it, with essentially zero recourse against the model owner.

It depends on your plan, but Google[1] and Anthropic[2] at least provide indemnity against this. Haven't checked the others. Still not a situation you want to find yourself in, though.

[1] https://cloud.google.com/blog/products/ai-machine-learning/p...

[2] https://www.anthropic.com/news/expanded-legal-protections-ap...


I wonder why people still believe in intellectual property, it's a concept that has long since lived past its usefulness, especially technologically.

A free license, like the BSD, if followed, ensures that the unpaid creator of a free work is at least credited. Everyone using that work at the source code level sees the copyright notice with that author's name. The author has already given everyone the freedom to do anything with the code, except for plagiarism. AI is taking away the last thing from people who have shared everything else.

Why is plagiarism an issue? In school it's an issue because students won't learn well if they just copy everything, but outside of school, and especially for personal use, why should I care if I "plagiarize" or not (and arguably AI doesn't even plagiarize, as it's not a 1-to-1 copy-paste of the code when making a new project)? The concept of plagiarism is as much a fiction as "intellectual" property. The only sort of property that actually exists is real and tangible.

For morons who have never created anything and just steal, theft is not an issue. Of course.

Creators who are ripped off care. IP is more logical than land ownership, since new things have been created, whereas no one created the land. Land is just stolen and defended.


Ah yes, "morons." You don't need to make a new account just to reply to a comment you dislike and know you will soon get flagged anyway.

> Why is plagiarism an issue?

For starters, because of the western values of giving credit.

We have diseases named after people, never mind inventions and ideas.

Plagiarism is kick-out-of-school grade academic misconduct, whereby you are pretending that someone's work (and the ability it implies) is your own.

> The only sort of property that actually exists is real and tangible.

Remember, I'm talking about works that are free to redistribute, use and even modify. Or in other cases, that the users to whom a compiled work is distributed have access to the buildable source code.

The authors put their names on it, and terms which says that their notices are to be preserved when copies are made.

This isn't good enough for the Altmans and Amodeis of the world.

> it's an issue due to the effect that students won't learn well if they just copy everything

... and fraudulently obtain professional licensing, and use that to cause harm: medical malpractice, unsafe engineering.

It is fraud.


None of what you said shows how it's an issue, beyond "it just is." Doctors, for example, "plagiarize" all the time, copying standardized diagnostic protocols, clinical notes from previous visits, and peer-reviewed treatment plans. The risk is in the information actually being wrong rather than in the lack of "original" expression (which might even be worse, where they try some "novel" treatment and end up killing the patient). There is no fraud involved in the effects of plagiarism, which is, again, a completely fictional issue.

I am also not sure why you keep bringing up Altman et al, I really don't give a shit what they are talking about, that is not what I am discussing. You for some reason keep trying to inject your views on these people when they are not relevant to the points I made which are about the theoretical concepts of machine learning and training, and its intersection with intellectual property. I am not interested in your opinions on these people, and they are not the only ones who stand to benefit from democratization of AI models and publishing of weights for the public.

Anyway, I think we both fundamentally have different views on the freedom of information and the fallacious nature of IP that cannot be changed online so I will bid you a good day and won't continue this conversation further, as I don't think it's productive for either of us.


Because IP democratizes returns on the creative process.

Maybe it used to, but with companies like Disney lengthening copyright terms way beyond the original intention, and corporations patenting absurd things, it seems to be more a way to entrench power than any sort of democratization. I'm glad generative AI seems to be bypassing all this and actually democratizing returns on the creative process, by flagrantly violating the concept of IP.

In the case of BSD-like licenses, IP is applied in a way that discourages plagiarism, while giving all the practical freedoms to the users, including making proprietary products.

In the case of copyleft licenses like GPL, IP is applied in a way to ensure that users have the code.

These things are taken away when the code is laundered through AI.


Again, start talking to people outside the field of programming and ask them how they like it when their labor of passion is "democratized" by AI turning it into unattributable slurry.

I don't really care how they like it because it's not up to them how I use the tools I want to use. It's literally the same argument photographers faced 100 years ago and in another 100 years I guarantee no one will be talking about AI in the terms you are today.

Even today, in 2026, it is possible to use photography in ways that infringe copyright! You literally cannot just snap your shutter over anything whatsoever and call it yours!

No one started photographing paintings and declaring them free to use. If they did the lawsuits would leave a huge impact crater.

Photography started displacing painting as a form of portraiture, but displacing a technique is not the same thing as appropriating the work itself.


I don't see any issues with "appropriating" a work, especially if it's not a one-to-one copy, which AI does not produce (without some pretzel-level prompting), and especially with regards to visual media (what even is appropriation in this case? Your example of photographers taking images of paintings is not the same as how AI training occurs). In other words, training is and should be free and fair use.

> training is and should be free and fair use.

Of course the AI robber barons would have it be so, but it must not be and should not be.

Training gobbles up works in their entirety, verbatim.

Fair use of the verbatim words of a written work requires the excerpt to be small.

Fair use also usually requires attribution, which is missing.

Transformative works like parodies are also fair use, but the LLM isn't transformative in this sense; it's strawman-transformative, like a meat grinder.

Parodies use the structure of something existing as a vehicle for original thought, which is why they are protected from copyright claims by the authors of whatever is parodied.


Again, IP is an outdated concept in this day and age. In all honesty, there shouldn't even be a notion of fair use; any transformative work should be allowed. There is nothing about LLM training that isn't transformative, just as grinding meat from a steak into stuffed sausages transforms it.

I'm not even talking about big corporations with proprietary models; in fact I oppose their not being open source or open weight. I want more open models, not fewer, as that at least democratizes the value of LLMs. The worst case is copyright hawks enabling regulatory capture by big AI corps by pushing regulations about licensing content, which, of course, no open model company will be able to afford in the future. I find that infinitely worse than having more lax copyright laws, since in that world only a few corporations can tell you what to think via their LLMs.

Lastly, no one can tell me from first principles why LLM training is bad, on the copyright side, other than, it just is, because copyright law dictates it so. Perhaps copyright law is what needs to be abolished, not LLMs.


"Transformative" has a specific meaning under the fair use doctrine. You can't just Rot13 or gzip someone's novel and call that transformative.

> Perhaps copyright law is what needs to be abolished, not LLMs.

Sure, now that it's inconvenient for some billionaires --- who themselves have nothing to protect, because everything they offer is a service the user can only access through the network, while they have a subscription.


I'm talking about the concept of transformation, not the specific legal language, which, again, I said is not worth discussing, because the legal concept of intellectual property is not useful.

No, not just now, since forever. I suppose Stallman being right all along is about this concept. And just to be clear, I'm not a supporter of current closed source AI companies, like I said I want to see open models succeed.

As I asked above, it really does look like no one can explain why LLM training is bad, besides saying it's bad. Therefore I will continue to reject IP as a concept.


Obviously, since you reject IP, presumably you would be okay with copying and pasting code out of some GNU program into your own program, without attribution, and then, if you feel like it, releasing that program under the least restrictive terms possible (as close to the public domain as you could practically get away with).

So discussions revolving around doing so less directly, through training a model, just add distracting details that don't matter.

If everyone did that (due to there not being any rules against that), then fewer people would write programs under free licenses. Many such developers are volunteers, whose only payment is that the work product is theirs to license how they want.

Having that taken away from us is discouraging.

We haven't done anything to deserve such a "fuck you".


I’m with you here.

I grew up without a mentor, and my understanding of software stalled at certain points. When I couldn't get a particular OS API to work, Google and Stack Overflow didn't exist, and I had no one around me to ask. I wrote programs for years by just working around it.

After decades writing software I have done my best to be a mentor to those new to the field. My specialty is the ability to help people understand the technology they’re using, I’ve helped juniors understand and fix linker errors, engineers understand ARP poisoning, high school kids debug their robots. I’ve really enjoyed giving back.

But today, pretty much anyone except a middle schooler could type their problems into ChatGPT and get a more direct answer than I would be able to give. No one particularly needs mentorship as long as they know how to use an LLM correctly.


Today every single software engineer has an extremely smart and experienced mentor available to them 24/7. They don't have to meet them for coffee once a month to ask basic questions.

That said, I still feel strongly about mentorship though. It's just that you can spend your quality time with the busy person on higher-level things, like relationship building, rather than more basic questions.


How would this affect future generations of ... well anyone, when they have 24/7 access to extremely smart mentor who will find solution to pretty much any problem they might face?

You can't just offload all the hard things to the AI and let your brain waste away. There's a reason the brain is equated to a muscle - you have to actively use it to grow it (not physically in size, obviously).


I agree with you about using our brains. I honestly have no idea.

But I can tell you that, just like with most things in life, this is yet another area where we are increasingly getting to do just the things we WANT to do (like think about code or features and have it appear, pixel pushing, smoothing out the actual UX, porting to faster languages) and not have to do things most people don't want to do, like drudgery (writing tests, formatting code, refactoring manually, updating documentation, manually moving tickets around like a caveman). Or to use a non tech example, having to spend hours fixing word document formatting.

So we're getting more spoiled. For example, kids have never waited for a table at a restaurant for more than 20 mins (which most people used to do all the time before abundant food delivery or reservation systems). Not that we ever enjoyed it, but learning to be bored, learning to not just get instant gratification is something that's happening all over in life.

Now it's happening even with work. So I honestly don't know how it'll affect society.


Just because you have every instruction manual doesn't mean you can follow and perform the steps or have time to or can adapt to a real world situation.

"No one particularly needs mentorship as long as they know how to use an LLM correctly."

The "as long as they know how..." is doing a lot of work there.

I expect developers with mentors who help give them the grounding they need to ask questions will get there a whole lot faster than developers without.


I have this feeling as well. At one point I thought when I got older it might be nice to teach - Steve Wozniak apparently does. But, it doesn't feel like I can really add much. Students have infinite teachers on youtube, and now they have Gemini/Claude/ChatGPT which are amazing. Sure, today, maybe, I could see myself as mostly a chaperone in some class to once in a while help a student out with some issue but that possibility seems like it will be gone in 1 to 2 years.

> Any idiot can now prompt their way to the same software.

No they can't. They think they can, but they will still need to put in the elbow grease to get it done right.

But, in my case (also decades of experience), I have had to reconcile with the fact that I'll need to put down the quill pen, and learn to use a typewriter. The creativity, ideas, and obsession with Quality are still all mine, but the execution is something that I can delegate.


This.

LLMs don't always produce correct code - sometimes it's subtly wrong and it takes an expert to notice the mistake(s).


As an idiot, I am very aware that Claude can help me, but also very aware I am not an experienced SWE and continue to seek out their views.

Short answer: use your expertise on complex projects.

Story: I've been a dev for about 20 years. The first time I had exactly the same feeling was when desktop UI was fading away in favor of HTML. I missed the beauty of C# WinForms controls with all their alignment and properties. My experience felt irrelevant. ASP.NET (a framework which was sold as "the web for backend developers") looked like an evil joke.

The next time it happened was with the rise of the cloud. Were all my lovingly crafted bash scripts and notes about Unix commands irrelevant? This time, however, it was not as personal for me.

Next time: the fall of Scala as the primary language in big data and its replacement with Python. This time it was pretty routine.

Oh, and databases... how many times have I heard that the RDBMS is obsolete and everybody should use Mongo/Redis/ClickHouse?

So learn new things and carry on. Understanding how "obsolete" things work helps a lot in avoiding silly mistakes, especially in situations where the world is literally reinventing the wheel.


It's not black/white. There are scales of complexity and innovation, and at the moment the LLMs are mostly good (with obvious caveats) at helping with the lower end of the complexity scale, and arguably almost nowhere on the innovation scale.

If, as a principal engineer, you were performing basic work that can easily be replicated by an LLM, then you were wasted and mistasked.

Firstly, high-end engineers should be working on the hard work underlying advances in operating systems, compilers, databases, etc. Claude currently couldn't write competitive versions of Linux, GCC (as recently demonstrated), BigQuery, or Postgres.

Secondly, and probably more importantly, LLMs are good at doing work in fields already discovered and demonstrated by humans, but there's little evidence of them being able to make intuitive or innovative leaps forwards. (You can't just prompt Claude to "create a super-intelligent general AI"). To see the need for advances (in almost any field) and to make the leaps of innovation or understanding needed to achieve those advances still takes smart (+/- experienced) humans in 2026. And it's humans, not LLMs, that will make LLMs (or whatever comes after) better.

Thought experiment: imagine training a version of Claude, only all information (history, myriad research, tutorials, YouTube takes and videos, code for v1, v2, etc.) related to LLMs is removed from the training data. Then take that version and prompt it to create an LLM. What would happen?


I'm surprised that as a principal engineer, you view your greatest skill set as your expertise in programming. While that is certainly an enormous asset, I have never met a principal engineer that hadn't also mastered how to work within the organization to align the right resources to achieve big goals. Working with execs and line managers and engineers directly to bring people together to chase something complex and difficult: that skill is not going to be replaced by LLMs and remains extremely valuable.

I'm not a principal, but I would wonder: if AI increases every "coder's" productivity, say, 5x doesn't that replace some teams with 1 person, meaning less "alignment" necessary? Some whole org layers may disappear. Soft skills become less relevant when there are fewer people to interface with.

Even regarding "chase something complex and difficult", there are currently only so many needs for that, so I think any given person is justified fearing they won't be picked. It may be several years between AI eating all the CRUD work from principal down, and when it expands the next generation of complex work on robotics or whatever.

Also, to speak on something I'm even less qualified – the economy feels weak, so I don't have a lot of hope for either businesses or entrepreneurs to say "Let's just start new lines of business now that one person can do what used to take a whole team." The businesses are going to pocket the safe extra profits, and too many entrepreneurs are not going to find a foothold regardless how fast they can code.


I echo another reply here, if anything my experience coding feels even more valuable now.

It was never about writing the code—anyone can do that, students in college, junior engineers…

Experience is being able to recognize crap code when you see it, recognizing blind alleys long before days or weeks are invested heading down them. Creating an elegant API, a well structured (and well-organized) framework… Keeping it as simple as possible that just gets the job done. Designing the code-base in a way that anticipates expansion…

I've never felt the least bit threatened by LLMs.

Now if management sees it differently and experienced engineers are losing their jobs to LLMs, that's a tragedy. (Myself, I just retired a few years ago so I confess to no longer having a dog I this race.)


Sorry for the dumb question but how could you feel threatened by LLMs if you retired just a few years ago? Considering the hype started somewhere in 2022-2023.

You're right, as I say, I no longer have skin in the game.

Retired, I have continued to code, and have used Claude to vibe code a number of projects—initially I did so out of curiosity as to how good LLMs are, and then to handle things like SwiftUI that I am hesitant to have to learn.

It's true then that I am not in a position of employment where I have to consider a performance review, pleasing my boss or impressing my coworkers. I don't doubt that would color my perception.

But speaking as someone who has used LLMs to code, while they impress me, again, I don't feel the threat. As others have pointed out in past threads here on HN, on blogs, LLMs feel like junior engineers. To be sure they have a lot of "facts" but they seem to lack… (thinking of a good word) insight? Foresight?

And this too is how I felt as I was aging out of my career and watched clever junior engineers come on board. The newness, like Swift, was easy for them. (They no doubt have rushed headlong into SwiftUI and have mastered it.) Never, though, did I feel threatened by them.

The career itself, I have found, does in fact care little for "grey beards". I felt by age 50 I was being kind of… disregarded by the younger engineers. (It was too bad, I thought, because I had hoped that on my way out of the profession I might act more as mentor than coder. C'est la vie!)

But for all the new engineer's energy and eagerness, I was comfortable instead with my own sense of confidence and clarity that came from just having been around the block a few times.

Feel free to disregard my thoughts on LLMs and the degree to which they are threatening the industry. They may well be an existential threat. But, with junior engineers as also a kind of foil, I can only say that I still feel there is value in my experience and I don't disparage it.


and they only got really good like last December.

How would you suggest someone who just started their career moves ahead to build that “taste” for lean and elegant solutions? I am onboarding fresh grads onto my team and I see a tendency towards blindly implementing LLM-generated code. I always tell people they are responsible for the code they push, so they should always research every line of code, their imported frameworks, and generated solutions. They should be able to explain their choices (or the LLM’s). But I still fail to see how I can help people become this “new” brand of developer. Would be very happy to hear your thoughts or how other people are planning to tackle this. Thanks!

My "taste" (like perhaps all other "tastes") comes from experience. Cliche, I know.

When you have had to tackle dozens of frameworks/libraries/API over the years, you get to where you find you like this one, dislike that one.

Get/Set, Get/Set… The symmetry is good…

Calling convention is to pass a dictionary: all the params are keys. Extensible, sure, but not very self-documenting, kind of baroque?

An API that is almost entirely call-backs. Hard to wrap your head around, but seems to be pretty flexible… How better to write a parser API anyway?

(You get the idea.)
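As a concrete sketch of the dictionary-vs-typed contrast above (the functions and parameter names here are hypothetical, made up for illustration):

```rust
use std::collections::HashMap;

// Dictionary-style API: extensible, but not self-documenting, and a
// typo in a key only shows up at runtime.
fn connect_dyn(params: &HashMap<&str, &str>) -> String {
    let host = params.get("host").copied().unwrap_or("localhost");
    format!("connecting to {host}")
}

// Typed API: every parameter is named and checked at compile time,
// at the cost of a little extensibility.
struct ConnectParams {
    host: String,
    retries: u32,
}

fn connect_typed(p: &ConnectParams) -> String {
    format!("connecting to {} with {} retries", p.host, p.retries)
}
```

Neither style is always right; noticing which trade-off fits the situation is exactly the kind of "taste" that comes from having lived with both.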

And as you design apps/frameworks yourself, then have to go through several cycles of adding features, refactoring, you start to think differently about structuring apps/frameworks that make the inevitable future work easier. Perhaps you break the features of a monolithic app into libraries/services…

None of this is novel; it's just that doing enough of it (putting in the sweat and hours, screwing up a number of times) is where "taste" (insight?) comes from.

It's no different from anything else.

Perhaps the best way to accelerate the above though is to give a junior dev ownership of an app (or if that is too big of a bite, then a piece of a thing).

"We need an image cache," you say to them. And then it's theirs.

They whiteboard it, they prototype it, they write it, they fix the bugs, they maintain it, they extend it. If they have to rewrite it a few times over the course of its lifetime (until it moves into maintenance mode), that's fine. It's exactly how they'll learn.

But it takes time.


This answer probably feels unsatisfying and I agree. But some things actually need repetition and ongoing effort. One of my favorite quotes is from Ira Glass about this very topic.

> Nobody tells this to people who are beginners, and I really wish somebody had told this to me.

> All of us who do creative work, we get into it because we have good taste. But it's like there is this gap. For the first couple years that you're making stuff, what you're making isn't so good. It’s not that great. It’s trying to be good, it has ambition to be good, but it’s not that good.

> But your taste, the thing that got you into the game, is still killer. And your taste is good enough that you can tell that what you're making is kind of a disappointment to you. A lot of people never get past that phase. They quit.

> Everybody I know who does interesting, creative work they went through years where they had really good taste and they could tell that what they were making wasn't as good as they wanted it to be. They knew it fell short. Everybody goes through that.

> And if you are just starting out or if you are still in this phase, you gotta know its normal and the most important thing you can do is do a lot of work. Do a huge volume of work. Put yourself on a deadline so that every week or every month you know you're going to finish one story. It is only by going through a volume of work that you're going to catch up and close that gap. And the work you're making will be as good as your ambitions.

> I took longer to figure out how to do this than anyone I’ve ever met. It takes awhile. It’s gonna take you a while. It’s normal to take a while. You just have to fight your way through that.

> —Ira Glass


As a Principal SWE, who has done his fair share of big stuff.

I'm excited to work with AI. Why? Because it magnifies the thing I do well: Make technical decisions. Coding is ONE place I do that, but architecture, debugging etc. All use that same skill. Making good technical decisions.

And if you can make good choices, AI is a MEGA force multiplier. You just have to be willing to let go of the reins a hair.


As a self-teaching beginner* this is where I find AI a bit limiting. When I ask ChatGPT questions about code, it is always quick to offer up a solution, but it often provides inappropriate responses that don't take into account the full context of a project/task. While it understands what good structure and architecture are, it's missing the awareness to apply good design and architecture to the questions I have, and I don't have the experience or skill set to ask those questions. It often suggests solutions (I tend to ask it for suggestions rather than full code, so I can work it out myself) that may have drawbacks that I only discover down the line.

Any suggestions to overcome this deficit in design experience? My best guess is to read some texts on code design or alternatively get a job at a place to learn design in practice. Mainly learning javascript and web app development at the moment.

*Who has had a career in a previous field, and doesn't necessarily think that learning programming will lead to another career (and is okay with that).


I can't easily summarize ~40 years of programming experience (30+ professional).

I can tell you: Your problems are a layer higher than you think.

Coding, architecture, etc. get the face time. But process and discipline are where the money is made and lost with AI.

To give a minor example: my first attempt at a major project with AI failed HORRIBLY. But I stepped back and figured out why. What shortcomings did my approach have? What shortcomings did the AI have? Root cause analysis.

Next day I sat down with the AI and developed a PLAN of what to do. Yes, a day spent on a plan.

Then we executed the plan (or it did, and I kept it on track and fixed problems in the plan as things happened). On the third day I'd completed a VERY complex task. I mean STUPIDLY complex: something where I knew WHAT I wanted to do, and roughly how, but not the exact details, and not at the level needed to implement it. I'm sure 1-2 weeks of research could have taught me. Or I could let the AI do it.

... And that formed my style of working with AI.

If you need a mentor pop in the Svalboard discord, and join #sval-dev. You should be able to figure out who I am.


I'm in the hard disagree camp. I'm heading towards late 60s now, and have been writing software for all of my working life.

I am wondering how your conclusions are so different from mine. One possibility is you only write "in the small" [0]. LLMs are at least as good as a human at turning out "web grade" software in the small. Claude CLI is as good an example of this sort of software as anything; every week or two I hit some small bug. This type of software doesn't need a "principal software engineer".

The second is that you have never used an LLM to write software in the large. LLMs are amazing things, far far better than humans at reading and untangling code. You can give them some obfuscated JavaScript and they regurgitate commented code with self-explanatory variable and function names in a minute or two. Give them a task and they will happily spit out 1000s of lines of code in 10 minutes or so, which is amazing.

Then you look closer, and it's spaghetti. The LLM has no trouble understanding the spaghetti of course, and if you are happy to trust your tests and let the LLM maintain the thing from then on, it's a workable approach.

Until, that is, it gets large enough for a few compile loops to exceed the LLM's context window, then it turns to crap. At that point you have to decompose it into modules it can handle. Turns out decomposition is something current LLMs (and junior devs) are absolutely hopeless at. But it's what a principal software engineer is paid to do.

The spaghetti code is the symptom of that same deficiency. If they decide they need code to do X while working on concept Y, they will drop the code for X right beside the code for Y, borrowing state from Y as needed. The result is a highly interconnected ball of mud. Which the LLM will understand perfectly until it falls off the context window cliff, then all hope is lost.

While LLMs remain unable to implement a complex request as simple isolated parts, a principal engineer's job is safe. In fact, given LLMs are accelerating things, my guess is the demand will only grow. But I suspect the LLM developers are working hard at solving this limitation.

[0] https://en.wikipedia.org/wiki/Programming_in_the_large_and_p...


> My experience is that people who weren't very good at writing software are the ones now "most excited" to "create" with a LLM.

My experience is the opposite. Those with a passion for the field and the ability to dig deeply into systems are really excited right now (literally all that power just waiting to be guided to do good...and oh does it need guidance!). Those who were just going through the motions and punching a clock are pretty unmotivated and getting ready to exit.

Sometimes I dream about being laid off from my FAANG job so I have some time to use this power in more interesting ways than I'm doing at work (although I already get to use it in fairly interesting ways in my job).


I wouldn’t say the pessimists fall into that category.

In my experience they are mostly the subset of engineers who enjoyed coding in and of itself and, in some cases, without concern for the end product.


I consider myself very good at writing software. I built and shipped many projects. I built systems from zero. Embedded, distributed, SaaS- you name it.

I'm having a lot of fun with AI. Any idiot can't prompt their way to the same software I can write. Not yet anyways.


With all due respect. If _any idiot_ can prompt their way to the _same_ software you’d have written, and your primary value proposition is to churn out code, then you’re… a bit of an outlier when it comes to principal engineers.

It's more common than you might think.

Good engineers are way more important than they’ve ever been and the job market tells the story. Engineering job posts are up 10% year over year. The work is changing but that’s what happens when a new technology wave comes ashore. Don’t give up, ride the new wave. You’re uniquely qualified.

IMHO any idiot can create a piece of crap. It takes experience to create good software. Use your experience, Luke! Now you have a team of programmers to create whatever you fancy! It's been great for me, but I have only been programming C++ for 36 years.

I am sorry you feel that way, but I feel strongly professionally insulted by your statement.

Specifically, the implication that high LLM affinity implies low professional competence.

"My experience is that people who weren't very good at writing software are the ones now "most excited" to "create" with a LLM."

Strong disagree.

I've earned my wings: 5 years of realtime rendering in world-class teams, and 13 years in AEC CAD developing software to build the world around us. In the past two years I designed and architected a complex modeling component for my employer's map offering, plus led the initial productization and rendering efforts.

Now, in my free time, I've managed to build the easy-to-use consumer/hobbyist CAD application I always wanted - in two years [0].

The hard parts, the ones that are novel and value-adding, are specific, complex, and hand-written. But the amount of ungodly boilerplate needed to implement the vision would have taken either a) a team and funding or b) 10 years.

It's still raw and alpha, but it's coming together. It would have been totally impossible without Claude, Codex and Cursor.

I do agree I'm not an expert in several of the non-core technologies used - WebView2 for .NET, for example, or XAML. But I don't have to be. They are commodity components, architected into their specific slots, replaceable and rewritable as needed.

As an example, a component I _had_ professional competence in 15 years ago - OpenGL - I don't need to re-learn. I can just quickly spec the render passes, stencil states, shader techniques, etc. and have the LLM generate most of that code in place. If you select old, decades-old technologies and techniques and know what you want, the output is very usable most of the time (20-year-old realtime rendering is practically timeless already and good enough for many, many things).

[0] https://www.adashape.com/


Why would I need this tool if I can just say "Claude, make me a CAD drawing of XYZ"?

Not trying to be rude, just generating some empathy for the OP's situation, which I think was missed: Like them, there is something you are passionate about that there is no longer really a point to. You could argue "but people will need to use my tool to generate really _good_ CAD drawings" but how much marginal value does that create over getting a "good enough" one in 2 minutes from Claude?

I feel sorry for bringing this up, but I think you might have missed how the thing that makes this possible makes it unnecessary.


No need to be sorry - you raise an excellent point!

Note that my critique was of labeling all of us LLM enthusiasts, by association, as "incompetents", which I believe is an incorrect assumption.

The point raised that more people can now code was a correct one, though. I think that's a net benefit.

Let me be brief. There are two topics here: CAD & AI, and AI & society, the latter of which I think is the underlying point we are discussing.

I appreciate that you made a domain-specific example, but like _all_ AI workflows, it does not really hold up unless one is extremely specific about what the workflow is.

First of all, if someone is making a CAD tool for drawings, that's really not a segment. All 3D design tools target a specific content workflow, with a specific domain model. Drawings are one possible output from this domain model, just like the on-screen 3D presentation or a 3MF file you get for export.

Whatever the LLM's competency level, it does not come with its own domain model. Real people want to configure the models they create. This means there needs to be a domain model you hook up to the LLM to have a stable model with specific editable components.

So if you are prompting a model, you are still better off prompting the domain model in a real CAD package.

So I don’t think CAD packages will die.

Second, I'm mainly trying to serve _my_ need (which I believe is shared by others). My need is that I want to design 3D models with minimum effort, in an environment that has perfect undo, perfect booleans, versioning, snapshotting and intuitive parametricity. This package did not exist in the market before.

Will it have traction? I would expect there are a lot of human users who want to create models themselves. Computer chess did not kill chess, etc.

To be super specific, there is a clear wedge in the market between Tinkercad and Fusion360 for an affordable desktop offering with the above features.

I do realize my market thesis is just a hypothesis at this point. Which is fine - it's a passion project. I hope it will be useful for others, but if not, at least I will have the tool I want.

I’m mainly excited about the possibility of being able to ship to test my market hypothesis.

Without LLM tools I would not be able to ship.

Regarding society:

I believe we are discussing a normal destructive phase of the innovation cycle: machine looms, weavers, Luddites, new forms of labour, etc.

Regarding living standards, the main worry is: can "normal" people exist above poverty?

I guess the markets will want to have consumers in the future, so either there will be new jobs or some form of basic income.

It’s possible I’m wrong as well.

I have no idea if democracies will survive.


You don't know what you don't know.

Playing with Claude, if you tell it to do something, it'll produce something. Sometimes its output is OK, sometimes it's not.

I find I need to iterate with Claude: tell it no, tell it how to improve its solution or do something a different way. It's kind of like speed-running iteration over my ideas without spending a few hours doing it manually, writing lots of code and then deleting it to end up with my final solution.

If I had no prior coding knowledge I'd go with whatever the LLM gave me and end up with poor-quality applications.

Knowing how to code still gives you the advantage when using an LLM. That said, I'm pessimistic about what my future holds as an older software engineer; I'm starting to find that age/experience is an issue when an employer can pay someone less, with less experience, to churn out code with prompts, since a lot of the time the industry lives by "it's good enough".


Same here, although hopefully won't be retiring soon.

What's missing from this is that iconic phrase that all the AI fans love to use: "I'm just having fun!"

This AI craze reminds me of a friend. He was always artistic, but because of the way life goes he never really had the opportunity to actively pursue art and drawing skills. When AI first came out, and specifically MidJourney, he was super excited about it and used it a lot to make tons and tons of pictures of everything his mind could think of. However, after a while this excitement waned and he realized that he hadn't actually learned anything at all. At that point he decided to find some time to practice drawing, to be able to make things by himself with his own skills, not via some chip on the other side of the world, and he has greatly improved in the past couple of years.

So, AI can certainly help create all the "fun!!!" projects for people who just want to see the end result, but in the end would they actually learn anything?


I mean. Sounds like the guy had existing long term goals, needed to overcome an activation threshold, and used AI as a catalyst to just get started. Seems like, behaviorally, AI was pivotal for him to learn things, even if the things he learned came from elsewhere / his own effort.

I suppose, yes, AI was like a kickstart. But the point is: he didn't just stick with AI; he realized that in terms of skill and fulfillment it's a no-go direction, because you neither learn anything nor create anything yourself.

I feel the same way. But this is a new economy now, software is cheap, and regarding the skill and fulfillment you derive writing it yourself, to quote Chris Farley: "that and a nickel will get you a nice hot cup of JACK SQUAT!!!"

> Any idiot can now prompt their way to the same software.

You sound quite jaded. The people I see struggling _the most_ at prompting are people who have not learned to write elegantly. HOWEVER, a huge boon is that if you're a non-native English speaker and that got in your way before, you can now prompt in your native language. Chinese speakers in particular have an advantage since you use fewer tokens to say the same thing in a lot of situations.

> Talk about a rug pull!

Talk to product managers and people who write requirements for a living. A PM at MSFT spoke to me today about how panicked he and other PMs are right now. Smart senior engineers are absorbing the job responsibilities of multiple people around them since fewer layers of communication are needed to get the same results.


I see this at my workplace. The PMs and BAs are now completely redundant since you can prompt your way to decent specs with the right access and setup.

> My experience is that people who weren't very good at writing software are the ones now "most excited" to "create" with a LLM.

My greatest frustration with AI tools is along a similar line. I’ve found that people I work with who are mediocre use it constantly to sub in for real work. A new project comes in? Great, let me feed it to Copilot and send the output to the team to review. Look, I contributed!

When it comes time to meet with customers let’s show them an AI generated application rather than take the time to understand what their existing processes are.

There’s a person on my team who is more senior than I am and should be able to operate at a higher level than I can who routinely starts things in an AI tool but then asks me to take over when things get too technical.

In general I feel it’s all allowed organizations to promote mediocrity. Just so many distortions right now but I do think those days are numbered and there will be a reversion to the mean and teams will require technical excellence again.


Yes, the LLM can write it. No, the LLM cannot architect a complex system and weave it all together into a functioning, workable, tested whole. I have a 400-table schema networked together with relationships, backrefs, and services, well tested; nobody could vibe-code their way to what I've built. That kind of software requires someone like yourself to steer the LLM.

I understand your feelings. You spent years working hard to learn and master a complex craft, and now seeing that work feel almost irrelevant because of AI can be deeply unsettling.

However, this can also be an opportunity to gain some understanding about our nature and our minds. Through that understanding, we can free ourselves from suffering, find joy, and embrace life and the present moment as it is.

I am just finishing the book The Power of Now by Eckhart Tolle, and your comment made me think about what is explained in it. Tolle talks about how much of our suffering comes from how deeply we (understandably) tie our core identity and self-worth to our external skills, our past achievements, and our status among peers.

He explains that our minds construct an ego, with which we identify. To exist, this ego needs to create and constantly feed an image of itself based on our past experiences and achievements. Normally we do this out of fear, in an attempt to protect ourselves, but the book explains that this never works. We actually build more suffering by identifying with our mind-constructed ego. Instead of living in the present and accepting the world as it is, we live in the past and resist reality in order to constantly feed an ego that feels menaced.

The deep expertise you built is real, but your identity is so much more than just being a 'principal engineer'. Your real self is not the mind-constructed ego or the image you built of yourself, and you don't need to identify with it.

The book also explores the Buddhist concept that all things are impermanent, and by clinging to them we are bound to suffer. We need to accept that things come and go, and live in the present moment without being attached to things that are by their nature impermanent.

I suggest you might take this distress you are feeling right now as an opportunity to look at what is hurting inside you, and disidentify yourself from your ego. It may bring you joy in your life—I am trying to learn this myself!


I'm reading The Compassionate Mind by Paul Gilbert and I find it shares many similar ideas. Also I've been interested by Buddhist concepts like impermanency for a while.

While I think rationally what you said is good and makes sense, at the same time it feels like it says you should forget your roots and be this impermanent being existing in the present and only the present. I value everything about my life, the past, my role models when I was a kid, my past and current skills, all friends from all ages, my whole path essentially. When considering current choices I have to make, I feel more drawn to think "What has been my path and values previously, and what makes sense now?" instead of forgetting the past and my ego and just hustling with the $CURRENT technology.

At least that's how I have thought about my ego when I have tried to approach it with topics like these. It might allow me to make more money in the present if I just disidentified with it, but that thought legitimately feels horrifying because it would mean devaluing my roots.

Interested to hear your take on this.


I think that's right when you say: "What has been my path and values previously, and what makes sense now?" That is actually a sensible way to approach the present moment.

Disidentifying from your ego doesn't mean you have to act like a stateless robot with amnesia. Your past experiences, your role models, and your skills are still there for you to recall; they are tools that help guide your decisions. Disidentifying just means you don't let the mind-constructed image of those things define who you are. It means you don't have to constantly mull over the past, and you don't feel threatened when the things you valued in the past end or change.

However, I was really struck by your comment that disidentifying would feel horrifying because it would mean "devaluing your roots" to make more money. I am wondering if this is what you really think.

Imagine if letting go of that specific past identity led you to a truly marvelous opportunity in the present: not just more money, but working with wonderful people, doing engaging things, and being genuinely happy. Would that really be horrifying just because it didn't perfectly align with your roots? Probably not.

I suspect what you actually find horrifying isn't "devaluing your roots," but rather the idea of selling out. The real nightmare is getting a well-paid but completely soulless job where you are unhappy, working on things you don't care about, or being treated like a disposable cog who just takes orders.

Just my two cents, I am no spiritual guide!



Even as a principal engineer, there is an infinite number of things you don't know.

Suppose you get out of your comfort zone to do something entirely new; AI will be much more helpful for you than it is for people who spent years developing their skills.

AI is the great equalizer.


> As a principal engineer I feel completely let down. I've spent decades building up and accumulating expert knowledge and now that has been massively devalued. Any idiot can now prompt their way to the same software. I feel depressed and very unmotivated and expect to retire soon. Talk about a rug pull!

Really?

The vibe coders are running into a dark forest with a bunch of lobsters (OpenClaw), getting lost and confused in their own tech debt, and you're saying they can prompt their way to the same software?

Someone just ended up wiping their entire production database with Claude, and you believe your experience counts for nothing to companies that need stable infrastructure and predictability?

Cognitive debt is a real thing and being unable to read / write code that is broken is going to be an increasing problem which experienced engineers can solve.

Do not fall for the AI agent hype.


> Do not fall for the AI agent hype.

Problem is, it's the people in higher positions who should be aware of that, except they don't care. All they would see is how much more profit the company can make if it reduces its workforce.

Plenty of engineers do realize that AI is not some magical solution to everything - but the money and hype tends to overshadow cooler heads on HN.


This is exactly it. The juniors and mids on my team produce junior- and mid-quality vibe code.

Too-generic prompts, unaccounted-for edge cases, inattentive code reviews...


I might be wrong, but this sounds like an ego issue more than anything. Twice you berated less-skilled programmers. I'm skilled as well, and it did sting when I realized that a relatively new technology could beat me. But there's so much more to it, especially what PMs do: they find big, high-value problems and solve them. The coding should be the easy part. If your coding skills are such a big part of your identity and you enjoy the feeling of superiority, a good therapist (ChatGPT, maybe, lol) could be useful.

> I've spent decades building up and accumulating expert knowledge and now that has been massively devalued.

That remains to be seen. There's a huge difference between an experienced engineer using LLMs in a controlled way, reviewing their code, verifying security, and making sure the architecture makes sense, and a random person vibecoding a little app - at least for now.

Maybe that will change in a year or two or five or never, but today LLMs don't devalue expert knowledge. If anything, LLMs allow expert programmers to increase productivity at the same level of quality, which makes them even more valuable compared to entry-level programmers than they were before.


I don't find the same. Like you, I'm a principal/CTO-level engineer, and there's a world of difference between simplistic prompt/vibe coding and building a properly architected/performant/maintainable system with agentic coding.

I am a principal engineer too. In the last 5 months I have been working on a project using the latest LLMs. 5 years ago that project would have required 30 engineers. Now I am alone but need at least 5 more months to have an MVP. You are just not working on projects that are complex and difficult enough. There are so many projects that I have in mind that feel within reach and I would have never considered 5 years ago.

> Any idiot can now prompt their way to the same software.

Not only would it be good if it were true, but it is also not true. Good programmers, for the most part, know how to build things: they know what to build, and they have a general architectural idea of what they are going to build. Without that, you are like the average person in the 90s with Corel Draw in their hands, or the average person with an image diffusion model today: the output will be terrible for lack of taste and ideas.


Yes, anyone can generate code, but real engineering remains about judgment and structure. AI amplifies throughput, but the bottleneck is still problem framing, abstraction choice, and trade-off reasoning. Capabilities without these foundations produce fragile, short-lived results. Only those who anchor their work in proper abstractions are actually engineering, no matter who’s writing the code.

Same level of engineer here - I feel that the importance of expertise has only increased; it's just that the language has changed. Think about the engineer who was an expert in COBOL and Fortran but didn't catch the C++/Java wave. What would you say to them?

LLMs goof up, hallucinate, and make many mistakes - especially in the design or architecture phase. That's where experience truly shines.

Plus, it lets you integrate things that you aren't good at (UI, for me).


You're getting it backwards. Anyone can get to something that looks alright in a browser... until you actually click something and it fails spectacularly, leaks secrets, doesn't scale beyond 10 users, and is a swamp of a codebase that prevents clean ongoing extension. That's a hard wall for non-techies: suddenly the magical LLM stops producing results and makes things worse.

All this senior engineering experience is a critical advantage in these new times: you implicitly ask for things slightly differently and circumvent these showstoppers without even thinking, if you are that experienced. You don't even need to read the code at all; a glimpse into the folder and a scroll through a few meters of files with inline "pragmatic" snippets, and you know it's wrong without even stepping through it, even if the autogenerated vanity unit tests say all green.

Don't feel let down. This is slightly related to when Google sprang into existence: everyone had access and could find stuff, but knowing how to search well is an art that even today most people don't have, and it makes dramatic differences in everyday usage. It's amplified now with AI search results that are often just convincing nonsense, though most people cannot see it. That intuitive feel, from hard-won experience, for what is "wrong", even without an instant answer for what would be "right", is more and more the differentiator.

Anyone can force their vibe-coded app into some shape that's sufficient for their own daily use, since they're used to avoiding the pitfalls they know are there in the tool they created, but as soon as there's some kind of scaling (scope, users, revenue, ...) involved, true experts are needed.

Even the new agent tools, like the Claude for X products, in the end perform dramatically differently in the hands of someone who knows the domain in depth.


I feel it is more about being disinterested than about being good. The ones who were not interested (whether good or bad) and were trapped in a job are liberated and happy to see it automated.

The ones who are frustrated are the ones who were interested in doing it (whether good or bad at it) but are being told by everyone that it is not worth doing anymore.


Nah - I've also spent decades trying to become the best software developer I can be, and now it is giving me enormous power. What used to take me 5 days now takes a day, and my output is higher quality. I now finish things properly, with the docs and the nooks and crannies, before moving on.

What used to take incompetent developers 5 days is still taking them 5 days.


I love it. I can't stand this sentiment and this type of pompous technologist. You are why software mostly sucks. You have no imagination. Hopefully the models make your limited, extraordinarily overvalued skill set of the last 20 years completely democratized. We will see who is the idiot going forward.

Spoken like a loser and a thief.

The best programmers I know are the ones most excited about it.

The mediocre programmers who are toxic gate keepers seem to be the ones most upset by it.


Yeah right. Only mediocre people like Rob Pike would be a toxic gate keeper.

The reality is that in the theft of Chardet at least 2000 people supported Mark Pilgrim and almost no one supported the three programmers who constantly blog about AI and try to reprogram people.

Incidentally, everyone who unironically uses the word "gate keeper" is mediocre.


Definitely. With AI I can stop working on the painful tasks and spend much more time on the things that matter most to me: building the right abstractions, thinking about the maths, talking to the customer...

But TBH, I have been a bit "shocked" by AI as well. It's much more troubling than the coming of the internet. My hope, having worked with AI extensively for the past 1-2 years, is that I'm confident they miss the important things: how to build the abstractions that satisfy the non-code constraints (like ease of maintenance, explainability to others, etc.).

And the way it goes at the moment shows no sign of progress in that area (throwing more agents at a problem will not help).


As a senior engineer, if your value-add was "accumulated expert knowledge", then yes, you are in a bad place.

If instead it was building and delivering products and business value, with good judgement, coordination and communication skills, intuition, etc., then you are now way, way more leveraged than you ever were, and your leverage has never been greater.


I think "accumulated expert knowledge" was never really useful if an organisation could just replace that person with a wiki.

I fancy myself pretty good at writing software, and here's my path in:

All the tools I passed up building earlier in my career because they were too laborious to build, are now quite easy to bang out with Claude Code and, say, an hour of careful spec writing...


> Any idiot can now prompt their way to the same software.

They simply can't in my experience. Most people cannot prompt their way out of a wet paper sack. The HN community is bathed in thoughtful, high quality writing 24/7/365, so I could see how a perception to the contrary might develop.


I find fun in using opencode and Claude to create projects but I can't find the energy to run the project or read the code.

Watching this program do stuff is more enjoyable than using or looking at the stuff produced.

But it doesn't produce code that looks or is designed the way I would normally. And it can't do the difficult or novel things.


I don't understand this sentiment at all.

For me, it feels more like a way to integrate search results immediately into my code. Did you also feel threatened by Stack Overflow?

If you actually try it you'll find it's a multiplier of insight and knowledge.


You summed up my feelings pretty well. Thanks for this counterpoint to the usual comments on HN.

> I've spent decades building up and accumulating expert knowledge and now that has been massively devalued. Any idiot can now prompt their way to the same software.

Do you like the craft of programming more than the outcomes? Now you are in a better position than ever to achieve things.


For me this is a painting vs photography thing

Painting used to be the main way to make portraits, and photography massively democratized this activity. Now everyone can have as many portraits as they want

Photography became something so much larger

Painting didn't disappear though


Compared to painting, software allows you to solve the problem once, then distribute the solution to the problem basically for free.

Market frictions cause the problem to be solved multiple times.

LLMs learn the solution patterns and apply it devaluing coming up with solutions in the first place.


Well, slightly different take: it's like telling an artist the world doesn't need another song about love; those already exist and can be re-heard as needed. Put more sharply: a CRM or a TODO list is a solved problem in theory, right? Tons of solutions, even free ones, are out there. Still, look at what people are building and selling: CRM and TODO-list variations. Because, in fact, it's not solved, and every solution carries tradeoffs that don't fit some people.

very hard to feel sorry for you when countless professions experienced the same in the past, only that they were poor / working class and not overpaid software engineers at FAANG.

also a very egocentric & pessimistic way to look at things. humankind is much better off when anyone can produce software, and skilled experts will always be needed, just maybe with a slightly different skillset.


I urge you to actually try these tools. You will very quickly realize you have nothing to worry about.

In the hands of a knowledgeable engineer these tools can save a lot of drudge work because you have the experience to spot when they’re going off the rails.

Now imagine someone who doesn’t have the experience, and is not able to correct where necessary. Do you really think that’s going to end well?


Yeah, even just now I had to go and correct some issues with LLM output that I only knew were an issue because I have extensive experience with that domain. If I didn't have that I would not have caught it and it would have been a major issue down the line.

LLM's remove much of the drudgery of programming that we unfortunately sort of did to ourselves collectively.


No worries. True, you need to learn new skills to work properly with Claude. However, 30 yrs of coding experience come in handy to quickly detect it is going in the wrong direction. Especially on an architectural level you need to guide it.

Embrace


That's what progress looks like! We need less to produce more. The less includes less skill and human capital.

For me, LLMs just help a lot with overcoming writer's block and other ADHD related issues.


> Any idiot can now prompt their way to the same software.

Well, this is not where the main value of software actually is, though? It's not about prompting a one-shot app; sure, there will be some millionaires making an app super successful by coincidence (Flappy Bird, e.g.), but in most cases software & IT engineering is about the context, integration, processes, maintenance, future development etc.

So actually you are in perfect shape?

And no worries: the ones who weren't good at writing code will now fail because of administration/uptime/maintenance/support. They will fail just one step later.


Based on your comment you’re probably not a very good principal engineer ;)

Hence, you are back in the group of those who should benefit from LLMs. Following your own logic :)

Ps: please don’t take it seriously


I thought this was parody until the last sentence.

I think it’s important for you to understand that there were always way more people who loved programming than were able to work professionally as high-level coders. Sure, if you spent most of your working life writing code, you’d be very proficient. But for many, many others, they haven’t been able to spend the time developing those muscles. Modern LLMs really are a joyful experience for people who enjoy software creation but haven’t had the 10,000 hours.

I review PRs daily and people are pushing changes that have basic problems, not to mention more serious flaws. The amount of code an engineer can produce is higher, but it's also less thought through.

There will be more code with lower quality. If you want to be valued for your expertise, you need to find niches where quality has to stay high. In a lot of the SaaS-world, most products do not require perfection, so more slop is acceptable.

Or you can accept the slop, grind out however more years you need to retire, and in the meanwhile find some new passion.


CC is not nearly that good. It may never be. It's an amplifier not a replacer.

On the plus side you're retiring soon... imagine if you were a graduate today.

At least they're young enough to re-train into something else if they want. It's the mid-career devs who are flailing at the moment.

No offense, but you sound more like a "principal coder", not a principal engineer. At least in many domains and orgs, most principal engineers are already spending most of their time not coding. But -engineering- still takes up much or most of their time.

I felt what you describe feeling. But it lasted like a week in December. Otherwise there’s still tons of stuff to build and my teams need me to design the systems and review their designs. And their prompt machine is not replacing my good sense. There’s plenty of engineering to do, even if the coding writes itself.


I make documentation and diagrams for myself rather than writing code much of the time

I think that the biggest difference is between people who mostly enjoy the act of programming (carefully craft beautiful code; you read and enjoyed "Programming Pearls" and love SICP), vs the people who enjoy having the code done, well structured and working, and mostly see the act of writing it as an annoying distraction.

I've been programming for 40 years, and I've been on both sides. I love how easy it is to be in the flow when writing something that stretches my abilities in Common Lisp, and I thoroughly enjoy the act of programming then. But coding a frontend in React, or yet another set of Python endpoints, is just necessary toil to a desired endpoint.

I would argue that people like you are now in the perfect position to help drive what software needs writing, because you understand the landscape. You won't be the one typing, but you can still be the one architecting it at a much higher level. I've found enjoyment and solace in this.


> Any idiot can now prompt their way to the same software

If you really think it's the reality, then your expert knowledge is not that good to begin with.


I'm not sure why you feel devalued or let down, LLM code is a joke and will be a thing of the past after everyone has had their production environment trashed for the nth time by "AI."

Completely the opposite experience here! I am a tech lead with decades of experience with various programming languages.

When it comes to producing code with an llm, most noobs get stuck producing spaghetti and rolling over. It is so bad that I have to go prompt-fix their randomly generated architecture, de-duplicate, vectorize and simplify.

If they lack domain knowledge on top of being a noob, it is a complete disaster. I saw LLM code pick a bad default (0) for a denominator and then "fix" that by replacing it with epsilon.

It isn't the end, it is a new beginning. And I'm excited.
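The denominator anecdote above is worth spelling out. A minimal Python sketch (the function names and the `rate` framing are mine, purely for illustration) of the antipattern versus the explicit alternative:

```python
EPS = 1e-9

def rate_bad(numerator, denominator=0.0):
    # Antipattern: a zero default "fixed" by swapping in epsilon.
    # A missing value silently becomes a huge, plausible-looking number.
    return numerator / (denominator or EPS)

def rate_good(numerator, denominator=None):
    # Make the missing value explicit and fail loudly instead.
    if not denominator:
        raise ValueError("denominator missing or zero")
    return numerator / denominator

print(rate_bad(5.0))        # ~5e9: garbage that flows quietly downstream
print(rate_good(5.0, 2.0))  # 2.5
```

The epsilon "fix" makes the crash disappear, which is exactly why it passes a naive review while corrupting every result built on top of it.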


Really? I love LLMs because I can't stand the process of taking the model in my brain and putting it in a file. Flow State is so hard for me to hit these days.

So now I spec it out, feed it to an LLM, and monitor it while having a cup of tea. If it goes off the rails (it usually does) I redirect it. Way better than banging it out by hand.


It's only going to get harder to achieve if you keep letting your skills and reasoning abilities rot from LLM reliance.

From my point of view, having the llm as co-pilot is like having the junior engineer that the team would never justify the budget to hire. I get quite a bit more done when I can assign the tool a task to work on, work on something else in the meantime, and come back in 5 or 10 minutes to check on its progress and make adjustments.

There are many aspects of software engineering that are fun, but the pure mechanical part gets old quickly; there are only so many times you can type "emplace" and feel fulfilled. I'm finding that co-pilot is extremely good at that part.


I think you've got this backwards!

I've been working with computers since an Apple ][+ landed in our living room in the early 80s.

My perspective on what AI can do for me and for everyone has shifted dramatically in the last few weeks. The most recent models are amazing and are equipping me to take on tasks that I just didn't have the time or energy for. But I have the knowledge and experience to direct them.

I haven't been this enthused about the possibilities in a long time.

This is a huge adjustment, no doubt. But I think if I can learn to direct these tools better, I am going to get a lot done. Way more than I ever thought possible. And this is still early days!

Just incredible stuff.


> My experience is that people who weren't very good at writing software are the ones now "most excited" to "create" with a LLM.

I consider myself to have been a 'pretty good' programmer in my heyday. Think 'assembly for speed improvements' good.

Then came the time of 'a new framework for everything, relearn a new paradigm every other week. No need to understand the x % 2 == 0 if we can just npm an .iseven()' era ... which completely destroyed my motivation to even start a new project.

LLMs cut the boilerplate away for me. I've been back building software again. And that's good.


Indeed, and I've noticed companies now focusing on hiring co-ops, paying them peanuts and just using AI, with maybe one senior and one wrangler (engineering/project manager); that's basically the neo-team structure I have noticed.

It is weird because I am the opposite. The symbols were never the objective for me but instead how they all fit together.

Now I am like a perfect weapon because I have the wisdom to know what I want to build and I don't have to translate it to an army of senior engineers. I just have Github Copilot implement it directly.


> Any idiot can now prompt their way to the same software

I have been thinking about the "same software"

Because I remember seeing Sonnet 4.5 and I had made comments that time as well that I just wanted AI to stop developing more as the more it develops, the more harm to the economy/engineers it would do than benefit in totality.

It was good enough to make scripts; I could make random scripts/one-off projects, something which I couldn't do previously. But I still used to copy-paste the code and run the commands myself, and I gave it the language to use and everything. At that time, all I wanted was for the models to get smaller/open source.

Now, I would say that even an idiot making software with AI is gonna reach AI fatigue at one point or another, and it just feels so detached with agents.

I do think that we would've been better off as a society if we could've stopped the models at Sonnet 4.5. We do now have models which are small and competitive with Sonnet (Qwen, GLM, [Kimi is a little large]).


In my experience, the truly best in class have gone from being 10x engineers to being 100x engineers, assuming they embrace AI. It's incredible to watch.

I wouldn't say I'm a 10x-er, but I'm comfortable enough with my abilities nowadays to say I am definitely "above average", and I feel beyond empowered. When I joined college 15 years ago, I felt like I was always 10 steps ahead of everyone else, and in recent years that feeling had sort of faded. Well, I've got that feeling back! So much of the world around me feels frozen in place, whereas I am enjoying programming perhaps as much as when I learned it as a little kid. I didn't know I MISSED this feeling, but I truly did!

Everything in my daily life (be it coding or creating user stories — who has time to use a mouse when you can MCP to JIRA/notion/whatever?) is happening at an amazing speed and with provable higher levels of quality (more tests, better end-user and client satisfaction, more projects/leads closed, faster development times, less bug reports, etc.). I barely write lines of code, and I barely type (often just dictate to MacWhisper).

I completely understand different people like different things. Had you asked me 5 years ago I probably would have told you I would be miserable if I stopped "writing" code, but apparently what I love is the problem solving, not the code churning. I'm not trying to claim my feelings are right, and other people are "wrong" for "feeling upset". What is "right" or "wrong" in matters of feelings? Perhaps little more than projection or a need for validation. There is no "right" or "wrong" about this!

If I now look at average-to-low-tier-engineers, I think they are a mixed bag with AI on their hands. Sometimes they go faster and actually produce code as good as or better than before. Often, though, they lack the experience, "taste" or "a priori knowledge" to properly guide LLMs, so they churn lots of poorly designed code. I'd say they are not a net-positive. But Opus 4.6 is definitely turning the tide here, making it less likely that average engineers do as much damage as before (e.g. with a Sonnet-level model)

On top of this divide within the "programming realm", there's another clear thing happening: software has finally entered the DIY era.

Previously, anyone could already code, but...not really. It would be very difficult for random people to hack something together quickly. I know we've had the term "script kiddies" for a long time, but realistically you couldn't just wire up your own solution to things the way you can with physical objects. In the physical world, you grab your hammer and your tools and you build your DIY solutions, as a hobby or out of necessity. For software...this hadn't really been the case....until now! Yes, we've had no-code solutions, but they don't compare.

I know 65 year olds who have never even written a line of code that are now living the life by creating small apps to improve their daily lives or just for the fun of it. It's inspiring to see, and it excites me tremendously for the future. Computers have always meant endless possibilities, but now so many more people can create with computers! To me it's a golden age for experimentation and innovation!

I could say the same about music, and art creation. So many people I know and love have been creating art. They can finally express themselves in a way they couldn't before. They can produce music and pictures that bring tears to my eyes. They aren't slop (though there is an abundance of slop out there — it's a problem), they are beautiful.

There is something to be said about the ethical implications of these systems, and how artists (and programmers, to a point?) are getting ripped off, but that's an entirely different topic. It's an important topic, but it does not negate that this is a brand new world of brand new artists, brand new possibilities, and brand new challenges. Change is never easy — often not even fair.


I know that your post has lots of comments, but I'd like to weigh in kindly too.

> I've spent decades building up and accumulating expert knowledge and now that has been massively devalued.

Listen to the comments that say that experience is more valuable than ever.

> Any idiot can now prompt their way to the same software.

No they cannot. You and an LLM can build something together far more powerful and sophisticated than you ever could have dreamt, and you can do it because of your decades of experience. A newbie cannot recognize the patterns of a project gone bad without that experience.

> I feel depressed and very unmotivated and expect to retire soon.

Welcome to the industry. :) It happens. Why not take a break? Work on a side project, something you love to do.

> My experience is that people who weren't very good at writing software are the ones now "most excited" to "create" with a LLM.

Once upon a time painters and illustrators were not "artists", but archivists and documenters. They were hired to archive what something looked like, and they were largely evaluated on that metric alone. When photography took that role, painters and illustrators had to re-evaluate their social role, and they became artists and interpreters. Impressionism, surrealism, conceptualism, post-modernism are examples of art movements that, in my interpretation, were still attempting to grapple with that shift decades, even a century later.

Today, we SWE are grappling with a very similar shift. People using LLMs to create software are not poor coders any more (or less) than photographers were poor painters. Painters and illustrators became very valuable after the invention of photography, arguably more valuable socially than before.


What I keep hearing is that the people who weren't very good at writing software are the ones reluctant to embrace LLMs because they are too emotionally attached to "coding" as a discipline rather than design and architecture, which are where the interesting and actually difficult work is done.

Really? To me it seems that quite the opposite is true - people who were never very good at writing code are excited about LLMs because suddenly they can pretend to be architects without understanding what's happening in the codebase.

Same as with AI-art, where people without much drawing skills were excited about being able to make "art".


Perhaps you are both right. People who see coding as a means to an end enjoy LLMs while people who saw it as the most enjoyable part don’t.

This is more accurate, I've written enough code in my life to never really want to do it again ....but I still love creating (code was merely the way to do it) so LLMs help with my underlying passion.

On the bright side, working in tech between 2006 and 2026 means you should be extremely wealthy and able to retire comfortably.

In SV, probably. As a lead FE dev with 14 yoe in Munich I'm at €85k; that's not even enough to pay off a loan for a house around here.

Uh if you worked for a top company or something. Most tech workers have made relatively ordinary salaries the last 20 years.

Cries in federal employee wages

Won't take long until the apologists come in defending the billionaires, how they create businesses and value and prop up the economy with their spending, yada yada.

When the process that skews the wealth distribution has run its course, the billionaires and their cronies own everything and you have nothing. Do you think they'll show up to pay for your child's education or your health care or your elderly care? They won't.

They'll kick you to the curb and remove democracy since any real democracy is a direct threat to them. Then they'll continue their lavish parties on their yachts while you and your family go hungry in the slums.


”Again I tell you, it is easier for a camel to go through the eye of a needle than for someone who is rich to enter the kingdom of God.” - Christ

You know, who said our lord doesn’t have a sense of humor? He could have said it any other way lol.

I just don’t understand how we can not have a category in the DSM for wealth-fixation, because after … I don’t know, $100m, you have to be mentally ill to even be talking about, let alone pursuing, money. Shout out to Christ for being a radical pioneer on this issue.

Tech needs Jesus in ways tech is too corrupted to understand.


> Tech needs Jesus in ways tech is too corrupted to understand.

Fortunately Peter Thiel is really into Christianity, so we're good!


Oh yeah, the anti christ. Yeah, that’s been warned about as well.

Why are you assuming we're all dependent on the billionaires in the first place?

I pay for my child's education and healthcare myself, and expect to continue to so whether Bezos is a trillionaire, a pauper, or anything in between. It ultimately has very little impact on my life.


It has very little impact on your life... for now. Maybe you're well off enough that this doesn't concern you, but as a direct example, how about the Amazon warehouse worker who is squeezed so hard for profits that they live in a car, working 7 days a week without healthcare or vacations or anything? Them being poor is what enables Jeff to be rich.

Generally speaking, whether you realize this or not, the economic system creates a competition between entities. And larger, richer entities will subsume, assimilate and destroy smaller entities when they're looking for that eternal growth with fixed resources.

The argument to this is always that "it's not a zero sum game". Except that in practice it is. Economies are growing a few percent per year, perhaps, while the rich are growing their wealth 10-20% per year. This is only possible by shifting the wealth distribution, making it effectively a zero-sum game.

That means wealthy individuals will outcompete poorer individuals for all resources such as housing, education, health care. Everything will be used to extract maximum wealth from the society until there's nothing more to take.


As of March 2026, the average annual salary for a full-time Amazon warehouse worker in the U.S. is approximately $39,183–$47,415 annually, or roughly $18.84–$23.00 per hour. Total compensation, including benefits, can exceed $30 an hour, with many positions paying over $22 hourly for entry-level, plus potential for increased pay based on location and experience.

Don't compare this year to last year. Compare this year to 10 years ago. To 20 years ago. Then say it's a zero sum game. Ask yourself if you would switch places with John D. Rockefeller. I would not.


> how about Amazon warehouse worker who is squeezed so hard for profits that they live in a car, work 7 days a week without healthcare or vacations or anything? Them being poor is what enables jeff to be rich.

Amazon warehouse workers are paid enough to afford shelter (especially if they are working 7 days a week), or they are welcome to find a better job.

> Generally speaking whether you realize this or not the economic system creates a competition between entities. And larger richer entities will subsume assimilate and destroy smaller entities when they're looking for that eternal growth with fixed resources.

Yes, capitalism is competitive; that's the point. If a larger entity can perform better than a smaller one, then the smaller one doesn't need to exist.

> The argument to this always is that "it's not a zero sum game". Except that in practice it is. Economies are growing tiny few percent per year perhaps while the rich people are growing their wealth 10-20% per year. This is only possible by changing the wealth distribution making it effectively a zero sum game.

It's not a zero sum game, and you just pulled those numbers out of your ass.

> That means wealthy individuals will outcompete poorer individuals for all resources such as housing, education, health care. Everything will be used to extract maximum wealth from the society until there's nothing more to take.

One person can't consume so much healthcare, shelter, or education that it prevents others from accessing it. Claiming otherwise is absurd.


except for when a hurricane comes, then they will be left to die for "profits"

You mean the warehouse that was built to code and was hit by a tornado?

Is Bezos supposed to use his billions to build some sort of machine to control the weather?


Aggregating so much power in a single person is bad for society. It allows individuals to remake institutions.

We need to reverse bad 1950s-era tax code changes. All compensation needs to be taxable at the time of the labor exchange. We can't tax labor when it's exchanged for wages while letting people be paid in 'other units' that are more valuable than wages and are untaxed or taxed at a lower rate.

It's wild that the people who get this special exception, so that their labor isn't taxed the way the average person's is, are now happy that AI is eliminating busywork jobs and feel that randos working average jobs are somehow exploiting the system.


“Socialism never took root in America because the poor see themselves not as an exploited proletariat but as temporarily embarrassed millionaires.”

— John Steinbeck


“If you can convince the lowest white man he's better than the best colored man, he won't notice you're picking his pocket. Hell, give him somebody to look down on, and he'll empty his pockets for you.” - LBJ

If anyone wonders why class consciousness seems to be impossible in the US, this and the parent comment lay it out. The belief in American exceptionalism and capitalism as a moral force and the defense of systemic racial hierarchies in a low trust society override all other concerns.


Worse than that. The mods will tell you that broad claims about class war are not "curious conversation" and say that they are inappropriate.

> They'll kick you to the curb and remove democracy since any real democracy is a direct threat to them.

Not a threat, these people rarely feel truly threatened, but an obstruction.


Plot twist. The memory bit flip checking code was actually buggy and contained UB.

No, seriously, did you actually verify the code for correctness before relying on its results?


As a non-German, but someone who has lived there for 10 years, my take is that it's a cultural part of being German. They always seem to gravitate towards making things just a little bit more difficult.

The keyboards are not QWERTY. They're QWERTZ.

Traffic lights come in excessive combinations of green/yellow/red. Sometimes you might have a yellow/red light, sometimes only red.

Roads often have an "end of speed limit" sign. But the speed limit might have been temporary for construction, so now you have to remember the original speed limit. Why not post the actual speed limit again?

Stuff like this is absolutely everywhere in the German society for apparently no reason.

EDIT: Adding the best for last... in Germany when you have an IBAN bank account (you know, the I stands for international) it must be a German IBAN, or else it will be rejected by everyone and everything.
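The irony of the IBAN complaint is that the checksum really is international by design. A quick Python sketch of the ISO 13616 mod-97 check (the `iban_valid` name is mine), which validates a British IBAN exactly the same way it validates a German one:

```python
def iban_valid(iban: str) -> bool:
    """ISO 13616 mod-97 check: move the first 4 chars to the end,
    map letters A..Z to 10..35, and the resulting number mod 97 must be 1."""
    s = iban.replace(" ", "").upper()
    rearranged = s[4:] + s[:4]                              # country code + check digits to the end
    digits = "".join(str(int(c, 36)) for c in rearranged)   # base-36 digit value: A=10 ... Z=35
    return int(digits) % 97 == 1

print(iban_valid("GB82 WEST 1234 5698 7654 32"))  # True: a non-German IBAN is just as checkable
```

So rejections of foreign IBANs by German payees are policy or legacy-form issues, not a technical limitation of the IBAN format itself.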


I don't get any of your examples.

There are many different keyboard layouts, at least nowadays there is only one German layout. For Spanish there are 2 different layouts which are both actively sold.

The meaning of yellow on traffic lights is no problem; you'll see it for no longer than 2 seconds. Unless it flashes yellow, which means the traffic light is shut off; then "right before left" applies. Some countries only have a green light, no red and no yellow. Now that is a problem, because when the light is off you a) don't see it and have to know that it is there, and b) don't know whether it is operational.

The end of a speed limit indication means the same as no speed limit indication. The lawful limit for the type of road applies: 50 in the city, 100 for rural roads, unlimited for the Autobahn. That's why on the Autobahn there will be a speed limit indication after every on-ramp; if there is no speed limit sign, then it is unlimited.


With respect, you’ve given the typical German response to virtually any outsider’s criticism anything German: no, it’s simple, you’re just not doing/understanding it correctly, just learn a, b and c, and then do d, e, f and g.

(Except Deutsche Bahn; no-one argues at criticism there. It’s quite refreshing!)


I know how the traffic lights work; the point was that there are just excessively many different combinations in Germany rather than a single standard green/yellow/red.

"The end of a speed limit indication means the same as no speed limit indication. The lawful limit for the type of road applies, 50 in the city, 100 for rural roads,"

See, that's already wrong because the "Landstraße" may be 70, 80 or 100 depending on the exact road. If there's construction you might have a lower speed for the construction and then "end of limitation" at which point you have to remember whether the road is 70, 80 or 100.

Another example: if you have a Landstraße that is 100 km/h, but a section that is 80 km/h which has construction at 50 km/h, and after the construction you see "end of limit", what speed are you allowed to drive, 80 or 100? If the actual speed limit were simply posted, this confusion would not exist.


France does the same with regards to speed limits. There are also signs telling you the speed limit hasn't changed, or telling you to watch your speed, but without giving you the speed limit!

You are supposed to guess the speed limit from the size and shape of the road too. When I ask, almost nobody knows what the speed limit of a road is; they just wing it.

By law the speed limit has to be posted before an automatic speed trap (they are everywhere in France), essentially training everybody that speed limits are only for avoiding the speed-trap "tax", but don't matter otherwise.


That explains so much about the roads and signage in Tahiti.

I can't speak to Germany, but we also have "end of speed limit" signs in California, and here they definitely mean what the other commenters have said, i.e. the basic speed rule applies. Just based on reading these comments, the German rules seem to be the same as here, so I would very much suspect that in your example the speed limit is unambiguously 100 km/h after the "end of limit" sign.

German here. Some of it I can agree with. The traffic light though is very simple. Yellow means “it was green and will turn red”. Red+yellow means “it was red and will turn green”.

It’s this way in the entire country. There are many things I can get upset with in Germany (I moved abroad 10y ago and have an outsider’s perspective by now) but the traffic light example to me just indicates you didn’t ask why certain things were the way they were.


"The traffic light though is very simple. Yellow means “it was green and will turn red”. Red+yellow means “it was red and will turn green”."

That was not the point, that's the standard way the lights will work.

The point is that sometimes you have a light combination where the actual light post only has yellow/red, for example (no green light). Sometimes you have a light post that has red only (on, off).


I have never seen those anywhere in Germany. Are you sure you are talking about traffic lights?

After reading some parts of this thread, all I can recommend is the following: https://www.goodreads.com/quotes/539867-never-argue-with-an-...

(I'm sure there are various German things to criticize or to make fun of. Much of what I read here, however, says more about the (US?!) authors, though.)


It’s so German to blame the user!

In many cases in Germany, "digitalization" means there's a PDF which you can download, print, fill out, sign, scan, and send a copy of.

Then wait for the one person in the org that deals with those documents to come back from their 3 week vacation.

Yes, and email is just a way to exchange phone numbers.

You're actually not necessarily right.

At this point here in the south of Germany not having DB would be an improvement.

Now it exists and you might even try to take it just to be delayed and disappointed. You'll lose your money and they behave in their arrogant manner with impunity.

Their customer "service" will definitely tell you how it's your own fault for having had the nerve to actually try to take their train. You might have even carried some dirt onto the train, for Christ's sake!

So yeah. Not having it would already be an improvement. You'd just shrug and move on and take an alternative transport. Even horse and carriage would be better.


I understand your frustration, and as I said they should improve (though that probably requires time and money). The SNCF shows that it is possible.

> Even horse and carriage would be better.

How does the existence of the DB prevent you from travelling with your horse and carriage? Your attitude doesn't seem very constructive, to be honest.


I expect that in the "post-scarcity" world, where the capital class doesn't need the majority of human labor for anything, most people will be priced out of everything, including the basic necessities.

But sure, lets just keep automating ourselves out of jobs (and help other industries do it too) with no plan as to how to help all the displaced people.


That's a great idea: the taskbar could just shift to expose a link to sign up for an Azure/Office/OneDrive/Copilot subscription that the user misclicks on.

"Also - study the code of the likes of Carmack. Consider that he produced the likes of the quake engines in only a couple of years. Reflect long and hard on the raw simplicity of a lot of that code."

It also says something about the accumulation of complexity. At that time Carmack (and his team) were able to create a state-of-the-art engine in a few years. Creating a state-of-the-art engine today would take tremendously more work.


An interesting thing is that I think a lot of it is caused purely by graphical fidelity. For instance, an animated character back then was just a key-framed polygon soup; compare that with the convincing humanoids games have now.

And yet the actual gameplay code itself is often only 2x to 3x more complicated than in the days of old.

I think of counterstrike for instance - it's still just guys shooting guns in a constrained arena.


But you could create Quake.

Quake is peak graphics anyway. We hit diminishing returns after that ;)
