I keep seeing this argument over and over again, and I have to wonder: at what point do you accept that maybe LLMs are useful? How many people need to say they find it makes them more productive before you'll shift your perspective?
...and my comment clearly isn't aimed at that, but at the suggestion that it's useless to write code with an LLM because you'll end up rewriting 50% of it.
If everyone has an opinion different from mine, I don't instantly change my own, but I do try to investigate the source of the difference, to find out what I'm missing or what they're missing.
The polarisation between people who find LLMs useful and people who don't is very similar to the polarisation over automated testing, and I have a suspicion they share the same underlying cause.
You seem to think everyone shares your view; around me I see a lot of people acknowledging they're useful to a degree, but also clearly hitting limits in a wide array of cases, including logic-heavy code, architectural decisions, reusing the right code patterns, larger-scale changes that aren't copy-paste, etc.
So far what I see is that if I provide lots of context and clear instructions in a mostly non-logical area of code, I can speed myself up by about 20-40%, but that only works for about 30-50% of the problems I solve day to day at my job.
So basically it's roughly a 20% improvement in my overall productivity, because I spend most of my time on the difficult things it can't do anyway.
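For what it's worth, that overall figure falls out of an Amdahl's-law-style back-of-envelope. Here's a minimal Python sketch, treating my 20-40% as a 1.2-1.4x per-task speedup (the input fractions are my own loose estimates, not measurements):

    # Back-of-envelope Amdahl's-law check. Fractions are rough
    # personal estimates of how often the LLM helps and by how much.
    def overall_gain(helped_fraction, per_task_speedup):
        # Total time with the tool, relative to 1.0 without it:
        new_time = (1 - helped_fraction) + helped_fraction / per_task_speedup
        return 1 / new_time - 1  # 0.20 means 20% more throughput

    print(f"{overall_gain(0.30, 1.2):.0%}")  # low end:  ~5%
    print(f"{overall_gain(0.50, 1.4):.0%}")  # high end: ~17%

The per-task speedup barely moves the overall number because the unaffected work dominates, which is why the headline gain stays modest even when the tool feels dramatically faster on the tasks it handles.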
Meanwhile these companies are raising billion-dollar seed rounds and telling us that all programming will be done by AI by next year.
It's a tool, and it depends on what you need to do. If it fits someone's needs and makes them more productive, or even just makes the activity more enjoyable, good.
Just because two people are each fixing something to a wall doesn't mean the same tool will work for both. Gum, pushpin, nail, screw, or bolts?
The parent comment did mention that they use LLMs successfully in small side projects.
They say it's only effective for personal projects, but there's literally evidence of LLMs being used for the very things they say they can't be used for. Actual, concrete evidence.
It's self-delusion. The pace of AI is also so fast that they may not be aware of how quickly LLMs are being integrated into our coding environments. A year ago what they said might have been somewhat true, but right now it's clearly not true at all.