> LLMs will be the same. At the moment people are still mostly playing with it, but pretty soon it will be "hey why are you writing our REST API consumer by hand? LLM can do that for you!"
Not everyone wants to be a "prompt engineer", or let their skills rust and be replaced with a dependency on a proprietary service. Not to mention the potentially detrimental cognitive effects of relegating all your thinking to LLMs in the long term.
I recall hearing about a lot of assembly programmers who didn't want to let their skills rust either. They didn't want to be a "4th gen engineer" and have their skills replaced by proprietary compilers.
Same with folks who were used to ftp'ing directly into prod and using folders instead of source control.
Look, I get it, it's frustrating to be really good at current tech and feel like the rug is getting pulled. I've been through a few cycles of all new shiny tools. It's always been better for me to embrace the new with a cheerful attitude. Being grumpy just makes people sour and leave the industry in a few years.
This is a different proposition, really. It’s one thing to move up the layers of abstraction in code. It’s quite another thing to delegate authoring code altogether to a fallible statistical model.
The former puts you in command of more machinery, but the tools are dependable. The latter requires you to stay sharp at your current level, else you won’t be able to spot the problems.
Although… I would argue that in the former case you should learn assembly at least once, so that your computer doesn’t seem like a magic box.
> It’s quite another thing to delegate authoring code altogether to a fallible statistical model.
Isn't this what a compiler is really doing? A JIT optimizes code based on heuristics, e.g. if a code path is considered hot. Sure, we might be able to annotate it, but by and large you let the tools figure it out so that you can focus on other things.
But the compiler’s heuristic optimization doesn’t change the effects of the code, does it? Admittedly I’m no compiler expert, but I’ve always been able to trust that my compiled code will function as written.
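The distinction the parent is drawing can be sketched in a few lines. This toy Python sketch (the function names and the transformations are made up purely for illustration) contrasts a deterministic, compiler-like transformation with a sampling-based, LLM-like one:

```python
import random

def compile_like(source: str) -> str:
    # A compiler is a deterministic transformation: the same input
    # always yields the same output, and the optimizer must preserve
    # the program's observable behavior. (upper() is a stand-in for
    # a real translation step.)
    return source.upper()

def llm_like(prompt: str, seed=None) -> str:
    # A sampling-based model is stochastic: the same prompt can yield
    # different outputs unless the seed is pinned, and correctness of
    # the output is not guaranteed by construction.
    rng = random.Random(seed)
    variants = [prompt.upper(), prompt.title(), prompt[::-1]]
    return rng.choice(variants)

# The compiler-like step is reproducible; the model-like step is not:
assert compile_like("return x + y") == compile_like("return x + y")
```

The point isn't that JITs don't make choices, but that their choices are constrained to be semantics-preserving, whereas a model's output has to be checked by someone who can still read it.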
I agree that not everyone wants to be. I think OP's point, though, is that the market will make "not being a prompt engineer" a niche, like being a COBOL programmer in 2025.
I’m not sure I entirely agree, but I do think the paradigm is shifting enough that I feel bad for my coworkers who intentionally don’t use AI. I can see a new skill developing in myself that augments my ability to perform, while they’re still taking ages doing the same old thing. Frankly, now is the sweet spot: expectations haven’t risen to meet the output yet, so you can either squeeze out time to tackle that tech debt or kick up your feet until the industry catches up.