It’s wild to me that we both see people like Jensen as great while also tolerating public whining of the sort in the linked article. Don’t get me wrong, there are people who are far worse! But why do we put up with a billionaire whining that people are critical of what they make? At that scale it is guaranteed to have haters. It’s just statistics, man.
Indices are fine. Fixating on the “right” shape of the solution is your hang-up here. Different languages want different things. Fighting them never ends well.
The right job for a person depends on whether they can rise above the specific flavor of pain that the job dishes out. BigTech jobs strike me as having an inextricable political element to them: so you enjoy jockeying for titles and navigating constant reorgs?
The pay is nice but I find myself…remarkably unenvious as I get older.
Big companies are political and re-orgs lead to layoffs. Startups are a constant battle for funding and go out of business. Small companies mean a lot of exposure to bad management and budget issues. Charities are highly regulated and audited environments. Government jobs have no perks and entrenched middle management.
Every type of work has its idiosyncrasies, which people will either get on with or not. Mentioning one without the others is a bit disingenuous, or it's whatever the opposite of the grass-is-greener bias is.
I'm not sure what your take is, but this reads like goalpost shifting.
If one of the biggest orgs that practically mandates some amount of LLM use cannot surface productivity gains from them after using them for several years, then that speaks volumes.
This framing neatly explains the hubris of the influencer-wannabes on social media who have time to post endlessly about how AI is changing software dev forever while also having never shipped anything themselves.
They want to be seen as competent without the pound of flesh that mastery entails. But AI doesn’t level one’s internal playing field.
> What are those execs bringing to the table, beyond entitlement and self-belief?
The status quo, which always requires an order of magnitude more effort to overcome. There's also a substantial portion of the population that needs well-defined power hierarchies to feel psychologically secure.
Alternate take: what agents can spit out becomes table stakes for all software. Making it cohesive, focused on business needs, and stemming complexity are now requirements for all devs.
By the same token (couldn't resist), I'd also argue we should be seeing the quality of average software products notch up by now, given how long LLMs have been available. I'm not seeing it. I'm not sure it's a function of model quality, either. I suspect devs who didn't care much about quality haven't really changed their tune.
how much new software do we really use? and how much can old software become qualitatively better without just becoming new software for a different time, with a much bigger and younger customer base?
I misunderstood two things for a very long time:
a) standards are not lower or higher, people are happy that they can do stuff at all or a little to a lot faster using software. standards then grow with the people, as does the software.
b) of course software is always opinionated and there are always constraints and devs can't get stuck in a recursive loop of optimization but what's way more important: they don't have to because of a).
Quality is, often enough, a matter of how much time you spend nitpicking even though you could absolutely get the job done as-is. Software is part of a pipeline, a supply chain, and someone somewhere is aware of why it should be "this" version and not a better one, or that other version the devs prepared knowing full well it would never see the light of day.
Honestly, in many ways it feels like quality is decreasing.
I'm also not convinced it's a function of model quality. The model isn't going to do something the prompter doesn't even know to ask for. It does what the programmer asked.
I'll give a basic example. Most people suck at writing bash scripts. Writing them is also a commonly claimed use case for LLMs. Yet they never write functions unless I explicitly ask. Here, try this command:
curl -fsSL https://claude.ai/install.sh | less
(You don't need to pipe into less, but it helps for reading.) Can you spot the fatal error in the code, the one where running it via curl-pipe-bash might cause major issues? Funny enough, I asked Claude about it and it told me this:
> Is this script currently in production? If so, I’d strongly recommend adding the function wrapper before anyone uses it via curl-pipe-bash.
The errors made here are quite common in curl-pipe-bash scripts. I'm pretty certain Claude would write a program with the same mistakes despite being able to tell you about the problems and their trivial corrections.
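For reference, here's a minimal sketch of the function-wrapper fix being discussed. The danger with curl-pipe-bash is that if the connection drops mid-download, bash happily executes the truncated script it received so far, and a half-delivered line can do real damage. Wrapping everything in a function that is only called on the last line means nothing executes unless the entire script arrived intact. (The `main` name and `PREFIX` variable here are illustrative, not taken from Claude's actual installer.)

```shell
#!/usr/bin/env bash
# Function-wrapper pattern for curl-pipe-bash scripts (illustrative sketch).
# Without the wrapper, a truncated download could execute a partial line,
# e.g. `rm -rf "$prefix/tmp"` cut off as `rm -rf "$prefix"`.
set -euo pipefail

main() {
    # hypothetical install location, not from the real installer
    local prefix="${PREFIX:-$HOME/.local}"
    mkdir -p "$prefix/bin"
    echo "installing to $prefix/bin"
    # ...download and install steps would go here...
}

# Nothing above runs on its own; this call only executes if the
# whole script (including this final line) was delivered.
main "$@"
```

If the stream is cut before the final `main "$@"` line, the function is defined but never invoked, so the partial script is a no-op.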
The problem with vibe coding is you get code that is close. But close only matters in horseshoes and hand grenades. You get a bunch of unknown unknowns. The classic problem of programming still exists: the computer does what you tell it to do, not what you want it to do. LLMs just might also do things you didn't tell them to...