I have seen code bases that are amazing. I have seen ones that look bad, but work. About a year and a half ago I saw my first fairly large-scale, fully AI-generated project, and it filled me with dread. It looked just like the Figma mockups, which is very impressive. But under the hood it was bizarre. It was like that sci-fi teleportation trope where the destination coordinates are wrong and someone merges with a tree or a rock or whatever. There was so much unused junk that had nothing to do with anything. Ugh. My task was to figure out why the initial render took so long (unsurprisingly, it loaded all the data before rendering anything, so with toy dev loads it was fine; in production it was a nightmare and getting worse). So I just got to it and made some progress. But the new grad who made it (and who thought I was a dinosaur; might be right) was working in parallel and reintroducing more slop. So it became this Sisyphean task where I was speeding things up (true dinosaur, so actually measuring things) and they were cutting and pasting and erasing the gains.
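The load-everything-then-render bug above is a classic. A minimal sketch of the difference (all names here are hypothetical, not from the actual project):

```python
# Anti-pattern vs. fix for "initial render blocked on loading ALL the data".
# fake_db stands in for whatever backend the real app queried; with toy dev
# loads both versions feel fine, but the first one scales with total data size.

def render_all(fetch_rows):
    """Anti-pattern: pull every row before painting anything."""
    rows = list(fetch_rows())        # O(total data) before the first paint...
    return rows[:20]                 # ...only to show the first screen anyway

def render_first_page(fetch_rows, page_size=20):
    """Fix: fetch only what the first screen needs; load the rest lazily."""
    rows = []
    for row in fetch_rows():
        rows.append(row)
        if len(rows) >= page_size:
            break
    return rows

def fake_db():
    """Stand-in for a production-sized table."""
    yield from range(1_000_000)

# Both return the same first screen; only one touches a million rows to do it.
assert render_all(fake_db) == render_first_page(fake_db)
```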
I have always found management to be a silly exercise in a day full of meetings. I like to make things. I could retrain, but the salary drop would be very hard. I hope to find one last gig and have enough to retire. I still get that spark of joy when all the tests pass.
I heard on some podcast that there is such a thing as the "Microsoft Excel World Championship," and someone named Diarmuid Early won it last year. I would pay $2.56 to watch an Excel battle between a slop skipper and him. My money is on him. I am team John Henry.
Code can definitely only sort of work: it only works on the happy path, only on the computer it was developed on, only for some versions of some dependencies, only when single-threaded, only when the network is fast enough, only for a single user at a time, etc.
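A toy illustration of the "only works on the happy path" case (the function names and config shape are invented for the example): both versions pass the demo input, but only one survives real-world input.

```python
# Two readings of "read the port from config". Both "work" in the demo;
# only one works when the config is missing, malformed, or out of range.

def parse_port_happy(config: dict) -> int:
    # Happy path only: raises KeyError on a missing key, ValueError on junk.
    return int(config["port"])

def parse_port_robust(config: dict, default: int = 8080) -> int:
    # Handles missing key, non-numeric values, and out-of-range ports.
    raw = config.get("port", default)
    try:
        port = int(raw)
    except (TypeError, ValueError):
        return default
    return port if 0 < port < 65536 else default

assert parse_port_happy({"port": "8000"}) == 8000   # both fine here
assert parse_port_robust({}) == 8080                # only one fine here
assert parse_port_robust({"port": "oops"}) == 8080
```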
Software engineering is way more of a social practice than you probably want to believe.
Why is the code like that? How are people likely to use an API? How does code change over time? How can we work effectively on a codebase that's too big for any single person to understand? How can we steer the direction of a codebase over a long timescale when it's constantly changing every day?
Yes, that is very true, but social science is even more of a social practice than computer science.
If you run your organization badly, you'll run into problems sooner than you would in social science, where you just have to say all the buzzwords and they'll rubber-stamp you as correct.
If you are reading my point as claiming that computer science is 100% falsifiable and social science is 0% falsifiable, then your argument is a bit of a straw man.
> Why is the code like that? How are people likely to use an API? How does code change over time? How can we work effectively on a codebase that's too big for any single person to understand? How can we steer the direction of a codebase over a long timescale when it's constantly changing every day?
At which point you are studying project management theory, or whatever you call it
This is wrong. I would argue the difference between a junior dev/intern and a senior engineer is that while both can write code that works, juniors find local maxima: solutions that work but can't scale, or won't be easy to integrate, add features on top of, or maintain.
This happens in maths, biology, all science fields. Experience is partly the ability to choose between options that both work.
This is why coding assistants are amazing at executing when you are clear about what you want to do, but can't help (yet) with big-picture tweaks.
Right, I'm not trying to be argumentative here; I see where you are coming from.
My point is that it's quite easy to demonstrate that something can't scale by running an experiment.
Meaning that you could quite easily BS your way through that field by just agreeing with whatever the status quo is.
Whereas in social science you can't run an empirical experiment, so you're on much shakier ground epistemologically.
> This happens in maths, biology, in all science fields
Right, but I wrote social science, not maths or biology.
For instance, if someone were to say that due to Hegelian dialectics and gender-critical theory, women are destined to rule the world, that this is a good thing, and that it will lead to the abolition of racial inequality and of exploitation through capitalism,
how do you prove that?
In comparison, if the problem is that your software isn't efficient when there are over 100 instances, you can prove that by spinning up 100 instances.
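That kind of scaling claim really can be settled empirically. A minimal sketch, assuming a hypothetical operation where every instance messages every other (the quadratic behavior is invented for illustration):

```python
# Empirically check how cost grows with instance count. If doubling n roughly
# quadruples the time, that's experimental evidence the design won't scale.
import time

def handle_broadcast(instances):
    # Hypothetical O(n^2) operation: every instance messages every other one.
    return sum(1 for a in instances for b in instances if a != b)

def measure(n):
    start = time.perf_counter()
    handle_broadcast(list(range(n)))
    return time.perf_counter() - start

t100, t200 = measure(100), measure(200)
print(f"n=100: {t100:.4f}s, n=200: {t200:.4f}s, ratio: {t200 / t100:.1f}x")
```

No ideology required: run it, read the ratio, and the claim is falsified or confirmed.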
You can't clone Earth, force all the inhabitants of one copy to enact ideologically pure critical race theory, ask the inhabitants of the control group to try out Nazism, wait a while, and then use that to prove that one or the other is the best way, can you?
1) solving the problem myself is the fun part of the job
2) for writing emails or whatever, I want it to be my voice, not some bland average-of-everything slop
3) I can't sppeell big words because auto-correct does that for me. I don't want to lose logic and thinking to "auto-think AI™" the same way
4) is the output trustworthy?
For four: imagine some malevolent entity (North Korea, or insert whomever you hate) procedurally generates thousands of tutorials (with slight cosmetic variations to trick the CRC checks) at unique URLs. The tutorials teach some tricky thing like SSO. They reference some library (or tricky math-heavy function) that has been altered by black-hat hackers. The LLM reads all those URLs, and that poisons its output. Then low-knowledge "vibe coders" just blindly cut and paste their way to victory. Voila: security nightmare.
It doesn't have to be code; it could be insults to political leaders (some emphasized, some omitted), or political policies. LLM companies lose money, so they start to sell out to advertisers (you pay, we push your product).
When I am procrastinating on other sites, you see it everywhere: someone posts something and the first comment is "grok explain the post." It's worse than Orwellian; Orwell's world only had the Two Minutes Hate to monitor. People are offloading their thinking to a handful of companies. People also seem genuinely open to these pseudo-humans; will those personal, private thoughts be resold to make up for the cash burn?
> writing emails or whatever, I want it to be my voice
Right?!
One of the ads I saw recently for AI-assisted comms was an example of making copy for a newsletter for a cupcake shop that has a new flavour. You tell it that, and it spins up a whole newsletter for you.
All this tells me is that the information content of your letter is utterly trivial. I don’t want that newsletter.
I did CS and have been writing code for 20+ years. For the past two years, helping clients adopt LLMs into their business logic. It has lots of good use cases.
But I can recognize when it makes mistakes because of my years of manual learning and development. If someone uses LLMs through their entire education and life experience, will they be able to spot the errors? Or will they just keep trying different prompts until it works (and not know why it works)?
It's like the auto-correct spell checker: I can't spell lots of words, I just get close enough to the right spelling until it fixes them for me. I am a bit nervous about acquiring the same handicap with LLMs in thinking, logic, and code.