Hacker News | tripledry's comments

I'm wondering how much value there is in a rewrite once you factor in that no one understands the new implementation as well as the old one.

Not only is the new implementation difficult to verify, but the knowledge your team had of the messy old codebase is now mostly gone. I would argue there is value in knowing your codebase, and that you can't reach the same level of understanding with AI-generated code as with code you wrote yourself.


The point of a rewrite is to make most of that arcane knowledge safely deletable, by reducing the operational complexity of the system.

Also, why is Anthropic still hiring SWEs?

I get this, but I'm also genuinely interested in how to measure outputs. For me it's almost impossible to get objectively right.

Maybe this doesn't apply to your case, but how would you measure the outputs of, say, product development, or any data-related project? Lots of things don't have a good measure of output before the thing is done. Maybe your product/analysis improves profitability by 10x, or maybe it's a flop and loses money.

Tangential, but I'm also seeing the quality of measures going down; with AI it seems the number of [emails|commits|analyses] produced is once again treated as a good measure.


> I get this, but also genuinely interested to know how to measure outputs.

Measuring outputs or inputs (hard work) is always hard. Did someone get the thing they were asked for done both quickly and correctly? Do they do this consistently?

I also find inputs harder to measure because someone could be in the office 12 hours/day, but on Facebook the whole time. They could also just spin their wheels doing 'fake' work.


I spent some time going through what programmers wrote over the past few years, and many of them were rewarded for getting things done quickly, with no complaints. The more diligent ones probably didn't last, since they got things done correctly, which takes a lot more time and thought.

That's why I said quickly and correctly. I think it's a cop-out to say someone was slow because they were building it correctly. Famously, the old Space Shuttle software was developed very slowly because it had to be 100% correct at all times. Most software does not need that level of correctness. Part of an SE's job is to understand that.

I pay a lot of attention when someone claims to have solved a problem I suspect to be NP-hard. There are a lot of possible explanations: for example, they may have an incorrect measurement function, or they may have chosen a simpler related problem that isn't really NP-hard, or both.

Fast, quality, cheap.

Pick two.


You don't have to pick two exclusively. They are all connected sliding scales. Part of engineering is figuring out where the sliders need to be.

Plenty of things are made fast, at high quality, and cheaply.

Some things just aren't very hard to make.


For me both are true at the same time.

I vividly remember understanding how calculus works after watching some 3blue1brown videos on YouTube, but once I looked at some exercises I quickly realized I was not able to solve them.

A similar thing happens with LLMs and programming. Sure, I understand the code, but I'm not intimately familiar with it the way I would be if I had programmed it "old school".

So yes, I do learn more, but I can't shake the feeling that there is some Dunning-Kruger effect going on. In essence I think that "banging my head against the wall" while learning is a key part of the learning process. Or maybe it's just me :D


It's not just you. I feel the same thing, and I saw it in practice helping my son study for a chemistry test just last night. He had worked through a bunch of problems by following the steps in his notes and got the right answers, but couldn't solve them without the notes because his comprehension of why he was taking all the steps wasn't solid.

Once we addressed that, he did great solo. Working the mechanics of the problems with the notes helped, but it was getting independent understanding of the reason for each step that put everything together for him.


> It’s hard for humans to perceive the exponential, it will be slow then sudden.

True, but there are also perception biases that lead us to believe progress is exponential, even though it may well be an S-curve.

I'm having a hard time finding the right term, but I'm sure there is some bias that makes us assume "the line goes up".


When someone at work talks about all software devs being replaced I link them to the Anthropic career pages.


Would love to be a fly on the wall for a couple of months to see what corporate CxOs actually do.

Surely I could do a mediocre job as a CxO by parroting whatever is hot on LinkedIn. Probably not a massively successful one, but good enough to survive two years and bank millions for it, or get fired with a golden parachute.

(half) joking - most likely I'm massively trivializing the role.


Funny enough, the author of this blog post wrote another one on exactly that topic, entitled "What do executives do, anyway?"[1]. If you read it, you'll find it's written from quite an interesting perspective, not quite "fly on the wall," but perhaps as close as you're going to get in a realistic scenario.

[1]: https://apenwarr.ca/log/20190926


"Surely I could do a mediocre job as a CxO by parroting whatever is hot on Linkedin"

Having worked for a pretty decent CIO of a global business, I'd say his main job was to travel around, speak to other senior leaders, work out what business problems they had, and figure out, at a very high level, how technology could address those problems.

Just parroting the latest technology trends would, I suspect, get you sacked within a few weeks.


A charitable explanation for what CxOs do is that they figure out their strategic goals and then focus really hard on ways to herd cats en masse toward those goals in an efficient manner. Some people end up doing a great job, some do so accidentally, others just end up doing a job. Sometimes parroting some linkadink drivel is enough to keep the ship on course - usually because the winds are blowing in the right direction, or the people at the oars are working well enough on their own.


What's a good alternative if I want both self-hosting and convenience?

I have some hobby sites I host on a VM and currently I use docker-compose mainly because it's so "easy" to just ssh into the machine and run something like "git pull && docker-compose up" and I can have whatever services + reverse proxy running.

If I were to sum up the requirements: run one command, and it either succeeds or fails in its entirety, with minimal to no risk of messing up the env during deployment.

Nix seems interesting, but I don't know how it compares (I have yet to take a good look at it).
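For what it's worth, the docker-compose setup described above can be sketched roughly like this (the service names, image, and the Caddy reverse proxy are hypothetical placeholders, not from my actual setup):

```yaml
# Hypothetical docker-compose.yml for a couple of hobby sites behind a reverse proxy.
# "proxy", "blog", the caddy image, and the ports are all illustrative placeholders.
services:
  proxy:
    image: caddy:2
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
  blog:
    image: ghcr.io/example/blog:latest  # placeholder image
    restart: unless-stopped
```

With something like this checked in, `git pull && docker-compose up -d` either brings everything up or fails, which is roughly the one-command requirement.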


I don't know, that set of requirements sounds like containers are a good fit. I don't have an alternative for you. I would just ssh into the server and run the commands needed to update/start the services; it wouldn't be one command and it is not impossible to mess up.

I will say that consuming other people's services that I don't intend to develop on is easier with containers. I use podman for my jellyfin and Minecraft servers based on someone else's configs. My only issue with them is the complexity during development.


I have some hobby sites I host on a raspberry pi and currently I use make mainly because it's so "easy" to just ssh into the machine and run something like "pip install thing.whl && sudo systemctl restart thing" and I can have whatever services + reverse proxy running.
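That workflow can be captured in a tiny Makefile so the ssh session stays a single command (a sketch only; "thing" is a placeholder for the actual wheel and systemd unit name):

```makefile
# Hypothetical Makefile wrapping the one-liner above.
# WHEEL and the "thing" service name are illustrative placeholders.
WHEEL ?= dist/thing.whl

.PHONY: deploy
deploy:
	pip install --force-reinstall $(WHEEL)
	sudo systemctl restart thing
```

Then updating the service on the machine is just `make deploy`, with `make` stopping at the first failed step.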


Something I've been thinking about lately is whether there is value in understanding the systems we produce, and whether we are even expected to.

If I can just vibe and shrug when someone asks why production is down globally, then I'm sure the number of features I can push out increases. But if I am still expected to understand and fix the systems I generate, I'm not convinced it's actually faster to vibe and then try to understand what's going on, rather than thinking and writing up front.

In my experience, the more I delegate to AI, the less I understand the results. The "slowness and thinking" might just be a feature, not a bug; at times I feel that AI was simply the final straw that gave the nudge to lower standards.


>if I can just vibe and shrug when someone asks why production is down globally

You're pretty high up in the development, decision, and value-addition chain if YOU are the go-to person responsible for these questions. AI has no impact on your position.


Naa, I'm just a programmer. Experience may vary depending on company and country, for me this has been true from tiny startups to global corporations.

Tangentially, I don't even know what "responsible" means in the corporate world anymore; it seems to me no one is really responsible for anything. But the one thing that's almost certain is that I will fix the damn thing if I made it go boom.


> All anyone cares about is feature release velocity.

And at the same time it's impossible to convince tech-illiterate people that reducing complexity likely increases velocity.

Seemingly we only get budget to add, never to remove. The same goes for silver bullets: if Big Tech promises a [thing] you can pay for that magically resolves all your issues, management seems enchanted and throws money at it.

