Hacker News | geraneum's comments

In other engineering fields, no one calculates the numbers for building a plane or a dam by hand anymore. They rely heavily on software for design, simulations, etc. throughout the entire development cycle. Yet, starting in university, those engineers still learn to do those calculations by hand so they comprehend the underlying principles.

IMO, that’s what we should do as software engineers. The idea of letting AI "do the thinking" for you is a bad one. Sure, it can trivially write a sort function for you. Let it! But you still need to understand how that sort function works. If having the tool were a substitute for understanding the fundamentals, anyone with access to Catia, etc. could design a working airplane.
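To make the sort-function point concrete: the kind of understanding meant here is being able to reason through something like a textbook merge sort yourself, even if a tool writes it for you (illustrative Python, not any specific AI's output):

```python
# Textbook merge sort: split the list, sort each half recursively,
# then merge the two sorted halves. Knowing why this is O(n log n),
# and when the built-in sort is the better choice, is the kind of
# fundamental the tool doesn't replace.

def merge_sort(xs):
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    # One side is exhausted; the other's remainder is already sorted.
    return merged + left[i:] + right[j:]
```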


> poor adherence to high-level design principles and consistency. This can be solved with expert guardrails, I believe.

That’s a bit… handwavy…!


> In short: the implementation was performed in a very similar way to how a human programmer would do it, and not outputting a complete implementation from scratch “uncompressing” it from the weights.

> Instead, different classes of instructions were implemented incrementally, and there were bugs that were fixed…

Not sure the author fully grasps how and why LLM agents work this way. There’s a leap of logic here: the agent runs in a loop where command outputs get fed back as context for further token generation, and that loop is what produces the incremental, human-like process he’s observing. It’s still that “decompression” from the weights, still the LLM’s particular way of extracting and blending patterns from its training data, that does the actual work. The agentic scaffolding just lets it happen in many small steps against real feedback instead of all at once. So the novel output is real, but he’s crediting the wrong thing for it.
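The loop described above can be sketched in a few lines (all names here are illustrative, not any vendor's real API; the model and command runner are stand-ins):

```python
# Minimal sketch of an agentic loop: the model proposes the next
# command, the environment's real output is appended to the context,
# and generation continues from that enlarged context. This is why the
# work looks incremental even though each step is still sampled from
# the same weights.

def run_agent(model, task, run_command, max_steps=10):
    context = [f"Task: {task}"]
    for _ in range(max_steps):
        # Hypothetical interface: generate() returns a shell command,
        # or the sentinel "DONE" when the model decides it is finished.
        action = model.generate("\n".join(context))
        if action.strip() == "DONE":
            break
        output = run_command(action)              # real feedback
        context.append(f"$ {action}\n{output}")   # fed back next step
    return context
```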


> Developers with decades of experience still make basic security holes.

You see this type of template response copy-pasted under basically any post or comment of this kind.

I think at the end of the day we’ll be able to look back and see what/who fared better, based on actual data.


If the cuts were not due to AI, what benefit does it provide to claim that it was?

To be clear I’m not saying it was AI. I just wonder, why come out and say it like that? What’s the incentive vs. other reasons?


Herd effect (you don't stand out; everyone's doing it); looking "forward-thinking", i.e. heading off an ostensible future problem rather than admitting to overhiring or bad management initiatives; it juices the stock; it keeps wages down and employees in line; and if there's no ability or opportunity to expand (saturated market, bad business outlook ahead), it gives cover to scale down the labor force.


> If not, how would you explain that they had only 10,000 employees and not 20,000?

Simple: 1,000+ salaries > 10,000 Claude seats at $100/mo.
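Back-of-the-envelope, with a hypothetical all-in cost per engineer (the $15k/month figure below is my assumption, not a number from the comment):

```python
# Purely illustrative numbers: assume ~$15,000/month all-in per
# engineer vs. the $100/month Claude seat price from the thread.
engineers_per_month = 1_000 * 15_000   # 1,000+ salaries
seats_per_month = 10_000 * 100         # 10,000 Claude seats
# Even with 10x as many seats as engineers, the seats cost an order
# of magnitude less than the headcount.
```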


Are you criticizing the folks championing AI while… effectively championing the AI? What do you think people should do?


I honestly have no idea. We're stuck either way. I don't know what to do. What do you do if all you know is code and you're helping yourself out of a job whether you like it or not?

If you don't use AI you'll fall behind. If you do, you're accelerating your own redundancy.

I wouldn't consider myself a champion for AI. If you read my comment history you'll see that. I don't preach its wonders or pretend that we're all happy-fluffy in this world of ours. I mostly write my own code, use AI for review and to handle the trivial boring bits. I do use AI to build random tools I'd never want to take time away from "real" work to build, like helper scripts, nice TUIs for manual processes, etc. I do recognise the irony though.


I can sympathize. If you feel you’re in limbo emotionally and feel helpless, that’s natural. I’m in the same profession, and what you’re experiencing is not unthinkable, but sometimes, depending on one’s life conditions, everything can seem more daunting than it should. I believe talking to a professional about this may help. I’d do the same.

One thing I can say is that if we, as a collective of white-collar workers, are going to lose our jobs fast, then I wouldn’t fret much, because it won’t be on me alone to fix it; it’ll be a problem for a large chunk of humanity. Revolutions and uprisings have followed far less dire situations.


I wouldn’t call it limbo, and I don’t think I need therapy for this. It’s more like, the future is so uncertain but all signals are pointing in one direction.

Sure, you can say that you won’t fret much, but if you’re somewhere without much social security, you’re not going to have a safety net. The revolution might not be in your favor either, if there is one, and it would only come once more people have had their AI bubble popped.


But something something trickle down!

Or perhaps the public doesn’t owe corporations bailouts when push comes to shove?


Block has never been bailed out.


Wasn’t mentioned in the parent comment either.

Edit/P.S.: Should the individuals who don’t get a job from a bailed-out company receive a proportional tax refund?


It has gotten plenty of government support and easy cash though.


I’m gonna buy me some DRAM wafers for now. No one else has done that before. It’s innovative.


mm. Think big and start an ETF.


> This article seems to have "basically zero" content.

Why? It’s descriptive of the “past”, while you’re trying to predict the near or far “future” and projecting your assumptions onto it. Two different things.

