Hacker News | bmitc's comments

Basically, nothing has changed except the increase in noise. So all the suits who refuse to understand what software is have yet again decided to make things worse for professionals and for people who actually know what they're doing.

The departments and roles that LLMs most deeply need to be pointed at - business development, contracts, requirements, procurement - are the places least likely to get augmented, because of how technology decisions are made, both structurally and socially.

I've already heard - many times - that the place that needs the LLMs isn't really inside the code. It's the requirements.

History has a ton of examples of a new technology that gets pushed but doesn't displace the culture of the movers and shakers, even though it is more than capable of doing so and indeed probably should.


Not sure: The LLMs seem to be okay at coding recently but still horrible at requirements and interface design. They never seem to get the perspective right.

One example I recently encountered: the LLM designed the client/consumer-facing API from the perspective of how the data is stored.

The result was a collection of CRUD services for tables in a SQL db rather than something that explained the intended use.

Good for storage, bad for building on it.
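To make the contrast concrete, here is a hypothetical sketch (endpoint names and resources invented for illustration) of a storage-shaped API versus one organized around intended use:

```python
# Storage-shaped API: one CRUD endpoint per database table. Callers must
# know the schema and orchestrate several calls to do anything useful.
storage_shaped = [
    "POST /orders",           # insert a row into the orders table
    "POST /order_items",      # insert rows into the order_items table
    "PATCH /inventory/{id}",  # manually decrement stock levels
]

# Use-case-shaped API: one endpoint per thing a client actually wants to
# accomplish. The storage layout stays an internal implementation detail.
use_case_shaped = [
    "POST /checkout",           # place an order: validate, reserve stock, record items
    "GET /orders/{id}/status",  # answer the question the client actually has
]
```

The first set mirrors the tables, so every consumer re-implements the business workflow; the second explains the intended use and leaves the storage free to change.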


Can you expand on what you think software is in this context?

Why do you think that the suits refuse to understand what it is?


Although I appreciate the question, I don't necessarily want to get into the discussion about what software is here. This is mainly due to the lack of time I currently have.

To elaborate: software seems to be this thing that everyone thinks they understand. In my experience, it is extremely common for scientists (particularly physicists and engineering-oriented scientists), hardware engineers, "the suits", as it were, and others to think that code is just a dumb, simple thing that you slap on top of algorithms, hardware, feature requests, etc. They continually fail to realize that the overhead and interruptions that happen when software is onboarded exist precisely because it is the software engineers who understand how to design and think about systems, and it is the software engineers who end up discovering that no one has gathered requirements, performed holistic systems thinking, understood edge cases, or thought about user workflows. And this is despite everyone else believing that work is already "done" and "just needs software" now.

I'm still struggling to understand why this is the case for software.

So my point was that everyone who isn't a software engineer has this opinion that it's going to automate the software engineer away, and yet, they have failed to _actually ask the professionals in this domain_ as to what they think. The decision has been made because, again, software is just this thing that is the last mile for things that have already been thought about and are "done". They think that automating software will get rid of the pesky software engineer who gets in the way between the thing that's "done" and this last mile being completed.


To the "suits" AI means "efficiency".

Efficiency, to them, also means "lower costs", and when they talk about "costs" they mean "headcount", which is to say employees.

Put it together and the suits want to reduce headcount using AI.

To them, "clean code" is a scam and a waste of time that doesn't yield quick returns, just a weak excuse software engineers use to justify their roles.


There are not many good resources on Kalman filters. In fact, I have found a single one that I'd consider good. This is coming from someone who has spent a lot of time trying to understand Kalman filters from scratch.

Link to that good one?

It was a typo. I meant to say I haven't found a good one yet.
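For anyone landing here without a good resource: the core idea fits in a few lines. This is a minimal textbook-style sketch of a one-dimensional Kalman filter (constant-state model; noise variances chosen arbitrarily for illustration), not taken from any particular resource:

```python
def kalman_step(x, p, z, q=1e-3, r=0.1):
    """One predict/update cycle of a 1-D Kalman filter.

    x: current state estimate    p: estimate variance
    z: new measurement           q: process noise variance
    r: measurement noise variance
    Returns the updated (estimate, variance).
    """
    # Predict: the state is modeled as constant, so only uncertainty grows.
    p = p + q
    # Update: blend prediction and measurement via the Kalman gain.
    k = p / (p + r)      # gain: how much to trust the new measurement
    x = x + k * (z - x)  # move the estimate toward the measurement
    p = (1 - k) * p      # uncertainty shrinks after incorporating z
    return x, p

# Filtering a noisy constant signal pulls the estimate toward the truth.
x, p = 0.0, 1.0
for z in [1.1, 0.9, 1.05, 0.97, 1.02]:
    x, p = kalman_step(x, p, z)
```

The good resources are the ones that explain where the gain formula comes from, rather than just presenting the matrix form.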

There are also settings available in some offerings and not in others. For example, the Anthropic Claude API supports setting the model temperature, but the Claude Agent SDK doesn't.

> This blog post shows the journey that anyone not in one of those two vocal minorities is going through right now: A realization that AI coding tools can be a large accelerator but you need to learn how to use them correctly in your workflow and you need to remain involved in the code. It’s not as clickbaity as the extreme takes that get posted all the time. It’s a little disappointing to read the part where they said hard work was still required. It is a realistic and balanced take on the state of AI coding, though.

I appreciate the balanced takes and also the notion that one can use these AI tools to build software with principled use.

However, what I am still failing to see is concrete evidence that this is all faster and cheaper than a human just learning and doing everything themselves or with a small team. The cat is out of the bag, so to speak, but I think it's still correct to question these things. I am putting in a _lot_ of work to reach a principled status quo with these tools, and it is still quite unclear whether that's an actual improvement or just a side quest to wrangle tools that everyone else is abusing.


American citizens have willingly given up their freedom and allowed themselves to be captured by corporate control.

I hope to god this comes into vogue in the U.S.

Big f'ing surprise.

The source is linked to in this thread. Is that not the source code?

What possible purpose does this man's DNA serve except framing him for crimes later?

Take a look at Anthropic's repo. They auto-close issues after just a few weeks.

I don't think I've seen an issue of theirs that wasn't auto-closed.


Wait, isn’t software engineering a solved problem?


Yes, that’s why they have such great up time. They don’t go down multiple times per day.


Yes


I just randomly plugged in numbers to look up random Claude issues on github, and out of the 20 I checked, only one was closed as fixed. :-(
