Hacker Newsnew | past | comments | ask | show | jobs | submit | yoaviram's commentslogin

Yesterday I wrote a post about exactly this. Software development, as the act of manually producing code, is dying. A new discipline is being born. It is much closer to proper engineering.

Like an engineer overseeing the construction of a bridge, the job is not to lay bricks. It is to ensure the structure does not collapse.

The marginal cost of code is collapsing. That single fact changes everything.

https://nonstructured.com/zen-of-ai-coding/


> wrote

Quite a heavy-lifting word here. You understand why people flagged that post right? It's painfully non-human. I'm all for utilizing LLM, but I highly suggest you read Simon's posts. He's obviously a heavy AI user, but even his blog posts aren't that inorganic and that's why he became the new HN blog babe.

[0]: I personally believe Simon writes with his own voice, but who knows?


How paranoid do you want to get? Simone's written enough, such that you could just feed his blog to AI and ask it to write in his voice. Which, taken to the logical extreme, means that the last time he went to visit OpenAI, he was captured, and locked in a dungeon, and his online presence is now entirely AI with the right prompt. In fact, that's happened to everyone on this site, and we're all LLMs just predicting the next word at each other.

There's no actual way to determine if any words are from a silicon token generator or meat-based generator. It's not AI, it's human! Emdash. You're absolutely right!

system failure.


We have the entire web built on technical debt and LLMs mostly trained on that, what could go wrong? Cost will reside somewhere else if not on code

> It is much closer to proper engineering.

I would not equate software engineering to "proper" engineering insofar as being uttered in the same sentence as mechanical, chemical, or electrical engineering.

The cost of code is collapsing because web development is not broadly rigorous, robust software was never a priority, and everyone knows it. The people complaining that AI isn't good enough yet don't grasp that neither are many who are in the profession currently.


> The people complaining that AI isn't good enough yet don't grasp that neither are many who are in the profession currently.

I think the externalities are being ignored. Having time and money to train engineers is expensive. Having all the data of your users being stolen is a slap in the wrist.

So replacing those bad worekrs with AI is fine. Unless you remove the incentives to be fast instead of good, then yeah AI can be good enough for some cases.


Indeed, it's like those complaining self-driving cars occasionally crash when their crash rates are up to 90% less than humans . . .

You didn't write that and you shouldn't believe that you did.

This is such a strange take. Your words remind me of past crypto hype cycles, where people pushed web3.0 and NFT FOMO hysteria.

Engineering is the practical application of science and mathematics to solve problems. It sounds like you're maybe describing construction management instead. I'm not denying that there's value here, but what you're espousing seems divorced from reality. Good luck vibecoding a nontrivial actuarial model, then having it to pass the laundry list of reviews and having large firms actually pick it up.


> This is such a strange take. Your words remind me of past crypto hype cycles, where people pushed web3.0 and NFT FOMO hysteria.

Thats a little harsh. I think most everyone would agree we're in a transformative time for engineering. Sure theres hype, but the adoption in our profession (assuming you're an engineer) isn't waning.


It's not pleasant to read this.

    The claim here is profound: comprehension of the codebase at the function level is no longer necessary
It's not profound. It's not profound when I read the exact same awed blog post about how "agentic" is the future and you don't even need to know code anymore.

It wasn't profound the first time, and it's even dumber that people keep repeating it - maybe they take all the time they saved not writing, and use it to not read.


Stop putting forth your AI generated blog posts as your own work.

Agree. This is a transition from being "in" the loop to being "on" the loop.

The formal engineering disciplines are not defined by the construction vs design distinction so much as the regulatory gates they have passed and the ethical burdens they shoulder for society's benefit.

https://www.slater.dev/2025/09/its-time-to-license-software-...


I just finished writing a post about exactly this. Software development, as the act of manually producing code, is dying. A new discipline is being born. It is much closer to proper engineering.

Like an engineer overseeing the construction of a bridge, the job is not to lay bricks. It is to ensure the structure does not collapse.

The marginal cost of code is collapsing. That single fact changes everything.

https://nonstructured.com/zen-of-ai-coding/


> I just finished writing a post about exactly this. Software development, as the act of manually producing code, is dying.

It was never that. Take any textbook on software engineering and the focus was never the code, but on systems design and correctness. I'm looking at the table of contents of one (Software Engineering by David C. Kung) and these are a few sample chapters:

  ...
  4. Software Requirement Elicitation
  5. Domain Modelling
  6. Architectural Design
  ...  
  8. Actor-System Interaction Modeling
  9. Object Interaction Modeling
  ...
  15. Modeling and Design of Rule-Based Systems
  ...
  19. Software Quality Assurance
  ...
  24. Software Security
What you're talking about was coding, which has never been the bottleneck other than for beginners in some programming languages.

Our CEO, an expert in marketing has discovered Claude Code and is the one having the most open PR of all developers and is pushing for us to « quickly review ». He does not understand why review are so slow because it’s « the easiest part ». We live in a new world.

In what world do these new tools help with "laying bricks", but not with ensuring that the structure does not collapse? How is that work any more difficult than producing the software in the first place? It wasn't that long ago that these tools could barely produce a simple program. If you're buying into the promises of this tech, then what's stopping it from also being able to handle those managerial tasks much better than a human?

The seemingly profound points of your marketing slop article ignore that these new tools are not a higher level of abstraction, but a replacement of all cognitive work. The tech is coming for your job just as it is coming for the job of the "bricklayer" you think is now worthless. The work you're enjoying now is just a temporary transition period, not an indication of the future of this industry.

If you enjoy managing a system that hallucinates solutions and disregards every other instruction, that's great. When you reach a dead end with that approach, and the software is exposing customer data, or failing in unpredictable ways, hopefully you know some good "bricklayers" that can help you with that.


Accountability then

Anticipating modes of failure, creating tooling to identify and hedge against risks.

If we could do this it would have been done already. Outsourced devs would be ubiquitous.

This thread reads like an advertisement for ChatGPT Health.

I came to share a blog post I just posted titled: "ChatGPT Health is a Marketplace, Guess Who is the Product?"

OpenAI is building ChatGPT Health as a healthcare marketplace where providers and insurers can reach users with detailed health profiles, powered by a partner whose primary clients are insurance companies. Despite the privacy reassurances, your health data sits outside HIPAA protection, in the hands of a company facing massive financial pressure to monetize everything it can.

https://consciousdigital.org/chatgpt-health-is-a-marketplace...


> This thread reads like an advertisement for ChatGPT Health.

This thread has a theme I see a lot in ChatGPT users: They're highly skeptical of the answers other people get from ChatGPT, but when they use it for themselves they believe the output is correct and helpful.

I've written before on HN about my friend who decided to take his health into his own hands because he trusted ChatGPT more than his doctors. By the end he was on so many supplements and "protocols" that he was doing enormous damage to his liver and immune system.

The more he conversed with ChatGPT, the better he got at getting it to agree with him. When it started to disagree or advise caution, he'd blame it on overly sensitive guardrails, delete the conversation, and start over with an adjusted prompt. He'd repeat this until he had something to copy and paste to us to "prove" that he was on the right track.

As a broader anecdote, I'm seeing "I thought I had ADHD and ChatGPT agrees!" at an alarming rate in a couple communities I'm in with a lot of younger people. This combined with the TikTok trend of diagnosing everything as a symptom of ADHD is becoming really alarming. In some cohorts, it's a rarity for someone to believe they don't have ADHD. There are also a lot of complaints from people who are angry their GP wouldn't just write a prescription for Adderall and tips for doctor shopping around to find doctors who won't ask too many questions before dispensing prescriptions.


> I'm seeing "I thought I had ADHD and ChatGPT agrees!" at an alarming rate in a couple communities I'm in with a lot of younger people

This may be caused by ChatGPT response patterns but doesn't necessarily mean there is an increase of false (self-)diagnoses. The question is: What is alarming about the increasing rate of diagnoses?

There has been an increase of positive diagnoses over the last decades that have been partially attributed to adult diagnoses that weren't common until (after) the 1990s and the fact that non-male patients often remained undiagnosed because of a stereotypical view on ADHD.

If the diagnosis helps, then it's a good thing! If it turns out that 10% of the population are ADHDers then let's see how we can change our environment that reflects that fact. In many cases, meds aren't needed as much when public spaces provide the necessary facilities to retreat for a few minutes, wear headphones, chew gum or fidget.

The story of your friend sounds very bad and I share your point here, completely. But concerning ADHD, I still don't see what's bad about the current wave of self-diagnoses. If people buy meds illegally, use ChatGPT as a therapist, etc. THAT is a problem. But not identifying with ADHD itself (same for Autism, Depression, Anxiety and so on).

ADHD may or may even be a reinforcing factor for a LLM user to be convinced by the novelty of the tool - but that would have to be empirically evaluated. If it were so, then this could even contribute to a better rate of diagnoses without ChatGPT capabilities in this field contributing much to the effect. Many ADHDers suffer from failing at certain aspects of daily life over and over and advice that helps others only makes them feel worse because it doesn't work for them (e.g. building habits or rewarding oneself for reaching a milestone can be much more difficult for ADHDers than non-ADHDers). I'm just guessing here and this doesn't count for all ADHDers, but: Whenever a new and possibly fun tool comes along that feels like an improvement, there can be a spark of enthusiasm that may lead to an increased trust. This usually decreases after a while and I guess giving LLMs a bit more time of being around, the popularity in this field may also decrease.


I don't see why they shouldn't be sued by misleading people with such products


Great write up. I'd even double down on this statement: "You can opt in to chat history privacy". This is really "You can opt in to chat history privacy on a chat-by-chat basis, and there is no way to set a default opt-out for new chats".


This. It’s the same play with their browser. They are building the most comprehensive data profile on their users and people are paying them to do it.


Is this any worse than Google? Seems like the same business model.


There are lots of companies that do this. Doesn't make it right.

The real "evil" here is that companies like Meta, Google, and now OpenAI sell people a product or service that the customer thinks is the full transaction. I search with Google, they show me ads - that's the transaction. I pay for Chatgpt, it helps me understand XYZ - that's the transaction.

But it isn't. You give them your data and they sell it - that's the transaction. And that obscurity is not ethical in my opinion.


> You give them your data and they sell it - that's the transaction

I think that's the wrong framing. Let's get real: They're pimping you out. Google and Meta are population-scale fully-automated digital pimping operations.

They're putting everyone's ass on the RTB street and in return you get this nice handbag--err, email account/YouTube video/Insta feed. They use their bitches' data to run an extremely sophisticated matchmaking service, ensuring the advertiser Johns always get to (mind)fuck the bitches they think are the hottest.

What's even more concerning about OpenAI in particular is they're poised to be the biggest, baddest, most exploitative pimp in world history. Instead of merely making their hoes turn tricks to get access to software and information, they'll charge a premium to Johns to exert an influence on the bitches and groom them to believe whatever the richest John wants.

Goodbye democracy, hello pimp-ocracy. RTB pimping is already a critical national security threat. Now AI grooming is a looming self-governance catastrophe.


I think you just wrote a treatment for the next HBO Max sunday drama


And it's not only your data, that makes it much worse.

"You are the product" is a good catchphrase to make people understand. But actually when you search or interact with LLMs, you provide not only primary data about yourself but also about other people by searching for them in connection with specific search terms, by using these services from your friend's house which connects you to their IP-Address, by uploading photos of other people etc.

"You are the product and you come with batteries (your friends)."


Does Google have your medical records? It doesn't have mine.


They tried to at one point with "google health". They are still somewhat trying to get that information with the fitbit acquisition.


People email about their medical issues and google for medical help using Gmail/Google Search. So yes, Google has people's medical records.


If you hear me talking to someone about needing to pick up some flu medicine after work do you have my medical records?


No, but if I hear you telling someone you have the flu and are picking up flu medicine after work then I have a portion of your medical records. Why is it hard for people on HN to believe that normal people do not protect their medical data and email about it or search Google for their conditions? People in the "real world" hook up smart TV's to the internet and don't realize they are being tracked. They use cars with smart features that let them be tracked. They have apps on their phone that track their sentiments, purchases, and health issues... All we are seeing here is people getting access to smart technology for their health issues in such a manner that they might lower their healthcare costs. If you are an American you can appreciate ANY effort in that direction.


Maybe stop by to consider that knowing a few scattered facts and having your complete medical records is not the same thing, Hemingway.


how do you know they don't?


Since when is Google the model to emulate?


Depends on your goals. If you are starting a business and you see a company surpass the market cap of Apple, again, then you might view their business model as successful. If you are a privacy advocate then you will hate their model.


Well you said "is this any _worse_" (emphasis mine) and I could only assume you meant ethically worse. At which point the answer is kind of obvious because Google hasn't proven to be the most ethical company w.r.t. user data (and lots of other things).


since always


May your piece stay at the highest level of this comment section.


I get that impression too - but also it's HN and enthusiastic early adoption is unsurprising.

My concern, and the reason I would not use it myself, is the alto frequent skirting of externalities. For every person who says "I can think for myself and therefore understand if GPT is lying to me," there are ten others who will take it as gospel.

The worry I have isn't that people are misled - this happens all the time especially in alternative and contrarian circles (anti-vaxx, homeopathy, etc.) - it's the impact it has on medical professionals who are already overworked who will have to deal with people's commitment to an LLM-based diagnosis.

The patient who blindly trusts what GPT says is going to be the patient who argues tooth and nail with their doctor about GPT being an expert, because they're not power users who understand the technical underpinnings of an LLM.

Of course, my angle completely ignores the disruption angle - tech and insurance working hand in hand to undercut regulation, before it eventually pulls the rug.


Sharing my experience with SpecKit in case anyone finds it useful.

I've been using Speckit for the last two weeks with Claude Code, on two different projects. Both are new code bases. It's just me coding on these projects, so I don't mind experimenting.

The first one was just speckit doing its thing. It took about 10 days to complete all the tasks and call the job done. When it finished, there was still a huge gap. Most tests were failing, and the build was not successful. I had to spend an equally long, excruciating time guiding it on how to fix the tests. This was a terrible experience, and my confidence in the code is low because Claude kept rewriting and patching it with many fixes to one thing, breaking another.

For the second project, I wanted to iterate in smaller chunks. So after SpecKit finished its planning, I added a few slash commands of my own. 1) generate a backlog.md file based on tasks.md so that I don't mess with SpecKit internals. 2) plan-sprint to generate a sprint file with a sprint goal and selected tasks with more detail. 3) implement-sprint broadly based on the implement command.

This setup failed as the implement-sprint command did not follow the process despite several revisions. After implementing some tasks, it would forget to create or run tests, or even implement a task.

I then modified the setup and created a subagent to handle task-specific coding. This is easy, as all the context is stored in SpecKit files. The implement-sprint functions as an orchestrator. This is much more manageable because I get to review each sprint rather than the whole project. There are still many cases where it declares the sprint as done even though tests still fail. But it's much easier to fix, and my level of trust in the code is significantly higher.

My hypothesis now is that Claude is bed at TDD. It almost always has to go back and fix the tests, not the implementation. My next experiment is going to be to create the tests after the implementation. This is not ideal, but at this point, I'd rather gain velocity, since it would be faster for me to code it myself.


Essentially what this article is asking for, in most cases, is a better UI/UX for one of the foundation models.


This is one of several deceptive design patterns used by data brokers. Last year, we (the nonprofit consciousdigital.org) published a guide titled "How Deceptive Design is Used to Compromise Your Privacy and How to Fight Back". It contains 10 data protection deceptive patterns and countermeasures:

https://consciousdigital.org/wp-content/uploads/2023/04/dece...


It seems to me that the ongoing “vibe coding” debate on HN, about whether AI coding agents are helpful or harmful, often overlooks one key point: the better you are as a coder, the less useful these agents tend to be.

Years ago, I was an amazing C++ dev. Later, I became a solid Python dev. These days, I run a small nonprofit in the digital rights space, where our stack is mostly JavaScript. I don’t code much anymore, and honestly, I’m mediocre at it now. For us, AI coding agents have been a revelation. We are a small team lacking resources and agent let us move much faster, especially when it comes to cleaning up technical debt or handling simple, repetitive tasks.

That said, the main lesson I learned about vibe coding, or using AI for research and any other significant task, is that you must understand the domain better than the AI. If you don’t, you’re setting yourself up for failure.


I think it's the opposite , the better you are as a coder and know your domain, the better you can use ai tools. someone with no expertise is set up for failure


Totally agree. I see LLM assistance as a multiplier on top of your existing expertise. The more experience you have the more benefit you can get.


Indeed, I can predict a huge gulf between pre-vibe senior engs and post-vibe lazy learners: the seniors get massive amplification and meanwhile those on the ground floor are not learning, and even gradually loose what little they did learn


I have to add that working effectively with LLMs is a skill too, mostly in terms of prompting and system level prompts to skip _most_ of the fabrication and nonsense.

They have to be explicitly told often to keep things brief, non-fiction and non-sycophantic.

Then you still need to curate responses, but less so.


Agreed, and being productive with Claude Code and similar cli tools requires being deliberate about creating docs for background info, spec, and implementation plan, and final implementation notes.


Domain knowledge is key I agree. I think we’re going to see waterfall development come back. Domain experts, project managers and engineers gathering requirements and planning architecture up front in order to create the ultra detailed spec needed for the agents to succeed. Between them they can write a CLAUDE.md file, way of working (“You will do TDD, update JIRA ticket like so”) and all the supporting context docs. There isn’t the penalty anymore for waterfall since course corrections aren’t as devastating or wasteful of dev hours.


TDD seems to be a good strategy if you trust the AI not to cheat by writing tests that always pass.


You need to keep the TDD LLM and code LLM as a "ping pong pair" with you as the curator / moderator.


> That said, the main lesson I learned about vibe coding, or using AI for research and any other significant task, is that you must understand the domain better than the AI. If you don’t, you’re setting yourself up for failure.

Only if you fully trust it works. You can also first take time to learn about the domain and use AI to assist you in learning it.

This whole thing is really about assistance. I think in that sense, OpenAI's marketing was spot on. LLMs are good at assisting. Don't expect more of them.


The only "overlooked" part of "vibe coding" conversations on HN appear to be providing free training for these orgs that host the models, and the environmental and social impact of doing so.


>Trust and privacy are at the core of our products. We give you tools to control your data—including easy opt-outs and permanent removal of deleted ChatGPT chats (opens in a new window) and API content from OpenAI’s systems within 30 days.

No you don't. You charge extra for privacy and list it as a feature on your enterprise plan. Not event paying pro customer get "privacy". Also, you refuse to delete personal data included in your models and training data following numerous data protection requests.


Except all users can opt out. Am I missing something?

It says here:

> If you are on a ChatGPT Plus, ChatGPT Pro or ChatGPT Free plan on a personal workspace, data sharing is enabled for you by default, however, you can opt out of using the data for training.

Enterprise is just opt out by default...

https://help.openai.com/en/articles/8983130-what-if-i-want-t...


Indeed. Click your profile in the top right, click on the settings icon. In Settings, select "Data Controls" (not "privacy") and then there's a setting called "Improve the model for everyone" (not "privacy" or "data sharing") and turn it off.


so they technically kind of follow the law but make it as hard as possible?


Personally I feel it's okay but kinda weird. I mean why not call it privacy. Gray pattern, IMHO. For example venice.ai simply doesn't have a privacy setting because they don't use the data from chats. (They do have basic telemetry, and the setting is called "Disable Telemetry Collection").


Not sharing you data with other users does not mean the data of a deleted chat are gone, those are very likely two completely different mechanisms.

And whether and how they use your data for their own purposes isn't touched by that either.


what about all the rest of the data they use for training, there's no opt out from that


This is a typical "corporate speak" / "trustwahsing" statement. It’s usually super vague, filled with feel-good buzzwords, with a couple of empty value statements sprinkled on top.


Software is a liability, a product is an asset.


If what you are saying is that significant parts of the tech landscape will change, then that's exactly the point.


Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: