
> Tell me how you, without knowing the code base, get the LLM to not add these classes?

Stop talking to it like a chatbot.

Draft, in your editor, the best contract-of-work you can as if you were writing one on behalf of NASA to ensure the lowest bidder makes the minimum viable product without cutting corners.

---

  Goal: Do X.

  Sub-goal 1: Do Y.

  Sub-goal 2: Do Z.

  Requirements:

    1. Solve the problem at hand in a direct manner with a concrete implementation instead of an architectural one.

    2. Do not emit abstract classes.

    3. Stop work and explain if the aforementioned requirements cannot be met.
---

For the record: Yes, I'm serious. Outsourcing work is neither easy nor fun.
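A contract like the one above is easy to template rather than retype per task. A minimal sketch (the function and parameter names are my own, purely illustrative) that renders such a prompt from its parts:

```python
# Sketch: rendering a contract-of-work prompt from its parts before
# handing it to an LLM. Function and parameter names are illustrative,
# not part of any real API.
def render_contract(goal, sub_goals, requirements):
    lines = [f"Goal: {goal}", ""]
    for i, sub in enumerate(sub_goals, 1):
        lines.append(f"Sub-goal {i}: {sub}")
    lines.append("")
    lines.append("Requirements:")
    for i, req in enumerate(requirements, 1):
        lines.append(f"  {i}. {req}")
    return "\n".join(lines)

prompt = render_contract(
    goal="Do X.",
    sub_goals=["Do Y.", "Do Z."],
    requirements=[
        "Solve the problem at hand in a direct manner with a concrete "
        "implementation instead of an architectural one.",
        "Do not emit abstract classes.",
        "Stop work and explain if the aforementioned requirements "
        "cannot be met.",
    ],
)
print(prompt)
```

The rendered string is what you would paste into the chat window or send as the user message; keeping the requirements as data makes it trivial to reuse the "no corner-cutting" clauses across tasks.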



Every time I see something like this, I wonder what kind of programmers actually do this. For the kinds of code that I write (specific to my domain and generates real value), describing "X", "Y", and "Z" is a very non-trivial task.

If describing those is easy, then I would assume the software isn't that novel in the first place. Maybe get something COTS instead.

I've been coding for 25 years. It is easier for me to describe what I need in code than it is to do so in English. May as well just write it.


> I've been coding for 25 years.

20 here, mostly in C; mixture of systems programming and embedded work.

My only experience with vibe-coding is when working under a time-crunch very far outside of my domain of expertise, e.g., building non-transformer-based LLMs in Python.


I mean, unless you just don't know how to program, I struggle to see what value the LLM is providing. By the time you've broken it down enough for the LLM, you might as well just write the code yourself.


I've been writing code for over 20 years, mostly in C.

My only experience with vibe-coding is when working under a time-crunch very far outside of my domain of expertise.

No amount of "knowing how to program" is going to give me >10 years of highly-specialized PhD-level Mathematics experience in under three months.


Then how do you know it got it right?


I was provided with a battery of externally-produced tests, benchmark scripts, etc. I was told to assume that the tests were comprehensive.

Independent of this, I used competing models produced by different organizations (e.g. OpenAI vs. Google) to test & verify each other's work.

I could also, to some extent, follow along with the math itself.


Yeah, but the LLM is simply faster, especially in this case where you know exactly what you need; it's just a lot of typing.



