Hacker News

> Instead, you head over to Reddit where the programming community is much nicer

Is this true? Wondering if there's a more objective way to know than endless anecdotes. I think programming communities are stereotypically pretty mean. I don't really see Reddit as a "safe place". But maybe there are smaller subreddits where being wrong doesn't make you feel awful?

> You can even go to ChatGPT, where it’ll give you a confidently wrong answer that looks so correct that you’ll spend another 7 hours debugging

I'm quite perplexed by this same talking point being regurgitated. These LLMs do indeed hallucinate. But I've found, with coding problems, that it's very easy to see when it's wrong if you're working in a domain you're familiar with. I'm doing a lot of React development with ChatGPT (GPT-4) as a kind of intern-on-steroids and it's working really well. I can usually identify when it's being silly, as I've worked with React for a few years. Of course, without that it's hard. But even if I'm in unfamiliar territory I can ask it to write tests to confirm its code works. I can also hand it stack traces and it'll usually be very helpful at debugging its own code.

An example: I'm not competent at shell stuff, but it's been such a boon at helping me hack and pipe things together. Just two days ago I wanted to generate a big bird's-eye grid of a huge PDF document. I had no idea how to, and asked it point-blank to write some code. Within a couple of messages it generated a Python script with PIL and pdf2image imports, plus shell commands to get things installed and $PATH properly configured. One cycle of debugging because I was missing a dependency, and boom, done. Took me 5 minutes. Would have taken 30 minutes or more otherwise (and a tonne of pointless cognition/research/rabbit-holes).
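For the curious, a minimal sketch of the kind of script described above (my own reconstruction, not the actual GPT-4 output; the filename, thumbnail size, and column count are assumptions): render each PDF page to an image with pdf2image, then tile the pages into one grid with PIL.

```python
from PIL import Image

def make_grid(pages, cols=8, thumb=(200, 280)):
    """Tile a list of PIL images into a single bird's-eye grid."""
    thumbs = [p.copy() for p in pages]
    for t in thumbs:
        t.thumbnail(thumb)  # shrink in place, preserving aspect ratio
    rows = -(-len(thumbs) // cols)  # ceiling division
    grid = Image.new("RGB", (cols * thumb[0], rows * thumb[1]), "white")
    for i, t in enumerate(thumbs):
        grid.paste(t, ((i % cols) * thumb[0], (i // cols) * thumb[1]))
    return grid

if __name__ == "__main__":
    # pdf2image wraps the poppler utilities, which must be installed
    # separately (e.g. `apt install poppler-utils`) -- likely the missing
    # dependency that cost the one debugging cycle.
    from pdf2image import convert_from_path
    pages = convert_from_path("document.pdf", dpi=50)
    make_grid(pages).save("grid.png")
```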



GPT-4 is my one-stop-shop for 80% of programming-related questions, and I get much more useful feedback as I am able to have a live conversation and drill into anything and everything.

Every interaction with GPT-4 makes me a better programmer. It's also obvious when things aren't right: The problem isn't solved. So I also become a better mentor as I try to coax the right answers out of GPT-4. I ask it to explain its reasoning, I ask it to go over pros/cons of different approaches, I ask it to give me security reviews of provided code. GPT-4 really shines in filling in the gaps for old/new APIs where I haven't RTFM.

But I don't rely on it for correctness. That is my job as an engineer. I am just seeing the same stupid arguments play out that got played out over IDEs, higher-level languages, etc.

Anything that makes me a faster and better programmer is worth it, even if it comes with caveats.


Very much agree with what you've said here, and I love the idea that it makes me a better programmer/mentor. Hadn't really thought of it that way!


My first interaction with LLMs for programming was asking ChatGPT about one of our interview questions: sending a TCP request, a UDP packet, and an ICMP request. It confidently wrote code using TcpSocket (correct), UdpSocket (correct) and IcmpSocket (hallucination). Further attempts to tell it that it was incorrect ended up with more and more incorrect code. I guess Rust isn't common enough for it to know it well.


For what it’s worth, I actually laughed out loud at the idea of Reddit programming being nice at all, let alone nicer than SO. My wife has plenty of horror stories from when she was learning to program.


The problem is that SO is so toxic that it makes Reddit's toxicity seem rather tolerable.


The few times I've tried using ChatGPT or another LLM as a coding assist, the "confidently wrong answer that looks correct" was the entirety of my experience. (Mostly the failure mode was mixing up incompatible instructions from various versions of the framework or toolchain: even if I specify a version number it'll still often want to use syntax or functions that don't exist in that version.) I did not find it to be a time saver.


Hmm fair. It's strange that our experiences are so different. Can I ask what types of problems you ask it to solve? FWIW I've had to take quite a lot of time figuring out how to talk to it in a way that gets good results.


The one where I struggled the longest was trying to put together the right webpack configuration to generate multiple static files based on input in markdown format. It kept switching which plugins it wanted me to use, or mixed up functions from conflicting plugins, and often mixed up syntax from different versions of webpack itself.

I finally gave up on that one when it got caught in a loop somehow where it apologized for giving me the wrong line of code for an import, gave an obviously wrong explanation for why it didn't work, then "corrected" it to the exact same line of code.

Another attempt I was asking it to compare different ways for measuring the amount of difference between data trees -- it did give me the names of a couple of different algorithms, and very wordy, plausible-looking descriptions of how each of them worked... neither of which was terribly helpful, because they both boiled down to "recursively examine the tree and tally up the differences."

Asked for an implementation example, it gave me code that expected input arrays of pre-calculated edit costs, and suggested I write my own function to convert the tree data into that array format.

So that one was extra weird, in that it wasn't wrong, just unhelpful, like here I'll do the easy part for you and leave the thing you asked about as an exercise for the reader.
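To illustrate why the descriptions felt so unhelpful, here's roughly what "recursively examine the tree and tally up the differences" boils down to (a minimal sketch, assuming trees stored as nested dicts -- the actual data format isn't given above):

```python
def tree_diff_count(a, b):
    """Naively count the differing keys/leaves between two nested-dict trees."""
    if not (isinstance(a, dict) and isinstance(b, dict)):
        # Leaf (or mismatched shapes): compare values directly.
        return 0 if a == b else 1
    diffs = 0
    for key in set(a) | set(b):
        if key not in a or key not in b:
            diffs += 1  # a whole subtree was added or removed; count it once
        else:
            diffs += tree_diff_count(a[key], b[key])
    return diffs
```

This is the "easy part"; proper tree edit distance algorithms additionally assign costs to insertions, deletions, and relabelings -- the pre-calculated edit-cost arrays the implementation example apparently expected as input.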

I dunno, maybe with practice I could learn how to drag it towards helpfulness, but for now RTFM still seems easier.




