
Most LLM quality discussions have a shelf life measured in weeks. The companies involved are leapfrogging each other with model updates and tweaks every few weeks.

Try it yourself. I'm getting a lot of value out of just using ChatGPT for coding. It's not without flaws, but I can get it to do a lot of routine stuff quite quickly. What I like about the desktop client is that a prompt is just one alt+space away. I usually copy-paste whatever I'm working on and then ask it to do stuff to it.

There's some art to the prompting, and you usually have to nudge it not to be lazy and to do the whole thing you asked for. It seems the engineers on the other side are working really hard to minimize token usage.

I find it's increasingly the UX that's holding me back, not the model quality. Context windows are now big enough to hold a lot of stuff. But how do you get everything in there that matters? Manually copy-pasting stuff together is tedious. I actually wrote a script (well, with some LLM help) that flattens everything in my repository into a single file that I then simply attach to a conversation. Works surprisingly well.
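The commenter doesn't share the script, but a minimal sketch of the idea — walk the repo, skip junk directories, and concatenate text files with a path header before each one — might look like this (the extension list, skip list, and header format are all assumptions, not the commenter's actual code):

```python
#!/usr/bin/env python3
"""Hypothetical repo-flattening sketch: concatenates source files into
one text blob suitable for attaching to an LLM conversation."""
import os

SOURCE_EXTENSIONS = {".py", ".js", ".ts", ".md", ".html", ".css"}  # assumed
SKIP_DIRS = {".git", "node_modules", "__pycache__"}                # assumed

def flatten_repo(root: str) -> str:
    chunks = []
    for dirpath, dirnames, filenames in os.walk(root):
        # prune directories we never want in the context window
        dirnames[:] = [d for d in dirnames if d not in SKIP_DIRS]
        for name in sorted(filenames):
            if os.path.splitext(name)[1] not in SOURCE_EXTENSIONS:
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8") as f:
                    body = f.read()
            except (UnicodeDecodeError, OSError):
                continue  # skip binaries and unreadable files
            rel = os.path.relpath(path, root)
            # label each file so the model knows where it came from
            chunks.append(f"===== {rel} =====\n{body}")
    return "\n\n".join(chunks)

if __name__ == "__main__":
    import sys
    print(flatten_repo(sys.argv[1] if len(sys.argv) > 1 else "."))
```

The path headers matter: without them the model can't tell you which file a suggested change belongs to.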



I also found that passing the whole codebase in a single file in the context works really well. I've tried Cursor et al., but I found that its not having the full context of the codebase (and the back and forth needed for it to request files) was slower and didn't really yield better results. Granted, I work on projects where the codebase fits in 100-200kb text files, but even so I'm only at 20% of Claude's context limit.

I also found the UX of Claude to be better for this, especially the Projects feature. I can just put the codebase in the Project's context, then start a new conversation for each question or problem.

The only pain point I have is that it seems heavily optimized to show only the changes to existing files rather than rewriting them in full, which makes copy-pasting into my IDE a bit of a pain. I'll see if I can write a system prompt to force it to generate diffs or a similar format that could be applied to my code automatically.
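If the model can be coaxed into emitting standard unified diffs, applying them is a solved problem: `patch(1)` (or `git apply` inside a repo) does it directly. A toy illustration with made-up file names, assuming the model's output is saved to `change.diff`:

```shell
# Original file, standing in for a file the model was asked to change.
cat > app.txt <<'EOF'
hello
world
EOF

# A unified diff as a model might emit it.
cat > change.diff <<'EOF'
--- app.txt
+++ app.txt
@@ -1,2 +1,2 @@
 hello
-world
+there
EOF

# Apply the diff; 'git apply change.diff' works too inside a repo.
patch app.txt < change.diff
cat app.txt
```

The catch is that models sometimes get hunk headers or context lines slightly wrong, so a fuzzier applier (or a retry prompt) may still be needed in practice.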


Personally, I've been getting great results from Claude. I was initially using ChatGPT, but now I'm finding Cursor + Claude to be a good fit.

And no, I'm not taking the alternative view to you just to prove my point haha.

I feel like AI is ripe for a tabs-vs-spaces moment. If only Silicon Valley (the show) were still around. The material would practically... write itself...



