
I have yet to find ChatGPT useful for topics where I have no clue what the right answer is. It's still not reliable or specific enough.


I use it for things that I know the answer to but that would take much longer to implement by hand. It works relatively well for that. If it doesn't know something, I can nudge it in the right direction, and it's still faster than doing everything myself. I've built many projects with it. I could have built them by hand, and indeed I started to, but never finished. I finally managed to complete projects I hadn't finished before because there was just so much to do and so little time.


I use it for cheap one off stuff


I stopped using LLMs entirely. Proper googling (which isn't broken, btw) doesn't cost that much more time, I realised, and it lets me pick up secondary and tertiary related factual info that I didn't ask for, because how would I even know to ask for it yet?

So spending just a little more time on the (re)search via standard methods gives me more extensive, fact-checked, reviewed and discussed information than any of the LLMs I've used ever did.


Hmm, such as? What would you consider one off stuff?


Using aider to make a simple Python script to fetch that csv and graph it with matplotlib was an acceptable experience. The worst part is how Claude refused to use seaborn.
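For context, the kind of one-off script described here is a few lines of CSV parsing plus a plot call. This is a minimal sketch, not the commenter's actual script: the column names and data are placeholders, and the inline string stands in for the fetched CSV body (e.g. the result of `urllib.request.urlopen(url).read()`).

```python
import csv
import io

import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt

# Placeholder for the fetched CSV body; the real URL/columns are not shown
# in the comment above.
csv_text = "date,value\n2024-01,10\n2024-02,13\n2024-03,9\n"

rows = list(csv.DictReader(io.StringIO(csv_text)))
dates = [r["date"] for r in rows]
values = [float(r["value"]) for r in rows]

plt.plot(dates, values, marker="o")
plt.xlabel("date")
plt.ylabel("value")
plt.savefig("plot.png")
```

Exactly the sort of glue code that is tedious to type but easy to verify by reading.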

I used it to write a little bit of bash glue for a build system too. I’m no bash expert, but the script was only 20 lines, so I went through each and it was correct and I learned a bit of the % syntax.

Not really something I like spending $20 a month on, so nowadays if I feel like using AI I just use Llama.


Isn't this more of an "anything is better than google ad-ified and seo optimized trash pile" moment?

Even if the AI is wrong a percentage of the time, I almost prefer "dumb/buggy wrong" over "capitalist lies wrong". Of course, neither actually deserves a future.




