
Seconded! I'm using LLMs in many different ways, like you: for small troubleshooting tasks, quick shell scripts, coding, or simply asking questions.

I use a wide variety of tools. For more private or personal tasks, I mostly rely on Claude and OpenAI; sometimes I also use Google or Perplexity, whichever gives the best results. For business purposes, I use either Copilot within VS Code or Claude, OpenAI, and Google models via an internal corporate platform. I've also experimented a bit with Copilot Studio.

I’ve been working like this for about a year and a half now, though I haven’t had access to every tool the entire time.

So far, I can say this:

Yes, LLMs have increased my productivity. I’m experimenting with different programming languages, which is quite fun. I’m gaining a better understanding of various topics, and that definitely makes some things easier.

But regardless of the model or its version, I also find myself getting really, really frustrated. The more complex the task, the further I step outside well-trodden paths, and the less the work is just piecing together simple components, the more they all tend to fail. And if that's not enough: in some cases, I'd even say it takes more time to fix the mess an LLM makes than it ever saved me in the first place.

Right now, my honest conclusion is this: LLMs are useful for small code-completion tasks, troubleshooting, and explanations, but that's about it. They're not taking our jobs anytime soon.
