I find LLMs to be decent unblockers. I only turn to them from time to time though, unless I'm in a playful mood and poking at various ideas. As a coder I also ask for snippets when I'm lazy. I tried growing a slightly larger solution a few times and it failed in dumb ways. It was clear it doesn't really comprehend the way we do, it's not aware it's moving in circles, and so on. All of this will probably see a lot of incremental improvement, and as a tool LLMs will definitely stay, but fundamentally they can't really think, at least not the way we do, and expecting that is foolish.