This exactly. LLMs can't reason, so we shouldn't expect them to try. They can do translation extremely well, so things like converting descriptions to 90-95% correct code in 10-100x less time, or converting from one language to another, are the killer use cases IMO.
But expecting them to solve difficult unsolved problems is a fundamental misunderstanding of what they are under the hood.
I picked this problem specifically because it's about "converting from one language to another". The problem is already solved in the literature. I understand that doing cutting-edge research is a different problem, and that is explicitly not what I'm doing here, nor what I'm expecting of the tool. I have coauthored an actual published computer science paper, and this exercise is VERY far from the complexity of that.
Could you share some concrete experience of a problem where aider, or a tool like it, helped you? What was your workflow, and how was the experience?