Doesn't matter. First, yes, a modern AI will come back and ask questions. Second, the AI is so much faster at interactions than a human that you can use the saved time to glance at its work and redirect it. The AI will come back with 10 prototype attempts in an hour, while a human will take a week for each, interrupting you with more questions about easy things along the way.
Sure, LLMs are a useful tool, and fast, but the point is they don't have human-level intelligence, can't learn, and are not autonomous outside of an agent that will attempt to complete a narrow task (with no ownership of it and no guarantee of eventual success).
We'll presumably get there eventually and build "artificial humans", but for now what we've got is LLMs - tools for language task automation.
If you want to ASSIGN a task to something/someone, then you need a human or an artificial human. For now that means assigning the task to a human, who will in turn use the LLM as a tool. Sure, there may be some productivity increase (although some studies have indicated the exact opposite), but ultimately if you want to get more work done in parallel, you need more entities that you can assign tasks to, and for the time being that means humans.
> the point is they don't have human level intelligence
> If you want to ASSIGN a task to something/someone then you need a human or artificial human
Maybe you haven't experienced it, but a lot of junior devs don't really display that much intelligence. Their operating input is a clean task list, which they take and convert into code. It's more like "code entry" ("data entry", but with code).
The person assigning tasks to them is doing the thinking, and they are still responsible for the final output, so if they find a computer better and cheaper at "code entry" than a human, then that's what they'll assign it to. As you can see in this thread, many are already doing this.