If there are no human software engineers, who will be legally and financially responsible for the code that is shipped into production? Will OpenAI, Anthropic et al. assume responsibility for damages when critical systems fail for any of their users, or will it be the non-SWE user who gave the prompt(s)?
That is what I would assume. So, unless I am overlooking something, it seems like a very bad idea for a company to have zero in-house engineers who can read and validate the generated application code, test suites, etc. before it deploys into production, where real end users could be harmed by faulty code generated by the LLM.
For throwaway prototypes and for coding and design assistance, I can see it being leveraged very effectively, but for mission-critical software systems I can't see it going this route; if it does, we'll have some very big problems incoming.