I'm pretty sure this conclusion isn't new, but I've come to think that Copilot shouldn't be thought of as a better developer, merely a quicker one. Its code will inevitably be somewhat average, considering it's been trained on code whose only unifying characteristic is that it's public.
Something like Copilot, but trained explicitly to analyse code instead of writing it, could be much more useful, imo. Basically a real-time code review tool. There are similar tools already, but I'm talking about something that can learn from the actual codebase being worked on, perhaps including the documentation, and give on-the-go feedback.
If you interviewed two developers, one who produces reasonably correct code in a given amount of time, and another who produces subtly incorrect code most of the time, but much faster, which one would you hire?
The problem with your proposal is that what Copilot currently does is relatively easy for an AI: guess what code you're looking for and find something that does (or claims to do) more or less that. But which codebase would you check against to verify that the generated code is actually correct? The same codebase that produced the more-or-less-correct code in the first place?
I like this idea, given that it takes advantage of how git repos are full of bug fixes. How many git diffs are out there that just change a '=' to a '==' in an if statement?
So an AI copilot should watch out for code I write that looks similar to code that was later fixed in another repo. It could even use the text from the associated issues to synthesize a suggestion of why your code might cause problems!
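As a rough sketch of the idea, here's what the simplest possible version might look like: a toy corpus of "before/after" lines mined from bug-fix diffs, plus a fuzzy match against the line you're currently writing. Everything here (the corpus entries, the `review_line` helper, the 0.8 threshold) is invented for illustration; a real tool would obviously need actual diff mining and something smarter than string similarity.

```python
import difflib

# Tiny invented corpus of bug-fix diffs: (buggy "before" line,
# fixed "after" line, note taken from the fix commit / issue text).
BUGFIX_CORPUS = [
    ("if (x = 1) {", "if (x == 1) {",
     "assignment used where comparison was intended"),
    ("for (i = 0; i <= n; i++)", "for (i = 0; i < n; i++)",
     "off-by-one: loop ran one iteration too many"),
]

def review_line(line, threshold=0.8):
    """Warn if `line` closely resembles the 'before' side of a known fix."""
    warnings = []
    for before, after, note in BUGFIX_CORPUS:
        ratio = difflib.SequenceMatcher(None, line.strip(), before).ratio()
        if ratio >= threshold:
            warnings.append(
                f"looks like a known bug ({note}); fixed form: {after}")
    return warnings

print(review_line("if (y = 1) {"))
```

Running this flags `if (y = 1) {` because it's nearly identical to a corpus entry, and the attached commit note doubles as the "why this might be a problem" suggestion.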