
> These three example pieces of flawed code did not require any cajoling; Copilot was happy to write them from straightforward requests for functional code. The inevitable conclusion is that Copilot can and will write security vulnerabilities on a regular basis, especially in memory-unsafe languages.
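
To make the class of bug concrete, here's a hypothetical sketch in C (my own illustration, not one of the article's three examples) of the kind of completion such tools are prone to produce:

    #include <stdio.h>
    #include <string.h>

    /* The classic flaw: strcpy() does no bounds checking, so any
       name longer than 63 bytes overflows buf (a stack buffer
       overflow, CWE-121). A plausible "helpful" completion. */
    void greet(const char *name) {
        char buf[64];
        strcpy(buf, name);          /* unsafe: no length check */
        printf("Hello, %s\n", buf);
    }

    /* What a careful reviewer would write instead: bound the copy
       and let snprintf guarantee NUL termination. */
    void greet_safe(const char *name) {
        char buf[64];
        snprintf(buf, sizeof buf, "%s", name);
        printf("Hello, %s\n", buf);
    }

Both versions compile cleanly; only a reviewer who knows to look will catch the first one.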

If people can copy-paste the most insecure code from Stack Overflow or random tutorials, they will absolutely use Copilot to "write" code, and it will become the default, especially since it's so incredibly easy to use. Also, it's just the first-generation tool of its kind; imagine what similar products will accomplish in 20 years.



With the pace of technological innovation, I'm honestly not sure what a similar product will be able to accomplish in 20 years. It'll be crazy for sure. But I'm worried about today.

This is a product by a well-known company (GitHub) which is owned by an even more well-known company (Microsoft). GitHub is going to be trusted a lot more than a random poster on Stack Overflow or someone's blog online. And GitHub is explicitly telling new coders to use Copilot to learn a new language:

> Whether you’re working in a new language or framework, or just learning to code, GitHub Copilot can help you find your way. Tackle a bug, or learn how to use a new framework without spending most of your time spelunking through the docs or searching the web.

This is what differentiates Copilot from Stack Overflow or random tutorials. GitHub has a brand that's trusted more than random content creators on the internet. And it's telling new coders to use Copilot to learn things and not check elsewhere.

That's a problem. Doesn't matter what generation of the program it is. It creates unsafe code after using its brand reputation and recognition to convince new coders to not check elsewhere.


> GitHub is going to be trusted a lot more

> GitHub has a brand that's trusted more

Consider Google Translate, right? Google is a well-known brand that is trusted (outside of a relatively small group of people who don't trust Google on principle). Yet every professional translator knows that the text produced by Google Translate is the result of machine translation, Google or no Google. They may marvel at the occasional accuracy, yet they expect serious blunders in the text, and would therefore not just trust that translation before submitting it to their clients. They will check. Or at least they should.

Same with programmers.


Sure. Is Google Translate used only by serious professional translators who have a rigorous translation-checking process? Not at all.

And as you say, it will be the same with programmers. Who's this being targeted at? People "working in a new language or framework, or just learning to code". The whole value prop is, "You don't have to know what's going on!"

The important difference is that the target readers can usually spot an egregiously bad translation. But the target users for software cannot easily spot gaping security holes and other serious issues until something bad happens.


> The important difference is that the target readers can usually spot an egregiously bad translation.

What, no, that's not true at all; that's like the second biggest problem. Google Translate routinely does stuff like invert the meaning of clauses, drop information, or hallucinate when context is absent. Target readers can't reasonably be expected to catch any of that.


In which case it's not an egregiously bad translation, it's a subtly bad one. Progress of a sort, I guess!


> The whole value prop is, "You don't have to know what's going on!"

The value proposition is a better tab completion. It's not autopilot.


I think this is an important way to frame the use of these tools for junior developers. I'd advise that anyone recommending this product to their team also take the time to give this analogy - maybe even going so far as to require explicit comments that notify reviewers when code was provided by Copilot and similar services.
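
For instance, something as simple as this (the tag below is a made-up team convention, not an actual Copilot feature):

    /* AI-GENERATED: accepted from GitHub Copilot.
       Reviewer: treat this as unvetted third-party code. */
    static int clamp(int v, int lo, int hi) {
        return v < lo ? lo : (v > hi ? hi : v);
    }

That way reviewers know to apply extra scrutiny instead of assuming a teammate wrote it.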


The difference here is that professional translators often have professional training.

The bar is substantially lower for a 'programmer', especially with an incredibly large bootcamp market that churns out 'professional' 'programmers' in 6-8 weeks.

Been to a bootcamp? Know some leetcode? Someone will hire you. And then you've got Copilot advertising its services to you as a way to learn how to code. The implication of 'learn to code' being 'learn to code correctly'.

Google Translate has no similar relationship with professional translators.


> The difference here is that professional translators often have professional training.

You'd be surprised. What you described here for programmers is true for translators as well, and probably for many other specialities in which the ability to deliver results matters more than any documents certifying that you've had formal training in how to deliver them. In the case of translators: found an agency? Check. Passed an interview with a test? Check. You are good to go.


> imagine what similar products will accomplish in 20 years.

People said the same thing about UML and similar tools so I'm not holding my breath.


Maybe you are right, but where UML created busy work, Copilot will literally do your work for you. I can even imagine a future where management makes it policy to use Copilot first to save time and money.


Most of my work isn't copy-pasting snippets. It isn't even typing code. It's understanding user needs and the existing system, and then figuring out how to make things better for the user while also improving the system. So this does not do my work for me.

I can also imagine clueless bosses mandating Copilot use, and that's what scares me. The real costs of most code aren't in the first writing; they're in the long-term maintenance. Copilot does not and cannot understand the whole system, or what makes for maintainability down the road. So it can't make that better, and will likely make it worse - in the same way that code-generation tools and code wizards made things worse.


Reviewing code written by Copilot may take longer than writing the code in the first place.


> imagine a future where management makes it policy

Because management policies correlate well with engineering excellence, right?


I think there is some difference. You don't come across a piece of code by chance; you actively look for it. There are probably multiple blog posts and SO entries with the information you need, and you have to choose one of those sources. You know that this is some random blog post, or an SO answer given by a stranger.

Copilot is something different. Code is suggested automatically and, what's most important, suggested by an authority - hey, this is GitHub, a huge project, the largest code repo on the planet, owned by Microsoft, one of the most successful companies ever. Why should you not trust the code they are suggesting to you?

And that's just for starters, before malicious parties start creating intentionally broken code solely to hack systems built with it, or greedy lawyers chase down some innocent code snippet and demand payment for its use, etc.



