
I agree with the general optimism in your take; however, I am unsure whether people without any training in ex-novo production (i.e., writing from scratch) will be able to operate competently, at least in the realm of code.

I am blown away by Copilot's capabilities (haven't tried ChatGPT yet, it's down right now), but I can only operate it because I have written tons of code beforehand. I am getting the best results from it by writing prompts in a language that sometimes includes domain-specific language ("accounts", etc.) but still heavily leans on programming constructs ("write a map with a tuple of integers as a key, and a string as a value").
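
A minimal, hypothetical sketch of what I mean (the names and numbers are made up for illustration): the comment is the prompt, written in that half-domain, half-programming-construct style, and the rest is roughly the kind of thing Copilot tends to fill in.

    # Map a tuple of integers (account_id, fiscal_year) to a string label.
    accounts: dict[tuple[int, int], str] = {}

    def label_account(account_id: int, fiscal_year: int, label: str) -> None:
        accounts[(account_id, fiscal_year)] = label

    label_account(42, 2023, "Operating account, FY2023")
    print(accounts[(42, 2023)])  # -> "Operating account, FY2023"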

Maybe there's a way of learning how to code without ever writing a genuine line by yourself. You learn some rudiments, read the language docs, start writing prompts and create a testing loop based on expected outputs. Maybe. But it's uncharted territory.

The same question goes for good ole writing - can you produce new Hemingways with the process you describe? I know there is a certain vision where the AI will be the new Hemingway, but you are talking about a shorter term (~10 years) where that is unlikely to be the case... Will kids be able to write properly if they only train to be editors?

As an addendum, I was at the GitHub conference and saw Prof. Malan of Harvard CS50 fame claiming that, at present, his course will not allow the use of tools like Copilot. I have taken the course and they have plagiarism control systems, but I don't know how they are going to control this potential explosion.



>> I am blown away by Copilot's capabilities (haven't tried ChatGPT yet, it's down right now), but I can only operate it because I have written tons of code beforehand. I am getting the best results from it by writing prompts in a language that sometimes includes domain-specific language ("accounts", etc.) but still heavily leans on programming constructs ("write a map with a tuple of integers as a key, and a string as a value")

Hmm, this is a good point. I get incredible results from Copilot but I realized just now it's primarily because I have a fairly deep understanding of computer science as it pertains to the domain of code I'm writing in, and thus my comments that prompt the AI are quite accurate, fleshed out, and detailed.

If I didn't know any of the underpinning of the algorithms and data structures (like a new graduate from a code bootcamp for example), it might be much tougher to make hay out of Copilot's AI.

Never thought of it like that.


Agreed. In fact, this is the same line of reasoning my teachers used in elementary school when we asked them "Why do we need to learn how to add/subtract/multiply/divide, when calculators can do it automatically for us?" And the answer is still the same: you need to have done it the "manual" way before you can understand what the machine is doing for you.

I only understand what a compiler is doing after having written one (albeit a tiny one) myself during my studies.


> I only understand what a compiler is doing after having written one (albeit a tiny one) myself during my studies.

How many people need to understand how a compiler works to sling some code for a paycheck or just for fun?

I learned how compilers work purely for fun, it provides me absolutely no benefit in my day to day life, and before that I taught myself how to code for the exact same reason. If I had to go the opposite route, learn how compilers work before learning to code, I seriously doubt I’d have taken the time.

Other things are the same: I'm horrible at math, but I managed to learn enough linear algebra to finish a coding project I was interested in. If I had had to code up all the hard bits instead of calling matrix1 * matrix2, that project would never have happened.
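
To show the gap I mean (a rough sketch, not the actual project; NumPy's @ operator stands in for the matrix1 * matrix2 call):

    import numpy as np

    a = np.array([[1.0, 2.0], [3.0, 4.0]])
    b = np.array([[5.0, 6.0], [7.0, 8.0]])

    # The abstraction I leaned on: one line.
    product = a @ b

    # The "hard bits" I would otherwise have had to code up by hand:
    manual = [[sum(a[i][k] * b[k][j] for k in range(len(b)))
               for j in range(len(b[0]))]
              for i in range(len(a))]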

Another layer of abstraction isn't always a bad thing; these AIs will eventually figure out how to write bug-free code when you tell them to "write a function which calculates the flight velocity of a sparrow carrying a coconut".


> How many people need to understand how a compiler works to sling some code for a paycheck

Only the ones who want a good job.

One of the consequences of automation has been a widening gap between top jobs and ordinary jobs. It's plausible that AI will continue the same process. Many jobs that pay well and are intellectually and socially rewarding today may become similar to airport security checks in the future. You observe what a machine does, double-checking its conclusions and looking for anything out of the ordinary. And you often fail at that, because the human brain is poorly suited for such tasks.

When the future is uncertain, it's good to have a deeper and wider understanding of things than what your current job requires.


> How many people need to understand how a compiler works to sling some code for a paycheck or just for fun.

I think it's pretty important. I use my very basic knowledge of how compilers work occasionally in my mostly-interpreted code world to diagnose and debug performance issues.
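
One made-up but representative example of the kind of diagnosis I mean: knowing that CPython's compiler emits a global name lookup on every loop iteration, while a local sits in a fast slot, explains why hoisting a hot builtin can speed up a tight loop.

    import timeit

    def slow(values):
        total = 0
        for v in values:
            total += abs(v)        # LOAD_GLOBAL abs on every iteration
        return total

    def fast(values, _abs=abs):    # hoisted once into a local (fast) slot
        total = 0
        for v in values:
            total += _abs(v)
        return total

    data = list(range(-100_000, 100_000))
    print(timeit.timeit(lambda: slow(data), number=50))
    print(timeit.timeit(lambda: fast(data), number=50))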

I also used a low-level debugger and taint/trace method recently to diagnose Windows DLL issues that happened after X seconds of starting up a program that interfaced with specialty hardware. A couple of young, early-career technicians/engineers watched me do it, were incredibly flabbergasted, and figured me for some sort of genius, when in reality I kinda only know how it works and was just reading the stack/heap as I paged through; knowledge I acquired from booting SoftICE a few times as a kid.

There's a real lack of simply knowing how a computer and/or a network works, which is to be expected since layers and layers of abstraction have made it very easy to teach computer literacy and even computer programming on many levels. The issue is that when things go wrong and Stack Overflow doesn't have the answers, almost all young engineers I work with have no experience at low-level analysis and debugging, very little knowledge of first principles, and don't know where to Google to start acquiring it.

This isn't me shaking my fist at the kids or anything. They're more productive than I was at their age/experience level with code, and that's due to better knowledge and integration using IDEs which I never had, plus likely better high-level understanding of what's going on. But I fear that a generation of developers and technicians who largely have little understanding of how computers work will be a problem.


On the first attempt it didn't try to respond; it just said it didn't know anything about sparrows or coconuts.

I asked it to retry, and got this:

    function calculateSparrowFlightVelocity(coconutWeight) {
      // Assume that a sparrow can carry a maximum of 1.5 ounces of weight
      if (coconutWeight > 1.5) {
        return 0;
      }

      // Assume that the flight velocity of a sparrow is 5 meters per second
      // and the weight of a coconut is 0.5 ounces
      return 5 * (1 - (coconutWeight / 0.5));
    }

    // Example usage:
    console.log(calculateSparrowFlightVelocity(0.5)); // Output: 4 meters per second
    console.log(calculateSparrowFlightVelocity(1.0)); // Output: 3 meters per second
    console.log(calculateSparrowFlightVelocity(1.5)); // Output: 0 meters per second (sparrow cannot carry this weight)

Can any Monty Python fans confirm if this is correct?


I found that understanding compilers was precious on several occasions, especially for cross-compilation and embedded programming. Even for Java, for uni assignments, I was completely flabbergasted and at a loss when my IDE stopped working with cryptic messages. Understanding the process of creating object files, what javac does, and what goes into a jar helps, as much as understanding what a compiler does, its relationship with linkers, and how libraries are "added" to an executable (dynamically or statically).

Besides, with ChatGPT in its current state, it's still absolutely impossible to get it to produce any useful code without knowing how code works. There is an advantage to the precise specification of a formal language. At least for the next decade, I think there will be humans proof-reading AI output before generated code is run, with catastrophic consequences when that doesn't happen.

What I'm having difficulty getting over right now is that every time I write something (text or code) I ask myself, "Could ChatGPT have written it better?"


Back in grad school I took a robotics course that involved doing all these big matrix multiplications by hand, since this was pre-MATLAB. The course was overall fun, but that part definitely wasn't. A modern version would be more interesting.


Taking a step back: Isn't it kind of daft that we humans must go through years of training to learn esoteric languages and ways of bending our brains backwards in order to tell a computer how to perform arguably simple tasks? In the ideal world, you can tell the computer what you want to happen and... it happens. If not, then you did not explain all your edge cases, and the conversation continues.

I think it can be argued that we have, seemingly overnight, made vast progress towards that world. You specifically mention code, but that concept is irrelevant in this world. Indeed, this is uncharted territory!


> Isn't it kind of daft that we humans must go through years of training to learn esoteric languages and ways of bending our brains backwards in order to tell a computer how to perform arguably simple tasks?

I always took it in another way: learning to program a computer is an enlightening, visceral experience highlighting just how badly natural language and normal modes of thinking suck. They're heavily optimized for a reality in which the most important thing was managing social dynamics in small groups, and beliefs were good or bad depending on how close they were to what everyone else believed. Whether or not they were true at the object level did not matter at all.

We live in different times now. There's a demand for clear, logical, precise thinking. The very thinking we're not well-equipped for by default. The very thinking you call "bending our brains backwards". Because object-level truth matters much more now, there are consequences for getting things wrong - such as countless ways of getting scammed by people who are better at this than you are.

Not to mention, this "bending our brains backwards" is what gives humanity superpowers. It's what gave us all the technology we enjoy today - energy, transportation, amenities, communication, entertainment. Building and maintaining these things requires precise thinking and precise communication. So, until a super-human AI takes all that over for us (and effectively turns us into NPCs in our own story), glorifying the natural, human mode of communication is just self-handicapping.


> Isn't it kind of daft that we humans must go through years of training to learn esoteric languages

I mean, what do you propose, that we skip all the steps in computer science history and just start at the end? Hardly "daft"; it's simply the best way we have come up with so far to provide machines with instructions. And it's not like people have not tried other paradigms (e.g., graphical programming, "low-code", etc.).

Also, compared to programming in assembly or binary, programming in Python or other high-level languages is a huge advance in itself. Python, at the end of the day, is nothing but a bridge between natural language and machine code.
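
To make the "bridge" point concrete (a small sketch using the standard library's dis module; the exact opcodes vary by Python version): a line that reads almost like natural language is still lowered to bytecode, which the interpreter then executes as machine code.

    import dis

    def greet(name):
        return "Hello, " + name

    dis.dis(greet)
    # Prints something along the lines of:
    #   LOAD_CONST    'Hello, '
    #   LOAD_FAST     name
    #   BINARY_OP     (+)        # BINARY_ADD on older versions
    #   RETURN_VALUE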

> You specifically mention code, but that concept is irrelevant in this world

Current computer systems run on trillions upon trillions of lines of code.

GPT-3 and Copilot don't change that fact; in fact, they will continue to pile up lines of code.

They are systems that map natural language to code by drawing from a massive corpus of existing code.

They bring the idea of expressing systems entirely in natural language one step closer, but it's still very far away - almost a pipe dream.

The output of these innovations is still code. So tell me again how code has become irrelevant, or how people who do not understand code at all will be able to leverage these advances?



