
Precisely. There's a schizophrenic attitude around LLMs: people simultaneously refuse to accept what they're really good at and attribute to them capabilities they don't really have.

In this very thread there are people who claim they're afraid GPT-3 is coming for their job. You must really suck at engineering if you think this is competition.



This technology didn't exist 5 years ago; saying it's not a threat means ignoring the trend line.


It depends on what you mean by "this technology". The ability to generate code from an incomplete specification (in the form of input/output examples, program traces, natural language specs, etc.) has been available for quite a while.

For an example of (more recent) capabilities of program synthesis systems, see this paper on the system ALPS:

https://pages.cs.wisc.edu/~aws/papers/fse18b.pdf

The paper starts with a motivating example of learning a datalog program that performs static analysis to detect API misuse, then evaluates the system on its ability to learn programs for knowledge discovery, program analysis, and SQL queries.

I think you'll agree that the programs learned automatically in that paper are every bit as complex as anything we've seen from Large Language Model code generators today. On top of that, systems like ALPS only generate correct code (correct with respect to their examples: either they return a program that correctly relates the inputs and outputs in the examples, or they report failure). That is unlike LLMs, which will happily generate garbage code that doesn't compile and never know the difference.
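
To make that concrete, here's a minimal sketch of what "correct with respect to the examples, or report failure" means in practice. It's a toy enumerative synthesizer over a made-up four-primitive DSL (Python, purely illustrative; ALPS's actual search over datalog programs is far more sophisticated):

    # Toy enumerative synthesis: search compositions of DSL primitives
    # for a program consistent with all input/output examples.
    from itertools import product

    PRIMITIVES = {  # made-up DSL of unary integer functions
        "x + 1": lambda x: x + 1,
        "x * 2": lambda x: x * 2,
        "x * x": lambda x: x * x,
        "x - 1": lambda x: x - 1,
    }

    def synthesize(examples, max_depth=3):
        """Return a composition of primitives matching every (input, output)
        pair, or None to report failure -- never an unchecked guess."""
        for depth in range(1, max_depth + 1):
            for combo in product(PRIMITIVES.items(), repeat=depth):
                def program(x, fs=combo):
                    for _, f in fs:
                        x = f(x)
                    return x
                if all(program(i) == o for i, o in examples):
                    return " then ".join(name for name, _ in combo)
        return None  # failure is reported, not papered over

    print(synthesize([(1, 4), (2, 9), (3, 16)]))  # x + 1 then x * x
    print(synthesize([(1, 5), (2, 5)]))           # None: nothing in the DSL fits

Because every candidate is checked against all the examples before it's returned, this kind of synthesizer can never hand back a program that contradicts its spec; the cost is a search space that explodes with program depth, and taming that explosion is where the real work in systems like ALPS goes.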

What sets LLMs apart as code generators is that you can talk to them in natural language and they will respond with ... something. That capability isn't new either; there have been systems generating code from natural language specifications for a while. The new LLMs are much better at it, however. The usability has gone through the roof, no doubt about that. My mother can write a REST API now, even if she has no more idea what that is than ChatGPT does. But the capability to produce correct code has gone through the floor at the same time. Just as if you asked my mother to code you a REST API.

But I'm guessing that the fun of talking to an LLM will trump everything else, and program synthesis, which works very well but usually doesn't respond to natural language prompts (though some systems do), will keep flying under the radar of most programmers, who will continue to think that all this is brand new and that we've made a huge leap ahead in capabilities, when we've really taken a big step back.


It absolutely did exist 5 years ago. The trend line is flat.


ChatGPT is performing significantly better in my personal tests than the original public GPT did years ago. The original GPT had occasional strokes of cleverness. ChatGPT has a far better understanding of the questions, and it maintains significantly better conversational context.

I can still break it in a number of ways.


What is 'the original GPT'? As far as I know, GPT doesn't have an implementation, only a paper. Unless GPT-2 counts.


You'll still need engineers. But how many?

Before: a few really knowledgeable/good ones and a lot of OK ones

A few years from now: a few really good ones

What does this mean for labor economics? Do we reap the results of increased productivity? Or are they captured by a small set of winners in a Pareto distribution?


I bet this will lead to more burnout because the job isn't fun anymore. You're supposed to steal code from other people as your day job.


At this rate it might be the Dirac delta distribution.


I just had it write me code in a language I've never used before in my life, and actually create a script to automate something I needed automated. Pretty useful if you ask me. I might have it write me a few Discord bots, since I'm not familiar with the API.


I think that many people are afraid of GPT-4, 5, and 6, not necessarily ChatGPT (3.5) in its current incarnation. I guess your future outlook depends on whether you think this tech is going to be exponential or logarithmic…


> People simultaneously refuse to accept what they're really good at and attribute to them capabilities they don't really have.

That's generally called neurosis.




