Disagree. The AI we have is very useful for specific things. The pushback you see is not so much a denial of the milestones that have been surpassed, but skepticism about the milestones enthusiasts claim are near. And for good reason! Every time, in every field, we’ve extrapolated an exponential-looking curve ad infinitum, it has turned out to be S-shaped, and life goes on.
> We had a threshold for intelligence.
We’ve had many. Computers have surpassed several barriers once considered to require intelligence: arithmetic, guided search (chess engines), and so on. The Turing test was a good benchmark because of how foreign and strange it was. It’s somewhat true we’re moving the goalposts. But the reason is not stubbornness, but rather that we can’t properly define and subcategorize what reason and intelligence really are. The difficulty of measuring something does not mean it doesn’t exist or isn’t important.
Feel free to call it intelligence. But the limitations are staggering, given the advantages LLMs have over humans. They have been trained on more written knowledge than any human could ever come close to absorbing. And they still have not come up with anything conceptually novel, such as a new idea or theorem that is genuinely useful. Many people suspect that pattern matching is not the only thing required for intelligent independent thought. Whatever that is!
If you consider that evolution took millions of years to produce intelligent humans, then LLM training completed in a matter of months producing parrots of humans is impressive by itself. Talking with the parrot is almost indistinguishable from talking with a real human.
As for pattern matching, the difference I see from humans is consciousness. That's probably the main area yet to be solved. All of our current models are static.
Some ideas for where that might be headed:
- Maybe all it takes is to allow an LLM to continuously talk with itself, much like how humans have an inner voice (a rough sketch of that loop follows this list).
- Maybe we need to allow LLMs to update their own weights, but that would also require an "objective", which might be hard to encode.
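For the first idea, here's a minimal sketch of what such a self-talk loop could look like. The `generate` function and the loop shape are my own invention for illustration, not any particular API:

```python
# Toy sketch of an LLM "talking with itself": each output is appended to a
# rolling transcript and fed back in as the next input. `generate` is a
# hypothetical placeholder for whatever model call you actually have; the
# point is only the feedback loop, not any specific API.

def generate(transcript: str) -> str:
    """Hypothetical stand-in for a real text-generation call."""
    return "..."  # swap in your model/client here

def inner_monologue(seed: str, turns: int = 5) -> str:
    transcript = seed
    for _ in range(turns):
        thought = generate(transcript)
        # The model's own output becomes part of its next input, giving it
        # a continuous, self-directed stream rather than a static one-shot.
        transcript += "\n" + thought
    return transcript

print(inner_monologue("What should I think about next?"))
```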
> If you consider that evolution took millions of years to produce intelligent humans, then LLM training completed in a matter of months producing parrots of humans is impressive by itself.
I disagree that such a comparison is useful. Training should be compared to training, and LLM training feeds in so many more words than a baby gets. (A baby has other senses but it's not like feeding in 20 years of video footage is going to make an LLM more competent.)
No, a baby is pre-trained. We know from linguistics that there is a universal grammar template all humans follow. This template is intrinsic to our biology: it is encoded, not learned through observation.
The template is what makes the training process so short for humans. We need minimal data and we can run off of that.
Training is both longer and less effective for the LLM because there is no template.
To give an example: suppose it takes just one picture for a human to learn to recognize a dog, while it takes a million pictures for an ML model to do the same. My point is that it's like this because humans come pre-programmed with application-specific wetware to do the learning and recognition as a built-in operation. That's why it's so quick. For AI we are doing it in one shot, from scratch, on something that is not application-specific. The training takes longer because of this and is less effective.
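As a toy illustration of that "template" point (an analogy of my own construction, not a model of brains): give a learner whose hypothesis space already matches the task and a blank-slate learner the same three examples, and only the one with the right template generalizes.

```python
# Toy illustration of inductive bias: a "template" learner that assumes the
# answer is a line learns from three examples, while a flexible blank-slate
# learner (degree-9 polynomial) fits the same points but generalizes badly.
import numpy as np

x_train = np.array([0.0, 0.5, 1.0])   # only three examples
y_train = 2.0 * x_train + 1.0         # true rule: y = 2x + 1
x_test = np.linspace(-1.0, 2.0, 50)
y_test = 2.0 * x_test + 1.0

# "Template" learner: 2 free parameters, matching the true rule's form.
template = np.polyfit(x_train, y_train, deg=1)
# "Blank slate" learner: 10 free parameters for 3 points. numpy warns that
# the fit is underdetermined, which is exactly the point.
blank_slate = np.polyfit(x_train, y_train, deg=9)

mse_template = np.mean((np.polyval(template, x_test) - y_test) ** 2)
mse_blank = np.mean((np.polyval(blank_slate, x_test) - y_test) ** 2)
print(f"template learner test MSE:    {mse_template:.4f}")  # ~0
print(f"blank-slate learner test MSE: {mse_blank:.4f}")     # much larger
```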
I disagree that an LLM has no template, but this is getting away from the point.
Did you look at the post I was replying to? You're talking about LLMs being slower, while that post was impressed by LLMs being "faster".
They're framing it as if LLMs recreate that same template during their training, and my core point is to disagree with that. The two should not be compared so directly.
> It’s somewhat true we’re moving the goalposts. But the reason is not stubbornness, but rather that we can’t properly define and subcategorize what reason and intelligence really are.
Disagree. Intelligence is a word created by humans. The entire concept is made up and defined by humans; it is not some concept that exists outside of that. It is simply a collection of qualities and features we choose to group under the word “intelligent”. The universe doesn’t really have a category or group of features labeled intelligent. Does it use logic? Does it have feelings? Can it talk? Can it communicate? We define the features, and we choose to put each and every feature under a category called “intelligence”.
Therefore, when we define the Turing test as a benchmark for intelligence and then invalidate it, it is indeed stubbornness: a conscious choice to change the definition of a word we made up in the first place.
What you don’t realize is that this entire thing is a vocabulary problem. When we argue about what is conscious or what is intelligent, we are simply arguing about which features belong in which categories we made up. When a category has blurry or controversial boundaries, it’s because we chose the definition to be fuzzy. These are not profound discussions. They are debates about language choice. We are talking about personal definitions and generally accepted definitions, both of which are completely chosen and made up by us. It is not profound to talk about things that are simply arbitrary choices picked by humans.
That being said, we are indeed moving the goalposts. We are evolving our own chosen definitions, and we may well eventually change the definition of intelligence to exclude any form of thinking machine that is artificially created. Doing this is a choice. We are saying, “Hey, these LLMs are not anything amazing or profound. They are not intelligent, and I choose to believe this by changing and evolving my own benchmark for what is intelligent.”
Of course, this all happens subconsciously, based on deeply rooted instincts and feelings. It’s so deep that it’s really hard to differentiate instinct from rational thinking. When you think logically, “intelligence” is just a word with an arbitrary definition. An arbitrary category. But the instincts are so strong that you have spent your entire life thinking that intelligence, like god or some other common myth made up by humans, is a concept that exists outside of what we make up. It’s human to have these instincts; that’s where religion comes from. What you don’t realize is that it’s those same instincts fueling your definition of what is “intelligent”.
Religious people move the goalposts too. When science establishes things in reality, like the heliocentricity of the solar system, religious people need to evolve their beliefs in order to stay in line with reality. They often do this by reinterpreting the Bible. It’s deeply rooted instincts that prevent us from thinking rationally, and it affects the great debate we are having now on “what is intelligence?”.