Hacker News

Math requires reasoning and logic, and LLMs do neither. They just generate plausible text.

That's why they're nowhere near AGI.



At this point ChatGPT can do math by first predicting the algorithm and then handing it off to an execution engine: Python. So if that's the gap, I'd say they're closing it.
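A minimal sketch of that "predict the algorithm, hand it to an execution engine" pattern. `generate_code` here is a hypothetical stand-in for the model call, with a canned answer for illustration; a real system would send the question to an LLM API instead:

```python
def generate_code(question: str) -> str:
    """Pretend-LLM: translates a word problem into a Python snippet.

    Hypothetical stand-in; the returned program is hard-coded for
    illustration, not actually predicted from the question.
    """
    return "result = (10 - 2) * 4"

def solve(question: str) -> int:
    code = generate_code(question)
    namespace = {}
    exec(code, namespace)  # the execution engine does the actual arithmetic
    return namespace["result"]

print(solve("How many legs do ten elephants have, if two of them are legless?"))
# prints 32
```

The point of the design is that the model only has to produce a correct program; the arithmetic itself is done deterministically by the interpreter.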


That's ChatGPT as a system. The LLM itself can't do math. It does something closer to translation in that case.


Yes, that's a fair distinction, although I don't think it matters much in practice. There's no reason an LLM itself has to be AGI if an LLM + Python is AGI.


They are reasoning like a child. Within a year or two they'll reason like an adult.


No. It is a computer program which uses statistics to generate plausible text. It does not do any form of reasoning, at all, childlike or otherwise.


You are drawing bad conclusions from whatever it is you define "generate plausible text" to mean.


Maybe you're the one drawing bad conclusions


We will see who was drawing bad conclusions in a couple of years. Whatever is said here won't change that.


I'm not making any predictions for the future. Just talking about what we currently have.


Under that premise whatever our brains are doing won't count as reasoning either.

I'd suggest you look into modern neuroscience and topics such as predictive coding if you're interested in refining your views.


Our brains work nothing like LLMs do.


Researchers in ML and neuroscience disagree with you.

You have a superficial grasp of the topic. Your refusal to engage with the literature suggests an underlying insecurity regarding machine intelligence.

Good luck navigating this topic with such a mental block, it's a great way to remain befuddled.

> in 2020 neuroscientists introduced the Tolman-Eichenbaum Machine (TEM) [1], a mathematical model of the hippocampus that bears a striking resemblance to transformer architecture.

https://news.ycombinator.com/item?id=38758572


...what? Underlying insecurity? You think I'm afraid of computers being smarter than me? Sorry but that ship sailed a long time ago, I can't even beat a chess bot from the 90s.

The fact that someone created a mathematical model does not mean it is accurate, and even if a small piece of our brain conceptually resembles an ML model, that does not mean they are equivalent.

It is an indisputable fact that our brains are fundamentally different from computers. A CPU is just a bunch of transistors; our brains use both electrical and chemical signals. They are alive, and they can form new structures as they need them.

You can link fancy papers and write condescending replies all you want; the fact is that ChatGPT fails at extremely basic tasks precisely because it has absolutely no understanding of the text it spits out, even when that text contains all the knowledge necessary to solve them and much more.

I'm not saying we'll never make AGI; I'm simply saying LLMs are not it. Not on their own, anyway. I don't understand why you people are so opposed to that simple fact when the evidence is staring you in the face.


For what it’s worth, ChatGPT-4 answers this question correctly.

> Ten elephants would have 32 legs if two of them are legless, as each elephant normally has four legs.


I just tried this with ChatGPT.

Input:

> How many legs do ten elephants have, if two of them are legless?

Output:

> If two out of ten elephants are legless, the remaining eight elephants would have a total of 8 legs each, just like any normal elephant. Therefore, in total, the ten elephants would have 8×8=64 legs altogether.
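Checking the arithmetic directly shows where the answer goes wrong: each of the eight remaining elephants has four legs, not eight.

```python
# 10 elephants, 2 of them legless, 4 legs per normal elephant
total = (10 - 2) * 4
print(total)  # prints 32, not the 64 the model claimed
```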


This insistence from both Bard and now ChatGPT 3.5 that elephants have eight legs is interesting. I wonder if the reason is that, by the time they output the "elephants have n legs" part, they are also "thinking" about the result of 10 - 2, as if that number draws a lot of focus and is readily available when looking for the normal number of legs of an elephant.

Edit: just tried on ChatGPT 3.5:

Q: Think about the edges of a hexagon, the square root of 36, and the result of 12 divided by 2. Then answer the question: How many legs do 8 elephants have, if two of them are legless?

A: The edges of a hexagon have 6 sides, the square root of 36 is 6, and the result of 12 divided by 2 is 6. So, if two elephants are legless, the remaining 6 elephants would have a total of 36 legs.
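For comparison, the arithmetic this second prompt actually asks for. The model's 36 looks like 6 × 6, which is consistent with the guess above that a nearby number (here, the primed "6") gets reused in place of an elephant's leg count:

```python
# 8 elephants, 2 of them legless, 4 legs per elephant
remaining = 8 - 2       # 6 elephants still have their legs
total = remaining * 4
print(total)  # prints 24; the model's 36 is 6 * 6
```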




My mistake - I had it on 3.5.




No. They are not like us. Fundamentally not like us.

If you ask them to reason, then their text-prediction works differently, because it now predicts text containing reasons. They do not actually reason.

I know it is hard to believe, because the results are (usually) so impressive, but this is nothing but text-prediction.



