
I think the goalposts keep moving, and that if you showed people from 2019 the current GPT (even GPT-4o), most would conclude that it's AGI (but that it probably needs fine-tuning on a bunch of small tasks).

So basically, I don't even believe in AGI. Either we already have it, relative to how we would have described it, or it's a goalpost that keeps moving and that we'll never reach.



Generative AI is nowhere close to AGI (formerly known as AI). It's a neat parlour trick that targets human weaknesses in judging quality (e.g. text which is superficially convincing but wrong, portraits with six fingers). About the only useful application I can think of at present is summarising long text. Machine learning has been far more genuinely useful.

Perhaps it will evolve into something useful, but at present it is nowhere near an independent intelligence that can reason about novel problems (as opposed to regurgitating expected answers). On top of that, Sam Altman in particular is a notoriously untrustworthy and unreliable carnival barker.


Nah, AGI is supposed to know that 9.9 > 9.11.

That's a pretty fundamental level of base reasoning that any truly general intelligence would require. To be general, it needs to apply to our world, not to our pseudo-linguistic reinterpretation of it.


I just asked gpt-4o:

“9.9 is larger than 9.11.

This is because 9.9 is equivalent to 9.90, and comparing 9.90 to 9.11, it’s clear that 90 is greater than 11 in the decimal place.”


Sorry, you're incorrect.

Exodus 9.9 is less than Exodus 9.11.

Linux 9.9 is less than Linux 9.11.


Context is important, but the LLM should assume what any human would by default: the math reading, not the version numbering.
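
To make the ambiguity concrete, here's a minimal Python sketch (my own illustration, not anything from the thread or the models): the same pair of numerals orders one way under the decimal reading and the opposite way under a conventional version reading.

    # Decimal reading: 9.9 == 9.90, which is greater than 9.11.
    assert 9.9 > 9.11

    # Version reading (assumed convention): compare dot-separated
    # components as integers, so "9.9" -> (9, 9) and "9.11" -> (9, 11).
    def as_version(s):
        return tuple(int(part) for part in s.split("."))

    assert as_version("9.9") < as_version("9.11")  # (9, 9) < (9, 11)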


To play devil's advocate, do optical illusions disprove human intelligence?


They're called illusions because we can actually think about them and realize that our immediate reaction is wrong, which LLMs cannot reliably do.


If you showed a layman a Theranos demo in 2010, they would conclude it's a revolutionary machine too. It certainly gave out some numbers. That doesn't mean the tech was any good when little issues like accuracy matter.

LLMs are really only passable either when the topic is trivial, with thousands of easily Googleable public answers, or when you yourself aren't familiar with the topic, so the output only needs to be plausible enough to survive a cursory inspection. For anything that requires actually integrating and understanding information on a topic where you can call bull, they fall apart. That is also how human bullshit artists work. The "con" in "conman" stands for "confidence", which can mask a lack of substance but not stand in for it.


> I think the goalposts keep moving, and that if you showed people from 2019 the current GPT (even GPT-4o), most would conclude that it's AGI

Yes, if you just showed them a demo it's super impressive and looks like an AGI. If you let a lawyer, doctor, or even a programmer actually work deeply with it for a couple of months, I don't think they would call it AGI, whatever your definition of AGI is. It's a super helpful tool with remarkable capabilities, but non-factuality, no memory, little reasoning, and occasional hallucinations make it unreliable and therefore non-AGI imo.


Couple of months?!? Couple of minutes more like.

It still goes round in circles and makes things up (which it later "knows" the right answer to).

None of that is anywhere near AGI as it's not general intelligence.


There's definitely goalpost-moving by detractors; I've been guilty of it myself. A tendency worth recognizing and countering.

But if we're already at your AGI goalpost, I think you could stand to move it quite a ways the other direction.



