
On the other hand, I disagree - I believe this approach is unlikely to lead to human-level AGI.

You might not be fooled by a conversation with an agent like the one in the promo video, but you'd probably agree that somewhere around 80% of people could be. At what percentage would you say that it's good enough to be "human-level?"



When people talk about human-level AGI, they are not referring to an AI that could pass as a human to most people - that is, they're not simply referring to a program that can pass the Turing test.

They are referring to an AI that can use reasoning, deduction, logic, and abstraction like the smartest humans can, to discover, prove, and create novel things in every realm that humans can: math, physics, chemistry, biology, engineering, art, sociology, etc.


The framing of the question admits only one reasonable answer: There is no such threshold. Fooling people into believing something doesn't make it so.


Most people's interactions are transactional. When I call into a company and talk to an agent, and that agent solves the problem I have, regardless of whether the agent is a person or an AI, where did the fooling occur? The ability to solve problems based on context is intelligence.


What criteria do you suggest, then?

As has been suggested, the models will get better at a faster rate than humans will get smarter.


> You might not be fooled by a conversation with an agent like the one in the promo video, but you'd probably agree that somewhere around 80% of people could be.

I think people will quickly learn with enough exposure, and then that percentage will go down.


Nah, these models will improve faster than people can catch up. Neither people nor AI detectors can reliably catch AI-generated text anymore. It's quickly becoming impossible to distinguish.

The one you catch is the tip of the iceberg.

Same will happen to speech. It might take a few years, but it'll be indistinguishable within a few years at most, since both compute and the models themselves are improving exponentially.


How can we be so sure things will keep getting better? And at a rate faster than humans can adapt?

If we have to dam rivers and build new coal plants to power these AI data centers, then it may be one step forward and two steps back.


> These models will improve faster than people can catch up.

So that we're all clear, the basis for this analysis is purely made up, yes?


No, instead something worse will happen.

Well-spoken and well-mannered speakers will be called bots. The comment threads under posts will be hurling insults back and forth about who's actually real. Half the comments will actually be bots doing it. Welcome to the dead internet.


Right! This is absolutely apocalyptic! If more than half the people I argue with on internet forums are just bots that don't feel the sting and fail to sleep at night because of it, what even is the meaning of anything?

We need to stop these hateful AI companies before they ruin society as a whole!

Seriously though... the internet is dead already, and it's not coming back to what it was. We ruined it, not AI.
