
You're positing the existence of a whole society around Hawking, up to and including a pharmaceutical supply chain, whereas the correct way to think about it is Hawking waking up alone on a cat planet. I have no doubt that a complex society of embodied, hyperintelligent, able-bodied beings could outfox humanity, but that's not what we're talking about with this AI risk scenario.


I see your point, but relative capability levels aren't the only relevant factor here; absolute capabilities matter as well.

It seems plausible to me that even if we are to the AI as cats are to us, we've crossed an absolute threshold of generality that would let the AI be confident in our ability to follow simple (to it) instructions, in a way we could never be confident of with cats.


> but that's not what we're talking about with this AI risk scenario.

Yes, it is.


How is it you write 500,000 words on every topic and then reply with three words like this? Now that you've got it by the tail, channel some of that brevity into your blog!



