Trivially false, which you could verify with a 5-second Google search ("eliezer yudkowsky publications"): https://www.semanticscholar.org/author/Eliezer-Yudkowsky/254...


I wouldn't call any of those rambling messes publications.

>By far the greatest danger of Artificial Intelligence is that people conclude too early that they understand it. Of course this problem is not limited to the field of AI. Jacques Monod wrote: "A curious aspect of the theory of evolution is that everybody thinks he understands it." (Monod 1974.) My father, a physicist, complained about people making up their own theories of physics; he wanted to know why people did not make up their own theories of chemistry. (Answer: They do.) Nonetheless the problem seems to be unusually acute in Artificial Intelligence. The field of AI has a reputation for making huge promises and then failing to deliver on them. Most observers conclude that AI is hard; as indeed it is. But the embarrassment does not stem from the difficulty. It is difficult to build a star from hydrogen, but the field of stellar astronomy does not have a terrible reputation for promising to build stars and then failing.

???



