I don’t think anyone knows. I gave it the famous syllogism:

> All men are mortal
> Socrates is a man
> Is Socrates mortal?

To which it gave a very detailed and correct reply. I then tried:

> All cats are white
> Sam is a cat
> Is Sam white?

To which it gave an almost identically worded response that was nonsensical.
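
For what it's worth, the two prompts have exactly the same logical form ("All X are Y; s is an X; therefore s is Y"), which is what makes the divergent answers interesting. A toy checker makes the shared structure explicit; the function name and tuple encoding here are purely illustrative, not any real library:

    # Both syllogisms instantiate the same rule:
    # "All X are Y; s is an X" entails "s is Y".
    def conclude(all_x_are_y, s_is_x):
        x1, y = all_x_are_y   # ("man", "mortal") encodes "All men are mortal"
        s, x2 = s_is_x        # ("Socrates", "man") encodes "Socrates is a man"
        if x1 == x2:          # the middle term must match for the premises to chain
            return (s, y)     # conclusion: "s is Y"
        return None           # otherwise nothing follows

    print(conclude(("man", "mortal"), ("Socrates", "man")))  # ('Socrates', 'mortal')
    print(conclude(("cat", "white"), ("Sam", "cat")))        # ('Sam', 'white')

A system that actually encoded the rule would treat both questions identically; one that pattern-matches on surface text apparently does not.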

I personally do not think the size of the model is the issue; it is that the things it does which appear to reflect human cognition are just an echo or reflection of it. It is not a generalizable solution: there will always be some novel question it was not trained against and on which it will fall down. If you make those failures vanishingly rare, then, I don't know, maybe you will have effectively compressed all human knowledge into the model and have a good-enough solution. That's one way of looking at an NN. But the problem is fundamentally different from chess.

I think this, composed with more specialized models for things like identifying and solving math and logic problems, could make something that truly delivers the potential people are seeing here. Something that encodes the structure behind these concepts, is extensible, and has a powerful generative function would be really neat.
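
As a hedged sketch of what that composition could look like: route a question to a specialized backend when a cheap classifier flags it, and fall back to the general model otherwise. Every name here (classify, logic_solver, math_solver, llm) is a hypothetical stand-in, not a real API:

    def classify(q):
        # Hypothetical router: decide which backend should handle q.
        if any(tok in q.lower() for tok in ("all ", " is a ", "therefore")):
            return "logic"
        if any(ch.isdigit() for ch in q):
            return "math"
        return "general"

    def logic_solver(q):
        return "[logic backend] " + q   # stand-in for a real theorem prover

    def math_solver(q):
        return "[math backend] " + q    # stand-in for a real CAS

    def llm(q):
        return "[general model] " + q   # stand-in for the base model

    def answer(q):
        # Dispatch to a specialist if one matches, else the general model.
        return {"logic": logic_solver, "math": math_solver}.get(classify(q), llm)(q)

    print(answer("All cats are white. Sam is a cat. Is Sam white?"))

The hard part is obviously inside classify and the solvers; the point is just that the general model would not have to carry the logical structure itself.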


