
My 2c:

I think the key term here is "concept formation" as well as "knowledge representation". How do we form concepts, and how are they represented internally to make them tractable?

Symbols are one way to represent concepts (or rather, point to them). But with symbols we are limited to surface-level transformations according to a syntax (I'm pretty sure Chomsky said something similar?). What do the concepts actually point to, though, and can we represent that underlying structure programmatically?

As I wrote in another comment, I'm very inspired by the conceptual spaces model:

https://mitpress.mit.edu/contributors/peter-gardenfors

https://www.youtube.com/watch?v=Y3_zlm9DrYk
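
In case it helps anchor the question, here is a rough Python sketch of how I read the conceptual spaces idea (my own toy reading, not Gardenfors's code): quality dimensions span a geometric space, each concept is anchored by a prototype point, and categorization assigns a stimulus to the nearest prototype, which implicitly carves the space into convex (Voronoi-style) regions. The concept names, dimensions, and numbers below are all made up for illustration.

    from dataclasses import dataclass
    import math

    @dataclass
    class Concept:
        name: str
        prototype: tuple  # coordinates along the quality dimensions

    def categorize(point, concepts):
        """Assign a point in the quality space to the concept with the nearest prototype."""
        return min(concepts, key=lambda c: math.dist(point, c.prototype)).name

    # Toy color space with dimensions (hue, saturation, brightness), all in [0, 1].
    colors = [
        Concept("red",    (0.00, 0.9, 0.6)),
        Concept("yellow", (0.17, 0.9, 0.8)),
        Concept("blue",   (0.61, 0.8, 0.5)),
    ]

    print(categorize((0.05, 0.8, 0.7), colors))  # -> "red"

The point of the toy example is that the "meaning" of red lives in the geometry (a region around a prototype), not in the symbol "red" itself.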

Could someone please steelman my thinking here a bit? Would love to advance my own thinking on this matter.



I used to think ML was missing the ability to form abstractions until I read about autoencoders and GANs. If you have not, I suggest looking into them.

In a well-designed autoencoder, the network ends up discovering an abstract representation of its inputs and a conceptual space in which to express it.
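
A rough sketch of that idea (assuming PyTorch; the layer sizes and names here are illustrative, not a recipe): the bottleneck layer forces the network to compress each input into a small latent code, and that code is the "conceptual space" it learns.

    import torch
    import torch.nn as nn

    class AutoEncoder(nn.Module):
        def __init__(self, input_dim=784, latent_dim=8):
            super().__init__()
            # Encoder: compress the input down to a small latent vector.
            self.encoder = nn.Sequential(
                nn.Linear(input_dim, 128), nn.ReLU(),
                nn.Linear(128, latent_dim),
            )
            # Decoder: reconstruct the input from that latent vector alone.
            self.decoder = nn.Sequential(
                nn.Linear(latent_dim, 128), nn.ReLU(),
                nn.Linear(128, input_dim),
            )

        def forward(self, x):
            z = self.encoder(x)        # z is the learned latent "concept" code
            return self.decoder(z), z

    model = AutoEncoder()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    x = torch.rand(32, 784)            # stand-in batch; use real data in practice
    recon, z = model(x)
    loss = loss_fn(recon, x)           # reconstruction error decides what z must capture
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

Nothing tells the network what the latent dimensions should mean; whatever structure shows up in z is whatever helps it reconstruct the inputs.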



