
The takeaway I got from OP is that Numenta is promoting more redundancy in those first [n-1] layers than is typical in (e.g.) existing vision systems. The paper the original post covers doesn't frame the value of redundancy in those terms, but it does suggest that having many modules with similar structure is useful for learning compositional models, where objects are composed out of other objects. I'm not really in a position to say how well this aligns with SOTA techniques; intuitively it seems like you could reinterpret the algebra of those modules a few different ways and recover the same function, so I'd want to see a stronger demonstration of novelty.

Separately, there's a somewhat dusty area of machine learning called "Boosting" which treats this problem of combining a bunch of different classifiers explicitly. You're exactly right that traditional DNNs can implement a pretty decent approach to boosting, but there are some interesting techniques from that community that don't fit as easily into the standard DNN data graph perspective. For example, check out BrownBoost, which lets you set parameters about believed noise in training data: https://en.wikipedia.org/wiki/BrownBoost
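To make the boosting idea concrete, here's a minimal AdaBoost sketch (not BrownBoost, which handles noisy labels differently) on a hypothetical 1-D toy dataset with threshold "stumps" as the weak classifiers. All names and data here are made up for illustration; the core loop is the standard reweight-and-combine scheme: misclassified points get heavier weights, and each weak learner's vote is scaled by its accuracy.

```python
import math

# Hypothetical toy 1-D dataset: points and +/-1 labels (not from the paper).
X = [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9]
y = [1, 1, 1, -1, -1, -1, 1, 1]

def stump(threshold, sign):
    """Weak classifier: predicts `sign` above the threshold, -sign below."""
    return lambda x: sign if x > threshold else -sign

def best_stump(weights):
    """Exhaustively pick the threshold/sign stump with lowest weighted error."""
    best, best_err = None, float("inf")
    for t in X:
        for s in (1, -1):
            h = stump(t, s)
            err = sum(w for w, xi, yi in zip(weights, X, y) if h(xi) != yi)
            if err < best_err:
                best, best_err = h, err
    return best, best_err

def adaboost(rounds=3):
    n = len(X)
    weights = [1.0 / n] * n
    ensemble = []  # list of (alpha, weak classifier) pairs
    for _ in range(rounds):
        h, err = best_stump(weights)
        err = max(err, 1e-10)  # guard against division by zero
        alpha = 0.5 * math.log((1 - err) / err)  # vote strength
        ensemble.append((alpha, h))
        # Reweight: increase weight on misclassified points, decrease on correct.
        weights = [w * math.exp(-alpha * yi * h(xi))
                   for w, xi, yi in zip(weights, X, y)]
        z = sum(weights)
        weights = [w / z for w in weights]
    return ensemble

def predict(ensemble, x):
    """Weighted majority vote of the weak classifiers."""
    return 1 if sum(a * h(x) for a, h in ensemble) > 0 else -1

ensemble = adaboost()
preds = [predict(ensemble, xi) for xi in X]
```

On this toy set three rounds suffice, since a weighted sum of step functions can carve out the +/-/+ interval pattern that no single stump can. BrownBoost replaces the exponential reweighting with a scheme that eventually "gives up" on persistently misclassified points, which is what makes it more robust to label noise.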


