Many discriminative models converge to the same representation space up to a linear transformation. Makes sense that a linear transformation (like PCA) would be able to undo that transformation.

https://arxiv.org/abs/2007.00810

I haven't properly read the linked article, but if that's all this is, it's not a particularly new result. Nevertheless, this direction of proofs is imo at the core of understanding neural nets.
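As a minimal numpy sketch of the claim (the names and dimensions are made up for illustration): if two sets of representations differ only by an invertible linear map, an ordinary least-squares probe recovers that map exactly, so anything linearly decodable from one model is linearly decodable from the other.

    import numpy as np

    rng = np.random.default_rng(0)

    # "Representations" of n samples from a hypothetical model A (d-dim).
    n, d = 1000, 16
    Z_a = rng.standard_normal((n, d))

    # Model B's representations: same space, up to an invertible linear map.
    W = rng.standard_normal((d, d))   # a random Gaussian matrix is invertible a.s.
    Z_b = Z_a @ W

    # A linear probe (least squares) recovers the map exactly, so both
    # models carry the same linearly decodable information.
    W_hat, *_ = np.linalg.lstsq(Z_a, Z_b, rcond=None)
    print(np.allclose(Z_a @ W_hat, Z_b))   # True (up to float precision)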



It's about weights/parameters, not representations.


True, good point. It may not be a straightforward consequence to extend this to weights, since weight space has extra symmetries (e.g. permuting a layer's hidden units leaves the function unchanged) that representation space doesn't.
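For intuition on why weights are harder, here's a minimal sketch (a hypothetical one-hidden-layer MLP with random values): permuting the hidden units rearranges the weight matrices completely while leaving the computed function untouched, so functionally identical networks need not have similar-looking weights.

    import numpy as np

    rng = np.random.default_rng(0)

    def mlp(x, W1, b1, W2):
        # One hidden ReLU layer.
        return np.maximum(x @ W1 + b1, 0.0) @ W2

    d_in, d_h, d_out = 8, 32, 4
    W1 = rng.standard_normal((d_in, d_h))
    b1 = rng.standard_normal(d_h)
    W2 = rng.standard_normal((d_h, d_out))

    # Permuting the hidden units (and the matching rows of W2) gives a
    # network with identical outputs but very different weight matrices.
    perm = rng.permutation(d_h)
    x = rng.standard_normal((5, d_in))
    print(np.allclose(mlp(x, W1, b1, W2),
                      mlp(x, W1[:, perm], b1[perm], W2[perm, :])))  # True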



