On a side note, does it feel to anyone else like this area is a bit of a moving target? Granted, this isn't uncommon for topics experiencing rapid growth, but it troubles me that there seems to be no reference publication that experts in the area can all agree on as a common starting point. I honestly don't feel like blogs or framework tutorials are a good replacement for one.
Also, I don't know of any other topic area where I would look at a 2014 resource that instructively describes fundamental building blocks and say "don't read this, it's irrelevant 2-3 years later". For languages/libraries/frameworks, sure. But for basic theory? That strikes me as very alien.
>Also, I don't know of any other topic area where I would look at a 2014 resource that instructively describes fundamental building blocks and say "don't read this, it's irrelevant 2-3 years later". For languages/libraries/frameworks, sure. But for basic theory?
Yeah. A lot of deep learning papers boil down to "we tried X architecture on Y dataset, and it seems to produce a small error rate". I don't know of any other area of computer science where reporting results from an algorithm without explaining why those results occur is publishable.