
You can actually calculate exact gradients for spiking neurons using the adjoint method: https://arxiv.org/abs/2009.08378 (I'm the second author). In my PhD thesis, I show how this can be extended to larger problems and to more complicated, biologically plausible neuron models. I agree with the gist of your post though: retrofitting backpropagation (or the adjoint method, for that matter) is the wrong approach. One should rather use these methods to optimise biologically plausible learning rules. The group of Wolfgang Maass has done exciting work in that direction (e.g. https://arxiv.org/abs/1803.09574, https://www.frontiersin.org/articles/10.3389/fnins.2019.0048..., https://igi-web.tugraz.at/PDF/256.pdf).
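
To make the core idea concrete: an output spike time T is an implicit function of a weight w through the threshold condition V(T; w) = theta, so dT/dw = -(dV/dw)/(dV/dt) at t = T is exact wherever the crossing is transversal. Here is a minimal single-neuron sketch of that view (constants and names are illustrative assumptions, not the paper's code; the adjoint method extends this to whole networks via jumps in the adjoint variables at spike times):

    import numpy as np

    TAU_M, TAU_S = 10.0, 5.0  # membrane / synaptic time constants (ms), assumed
    THETA = 1.0               # spike threshold, assumed
    C = TAU_M * TAU_S / (TAU_M - TAU_S)

    def v(t, w):
        """Closed-form membrane potential after a single input spike at t = 0
        through weight w, with an exponential synaptic current kernel."""
        return w * C * (np.exp(-t / TAU_M) - np.exp(-t / TAU_S))

    def dv_dt(t, w):
        return w * C * (np.exp(-t / TAU_S) / TAU_S - np.exp(-t / TAU_M) / TAU_M)

    def spike_time(w):
        """First threshold crossing, found by bisection on the rising flank
        (the potential peaks at t = C * log(TAU_M / TAU_S))."""
        lo, hi = 1e-9, C * np.log(TAU_M / TAU_S)
        for _ in range(100):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if v(mid, w) < THETA else (lo, mid)
        return 0.5 * (lo + hi)

    w = 2.0
    T = spike_time(w)
    # Exact gradient via the implicit function theorem (dV/dw = V/w, since V is linear in w):
    grad_exact = -(v(T, w) / w) / dv_dt(T, w)
    # Numerical check with central finite differences:
    eps = 1e-5
    grad_fd = (spike_time(w + eps) - spike_time(w - eps)) / (2 * eps)
    print(T, grad_exact, grad_fd)  # gradient is negative: a larger weight spikes earlier

Because the bisection brackets only the rising flank, the finite-difference check lands on the same threshold crossing as the analytic gradient.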


I was aware of Neftci's work, but not your result -- I stand corrected! From that perspective, since LIF networks are causal systems, of course you can reverse them given sufficient memory. I understand the memory in this case is the input synaptic currents at the time of every spike (i.e. which synapses contributed to the spike). This is suspiciously similar to spine and dendritic calcium concentrations. Those variables are usually only stored for a short time, but then again the hippocampus (at least) is adept at reverse replay, so there is no reason calcium could not serve as a proxy for the 'adjoint'. Hmm.
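
Concretely, that bookkeeping could be as simple as taping (spike time, synaptic current) pairs on the way forward and consuming them in reverse on the way back, which is what makes the reverse-replay analogy tempting. A sketch (names and structure are my own illustration, not the paper's implementation):

    from typing import NamedTuple

    class SpikeRecord(NamedTuple):
        t: float      # spike time
        i_syn: float  # input synaptic current at the spike

    def forward(spike_times, i_syn_at):
        """Simulate forward, taping exactly the state the backward pass needs."""
        return [SpikeRecord(t, i_syn_at(t)) for t in spike_times]

    def backward(tape, adjoint_jump):
        """Sweep the adjoint backward in time, applying a jump at each
        recorded spike, last spike first."""
        for rec in reversed(tape):
            adjoint_jump(rec.t, rec.i_syn)

    tape = forward([1.2, 3.4, 5.6], i_syn_at=lambda t: 0.5 * t)  # dummy current trace
    backward(tape, adjoint_jump=lambda t, i: print(f"jump at t={t}, i_syn={i}"))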

Interesting Maass references too. Cheers


I agree that calcium seems like a natural candidate, and I suggest as much in my thesis. Coming from physics, I didn't know about reverse replay in the hippocampus for a long time, but I have the same association now. I would be glad to talk more; is there a way to reach you?



