nextfx's comments

Another case of moving the goalposts until you score a goal.


MCMC is, AFAIU, prohibitively expensive for neural networks. If you're interested in incorporating uncertainty awareness and improving the calibration of your neural-net classifiers in a somewhat scalable manner, I think linearized Laplace is a good place to look. I'd suggest `laplace-torch` [1]; it's an easy way of doing that (rough sketch below).

[1]: https://aleximmer.github.io/Laplace/
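
To make that concrete, here's a minimal sketch following the `laplace-torch` docs: a post-hoc last-layer Laplace approximation fitted to an already-trained classifier. `model` and `train_loader` are assumed to exist, and the option names follow the library's documentation, but double-check them against the current API.

    from laplace import Laplace

    # model: an already-trained torch.nn.Module classifier (assumed to exist)
    # train_loader: the DataLoader it was trained on (assumed to exist)
    la = Laplace(
        model,
        "classification",
        subset_of_weights="last_layer",   # cheap: Laplace over the last layer only
        hessian_structure="kron",         # Kronecker-factored Hessian approximation
    )
    la.fit(train_loader)
    la.optimize_prior_precision(method="marglik")  # tune prior via marginal likelihood

    # Linearized (GLM) predictive with a probit link approximation
    x, _ = next(iter(train_loader))
    probs = la(x, pred_type="glm", link_approx="probit")

The appeal is that this is post hoc: you train your network as usual and only pay the extra cost of fitting and inverting the (factored) Hessian afterwards, rather than running the whole network hundreds of times.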


Never heard of that; I'll take a look, thanks. What do you mean by prohibitively expensive? I'm not talking about big networks here, so running them many times for inference is not a big deal, although I admit I'm not sure what numbers we're talking about. Hundreds to thousands of runs are feasible, though.


I'm not an expert on MCMC for BNNs by any means, but even with small networks I think it's a little tricky to get right. If memory serves, this paper [1] focuses on small networks and goes over the issues and how to get around them. (A toy sketch of the basic setup is below.)

[1]: https://arxiv.org/abs/2402.01484
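
For a concrete picture of what "MCMC over network weights" means, here's a hypothetical toy sketch using random-walk Metropolis on a one-hidden-layer regression network; every constant here is made up for illustration. Practical BNN samplers (e.g. the HMC variants the paper discusses) need far more care with step sizes, mixing diagnostics, and the posterior's multimodality, which is exactly where it gets tricky.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy 1-D regression data
    x = np.linspace(-2, 2, 40)[:, None]
    y = np.sin(2 * x) + 0.1 * rng.standard_normal(x.shape)

    H = 8                        # hidden units
    n_params = 3 * H + 1         # W1 (1xH), b1 (H), W2 (Hx1), b2

    def unpack(theta):
        W1 = theta[:H].reshape(1, H)
        b1 = theta[H:2 * H]
        W2 = theta[2 * H:3 * H].reshape(H, 1)
        b2 = theta[3 * H]
        return W1, b1, W2, b2

    def forward(theta, x):
        W1, b1, W2, b2 = unpack(theta)
        return np.tanh(x @ W1 + b1) @ W2 + b2

    def log_post(theta, noise=0.1, prior_std=1.0):
        # Gaussian likelihood plus Gaussian prior on all weights (up to constants)
        resid = y - forward(theta, x)
        return (-0.5 * np.sum(resid ** 2) / noise ** 2
                - 0.5 * np.sum(theta ** 2) / prior_std ** 2)

    theta = rng.standard_normal(n_params)
    lp = log_post(theta)
    samples, step = [], 0.02
    for it in range(20_000):
        prop = theta + step * rng.standard_normal(n_params)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
            theta, lp = prop, lp_prop
        if it % 20 == 0:
            samples.append(theta.copy())

    # Posterior predictive mean and per-point uncertainty from late samples
    preds = np.stack([forward(s, x) for s in samples[-200:]])
    print(preds.mean(axis=0).ravel()[:5], preds.std(axis=0).mean())

Even on this toy problem, the step size and the number of iterations have to be tuned by hand, and nothing guarantees the chain has explored more than one mode of the weight posterior.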


I've been using mksquashfs to make full-system backups lately, but had to resort to unsquashfs for recovering files. This, combined with squashfuse, might come in handy.
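
For reference, here's a hypothetical sketch of that workflow scripted from Python via subprocess; all paths are invented, and the flags used (`-comp zstd` for mksquashfs, `-d` for unsquashfs) are standard squashfs-tools options, but check your version.

    import subprocess

    # Create a compressed full-system image, excluding pseudo-filesystems
    # and the backup destination itself (excludes are relative to the source).
    subprocess.run(
        ["mksquashfs", "/", "/backups/root.sqfs",
         "-comp", "zstd",
         "-e", "proc", "sys", "dev", "run", "tmp", "backups"],
        check=True,
    )

    # Recover a single file without mounting the whole image.
    subprocess.run(
        ["unsquashfs", "-d", "/tmp/restore", "/backups/root.sqfs", "etc/fstab"],
        check=True,
    )

    # Or mount the image read-only with squashfuse and browse it normally.
    subprocess.run(["squashfuse", "/backups/root.sqfs", "/mnt/backup"], check=True)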


I assume the name is referring to the Nyström low-rank matrix approximation.
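
For anyone unfamiliar: the Nyström method approximates a large PSD kernel matrix from a small set of sampled "landmark" columns. A minimal NumPy sketch on toy data, with the landmark count m picked arbitrarily:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((500, 5))

    # Full RBF kernel matrix, built here only to measure the approximation
    # error; the whole point of Nystrom is to avoid ever forming it.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-0.5 * sq)

    m = 50                                   # number of landmark points
    S = rng.choice(len(X), size=m, replace=False)
    C = K[:, S]                              # n x m sampled columns
    W = K[np.ix_(S, S)]                      # m x m landmark block

    K_approx = C @ np.linalg.pinv(W) @ C.T   # rank-m Nystrom approximation
    print(np.linalg.norm(K - K_approx) / np.linalg.norm(K))

The Nyströmformer applies the same idea to the softmax attention matrix, trading exactness for linear rather than quadratic cost in sequence length.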


Yannic Kilcher's explanation of the Nyströmformer is really good and clear (though it might help to understand transformers first): https://youtu.be/m-zrcmRd7E4


Is there a way of doing NAT hole-punching with WireGuard?

