
I know it's only shared system RAM and not VRAM, but the M5's 150GB/s isn't going to be very fast for AI inference. A fairly old RTX 3060 12GB does 360GB/s. But I guess quantity has a quality all its own when it comes to RAM and inference.
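To put rough numbers on that bandwidth gap: token generation in LLM decoding is typically memory-bandwidth-bound, so a crude ceiling on tokens/sec is bandwidth divided by model size. Here's a back-of-envelope sketch in Python; the 4 GB model size is an illustrative assumption (roughly a 7B model at ~4-bit quantization), and the formula ignores KV cache reads, compute limits, and software overhead.

  # Crude upper bound on decode speed when memory-bandwidth-bound:
  # one full pass over the model weights per generated token.
  # Ignores KV cache traffic, compute limits, and framework overhead.

  def tokens_per_second(bandwidth_gb_s: float, model_size_gb: float) -> float:
      """Ceiling on decode tokens/sec, bandwidth / weight size."""
      return bandwidth_gb_s / model_size_gb

  model_gb = 4.0  # assumption: ~7B params at ~4-bit quantization
  for name, bw in [("M5 unified memory", 150.0), ("RTX 3060 GDDR6", 360.0)]:
      print(f"{name}: ~{tokens_per_second(bw, model_gb):.0f} tok/s ceiling")

By this estimate the 3060 tops out around ~90 tok/s versus ~38 tok/s for the M5 on a model that fits in 12GB, but the M5's larger unified memory pool lets it hold models the 3060 simply can't load at all, which is the quantity-over-speed tradeoff.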

