
I get why they are comparing against M1/Intel, since this is primarily targeted at getting folks to upgrade, but it is kind of annoying that they aren't emphasizing the incremental gains over the last generation. Also, the pitch to AI developers of an okay GPU paired with 128 GB of unified RAM is pretty smart.


Agreed, though they at least put M2 comps on screen in most of the places they did the M1 comps.


Will these M3 chips be competitive against cheap Nvidia cards for training small and medium-sized networks?

The M2 wasn't, and Metal (MPS) support in PyTorch was sketchy. It has been getting better lately, though.
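For context, this is roughly what using the Metal backend in PyTorch looks like today; a minimal sketch assuming a recent torch build with MPS support, with a placeholder model and made-up sizes just to exercise a training step:

    import torch

    # Fall back to CPU if the Metal (MPS) backend isn't available on this machine.
    device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

    # Throwaway linear classifier and random batch, only to show the device plumbing.
    model = torch.nn.Linear(512, 10).to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    x = torch.randn(64, 512, device=device)
    y = torch.randint(0, 10, (64,), device=device)

    # One standard training step; nothing here is Metal-specific beyond the device.
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()
    print(loss.item())

The point being that the API surface is the same as CUDA, it's the operator coverage and performance on MPS that has historically lagged.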


For training you want the best NVIDIA card you can afford. Doesn't make much sense to use a laptop for training IMHO. There is an argument that the M3 Max is the best non-datacenter chip for inference with the ability to scale to 128 GB of memory.
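The unified-memory angle for inference looks something like this; a rough sketch, with the layer sizes invented for illustration, of keeping a large fp16 model entirely resident on the MPS device instead of sharding it:

    import torch

    device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

    # Placeholder "large" model; with lots of unified memory the whole thing can
    # sit on the GPU-visible side without splitting across devices.
    model = torch.nn.Sequential(
        torch.nn.Linear(4096, 4096),
        torch.nn.GELU(),
        torch.nn.Linear(4096, 4096),
    ).to(device).half()  # fp16 halves the memory footprint

    with torch.no_grad():
        x = torch.randn(1, 4096, device=device, dtype=torch.float16)
        out = model(x)
    print(out.shape)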


Sure, I am thinking a bit ahead. That is, a Mac Mini / Studio / Pro with an M3 Max / Ultra could be interesting.



