If you have 64 GB of RAM, you should be able to run the 4-bit quantized MLX models, which are built specifically for Apple silicon (M-series) chips. https://huggingface.co/collections/mlx-community/qwen3-next-...
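For anyone who hasn't tried it: running one of those is a few lines with mlx-lm (pip install mlx-lm). Minimal sketch; the exact model id depends on which 4-bit variant you pick from that collection, so the one below is my best guess, not gospel:

    # Minimal sketch using mlx-lm on Apple silicon.
    # The model id is an assumption; substitute whichever
    # 4-bit model you pick from the mlx-community collection.
    from mlx_lm import load, generate

    model, tokenizer = load("mlx-community/Qwen3-Next-80B-A3B-Instruct-4bit")

    text = generate(
        model,
        tokenizer,
        prompt="Explain KV caching in one paragraph.",
        max_tokens=256,
        verbose=True,  # streams tokens and prints speed stats
    )

The 4-bit weights are what make the 64 GB figure work: the quantized model plus KV cache has to fit in unified memory.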


Got 32 GB, so I was hoping I could use ollm to offload the model to my SSD. Slower, but it makes it possible to run bigger models (in emergencies).
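Leaving ollm's specific API aside (I haven't used it), the same SSD-offload idea is available in plain transformers + accelerate: device_map="auto" fills GPU/CPU memory first and spills the remaining layer weights to a folder on disk. Rough sketch; the model id is just an example of something too big for 32 GB:

    # Sketch of disk offload via transformers + accelerate
    # (pip install transformers accelerate). Not ollm itself,
    # just the same technique. Model id is an assumption.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "Qwen/Qwen3-Next-80B-A3B-Instruct"

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        device_map="auto",           # fill GPU/CPU RAM first...
        offload_folder="./offload",  # ...then spill weights to this SSD dir
        torch_dtype=torch.float16,
    )

    inputs = tokenizer("Hello", return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(out[0], skip_special_tokens=True))

Expect it to be very slow, since offloaded layers are re-read from disk on every forward pass, which matches the "in emergencies" framing.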



