macOS has a different dev culture than Linux, but you can get pretty close if you install the Homebrew package manager. For running LLMs locally I would recommend Ollama (easy) or llama.cpp. Thanks to Apple Silicon's unified memory, you should be able to run larger models than you could on a typical consumer-grade GPU, though more slowly.
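
For example, a minimal sketch of the Homebrew route ("llama3" below is just an illustrative model tag; pick whatever model fits your RAM):

    # Install Homebrew (official installer one-liner)
    /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

    # Install Ollama and chat with a model (downloaded on first run)
    brew install ollama
    ollama serve &        # start the local server in the background
    ollama run llama3     # opens an interactive chat session

    # Or, for llama.cpp instead:
    brew install llama.cpp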

