Hacker News

Ollama is just a wrapper around llama.cpp, so once the GGUF model files come out it will be able to run on Ollama (assuming no llama.cpp patch is needed; even if one is, Ollama is usually quick to ship those updates).
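As a sketch of that workflow: once a GGUF file for the model is available, you can point an Ollama Modelfile at it and register it locally. The file name and parameter below are placeholders, not the actual release artifacts.

```
# Modelfile — load a local GGUF file into Ollama
# (the .gguf path is a placeholder for whatever file the release ships)
FROM ./new-model.Q4_K_M.gguf

# optional tuning; Ollama inherits the rest from llama.cpp defaults
PARAMETER temperature 0.7
```

Then `ollama create new-model -f Modelfile` registers it and `ollama run new-model` starts a session — no llama.cpp build step needed, which is the convenience the comment is pointing at.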

