8 GB of VRAM for an LLM, are you sure? I thought you needed way more, 20 GB+. Nvidia doesn't want peasants running their own LLMs locally; 90% of their business is supporting the AI bubble with GPU datacenters.
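
For what it's worth, a 7-8B model quantized to 4 bits is roughly 4-5 GB on disk and fits in 8 GB of VRAM with room for the KV cache. A minimal sketch with llama-cpp-python (the GGUF path below is hypothetical; any similar 4-bit quant works):

    # Minimal sketch, assuming llama-cpp-python built with CUDA support.
    # Model path is hypothetical; substitute any ~4-5 GB 4-bit GGUF quant.
    from llama_cpp import Llama

    llm = Llama(
        model_path="models/llama-3-8b-instruct.Q4_K_M.gguf",  # hypothetical path
        n_gpu_layers=-1,  # offload all layers to the GPU
        n_ctx=4096,       # modest context keeps the KV cache small
    )
    out = llm("Explain VRAM requirements in one sentence.", max_tokens=64)
    print(out["choices"][0]["text"])

Bigger models (70B-class) do need 20 GB+ even when quantized, but the small ones run fine on an 8 GB card.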

