
There are a lot of variables here, such as your hardware's memory bandwidth, the speed at which it processes tensors, etc.

A basic thing to remember: any given dense model requires roughly X GB of memory at 8-bit quantization, where X is the number of parameters in billions (of course I'm simplifying a little by not counting context size). Quantization is just the numerical precision of the model's weights; 8-bit generally works really well. Generally speaking, it's not worth bothering with models whose quantized size exceeds your hardware's VRAM. Some people get around that by using a 4-bit quant, trading some precision for half the memory footprint. YMMV depending on use case.
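A rough back-of-the-envelope sketch of that rule of thumb (the bytes-per-parameter values are the usual approximations; real quant formats add a little overhead for block scales, and this ignores KV cache and runtime overhead entirely):

    # Approximate weight memory for a dense model, ignoring KV cache,
    # activations, and runtime overhead.
    BYTES_PER_PARAM = {
        "fp16": 2.0,   # 16-bit floats
        "q8":   1.0,   # ~8-bit quantization
        "q4":   0.5,   # ~4-bit quantization
    }

    def weight_memory_gb(params_billions: float, quant: str) -> float:
        # billions of params * bytes per param == GB of weights
        return params_billions * BYTES_PER_PARAM[quant]

    for quant in ("fp16", "q8", "q4"):
        print(f"70B model at {quant}: ~{weight_memory_gb(70, quant):.0f} GB")
    # 70B model at fp16: ~140 GB
    # 70B model at q8:   ~70 GB
    # 70B model at q4:   ~35 GB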



4 bit is absolutely fine.

I know this is crazy to hear, because the big-iron folks still debate 16 vs 32, and 8 vs 16 is nearly verboten in public conversation.

I contribute to llama.cpp and have seen many, many efforts to measure the evaluation performance of various quants, and no matter which way it was sliced (ranging from volunteers doing subjective A/B voting on responses over months, to objective perplexity measurements), Q4 is indistinguishable from the original.
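(For anyone curious what the perplexity comparison means concretely: perplexity is the exponentiated mean negative log-likelihood per token over an eval text, so you score the same corpus with the full-precision model and the quant and compare. A minimal sketch with made-up numbers; llama.cpp ships a perplexity example that does this properly over a real corpus.)

    import math

    def perplexity(token_logprobs: list[float]) -> float:
        # Perplexity = exp of the mean negative log-likelihood per token.
        nll = -sum(token_logprobs) / len(token_logprobs)
        return math.exp(nll)

    # Made-up per-token log-probs for the same eval text, scored by a
    # full-precision model and by its Q4 quant.
    logprobs_fp16 = [-1.92, -0.71, -2.30, -1.05, -0.44]
    logprobs_q4   = [-1.95, -0.73, -2.34, -1.06, -0.45]

    print(f"fp16 PPL: {perplexity(logprobs_fp16):.3f}")
    print(f"q4   PPL: {perplexity(logprobs_q4):.3f}")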


It's incredibly niche, but Gemma 3 27B can recognize a number of popular video game characters even in novel fanart (I was a little surprised by that when messing around with its vision). But the Q4 quants, even with QAT, are very likely to name a random wrong character from within the same franchise, even when the Q8 quants name the correct character.

Niche of a niche, but it's just kind of interesting how quantization jostles the name recall.


Vision models do degrade more with quantization. https://unsloth.ai/blog/dynamic-4bit


> 4 bit is absolutely fine.

For larger models.

For smaller models, about 12B and below, there is a very noticeable degradation.

At least that's my experience generating answers to the same questions across several local models like Llama 3.2, Granite 3.1, Gemma 2, etc., and comparing Q4 against Q8 for each.

The smaller Q4 variants can be quite useful, but they consistently struggle more, especially with prompt adherence and recall.

For example, if you tell it to generate some code without explaining the generated code, a smaller Q4 is significantly more likely to explain the code anyway, compared to Q8 or better.
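(If you want to reproduce that kind of comparison, a minimal sketch: run the Q4 and Q8 GGUFs behind two local llama.cpp server instances, or any OpenAI-compatible endpoint, and send the same prompt to both. The ports and prompt here are hypothetical.)

    import requests

    # Two local llama.cpp server instances (hypothetical ports), one serving
    # the Q4 GGUF and one serving the Q8 GGUF of the same model.
    ENDPOINTS = {
        "q4": "http://localhost:8080/v1/chat/completions",
        "q8": "http://localhost:8081/v1/chat/completions",
    }

    PROMPT = "Write a Python function that reverses a string. Do not explain the code."

    def ask(url: str, prompt: str) -> str:
        resp = requests.post(url, json={
            "messages": [{"role": "user", "content": prompt}],
            "temperature": 0,     # keep sampling deterministic-ish for comparison
            "max_tokens": 256,
        }, timeout=120)
        return resp.json()["choices"][0]["message"]["content"]

    for name, url in ENDPOINTS.items():
        answer = ask(url, PROMPT)
        print(f"--- {name} ({len(answer.split())} words) ---")
        print(answer)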


4 bit is fine conditional on the task. The condition is the level of nuanced understanding required for the response to be sensible.

All the models I have explored seem to capture nuance of understanding in the low-order bits of the floats. It makes sense: training initially regresses to the mean, then slowly locks in lower and lower significant figures to capture subtleties and natural variance in things.

So, the further you stray from average conversation, the worse a model will do, as a function of its quantisation.

So, if you don't need nuance, subtlety, etc., say for a document-summary bot for technical material, 4 bit might genuinely be fine. However, if you want something that can deal with highly subjective material, where answers need to be tailored to a user via in-context learning of their preferences and so on, then 4 bit tends to struggle badly unless the user aligns closely with the mean of the training distribution.
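(A toy illustration of why the small distinctions get flattened: block-wise symmetric 4-bit quantization, similar in spirit to Q4_0 but not any exact real format, snaps every weight in a block to one of at most 16 levels, so fine differences between nearby weights collapse.)

    import numpy as np

    def quantize_4bit_roundtrip(block: np.ndarray) -> np.ndarray:
        # Symmetric 4-bit block quantization: map to integers in [-8, 7], then back.
        scale = np.abs(block).max() / 7.0
        q = np.clip(np.round(block / scale), -8, 7)
        return q * scale

    rng = np.random.default_rng(0)
    weights = rng.normal(0, 0.05, size=32)    # one 32-weight block
    restored = quantize_4bit_roundtrip(weights)

    print("max abs error:         ", np.abs(weights - restored).max())
    print("distinct values before:", len(np.unique(weights)))
    print("distinct values after: ", len(np.unique(restored)))   # at most 16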


Just for some calibration: approximately no one runs 32-bit for LLMs on any sort of iron, big or otherwise. Some models (e.g. DeepSeek V3, and derivatives like R1) are native FP8. FP8 was also common for Llama 3 405B serving.


> 8 vs 16 is near verboten in public conversation.

I mean, DeepSeek is FP8.


Not only that, but the 1.58-bit Unsloth dynamic quant is uncannily powerful.



