
The general rule of thumb is that you should feel free to quantize even as low as ~2 bits per weight on average if that lets you run a model with more active parameters. Quantized models are far from perfect, but they tend to beat models with fewer, higher-precision parameters. With 8GB usable, you could fit roughly 32B active parameters at heavy quantization.
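
A quick back-of-the-envelope check on that 32B figure (a minimal sketch in Python; the 8GB budget and the ~2-bit average are the assumptions from the comment above, not measurements):

    # Rough weight footprint of a quantized model: params * bits / 8 bytes.
    def weight_bytes(n_params: float, bits_per_weight: float) -> float:
        return n_params * bits_per_weight / 8

    budget_gb = 8                                  # usable memory assumed above
    for n_b in (8, 14, 32, 70):                    # model sizes in billions of params
        gb = weight_bytes(n_b * 1e9, 2.0) / 1e9    # ~2-bit average quantization
        fits = "fits" if gb <= budget_gb else "does not fit"
        print(f"{n_b}B @ ~2-bit ≈ {gb:.1f} GB ({fits} in {budget_gb} GB)")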



A large model (100B+, the more the better) may be acceptable at 2-bit quantization, depending on the task. But not a small model, and especially not for technical tasks. On top of that, one still needs room for the OS, other software, and the KV cache. 8GB is just not very useful for local LLMs. That said, it can still be entertaining to try out a 4-bit 8B model for the fun of it.
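
To put numbers on the overhead point, here is a rough sketch of what a 4-bit 8B model plus its KV cache leaves free out of 8GB (the layer/head/dim figures assume a Llama-3-8B-style architecture and are only illustrative):

    # Weights plus fp16 KV cache for an 8B-class model within an 8 GB budget.
    n_params, bits_per_w = 8e9, 4
    n_layers, n_kv_heads, head_dim = 32, 8, 128    # assumed architecture
    kv_bytes_elem, ctx_len = 2, 8192               # fp16 keys/values, context length

    weights_gb  = n_params * bits_per_w / 8 / 1e9
    kv_per_tok  = 2 * n_layers * n_kv_heads * head_dim * kv_bytes_elem  # K and V
    kv_cache_gb = kv_per_tok * ctx_len / 1e9

    print(f"weights ≈ {weights_gb:.1f} GB, KV cache @ {ctx_len} tokens ≈ {kv_cache_gb:.1f} GB")
    print(f"left for OS and runtime out of 8 GB ≈ {8 - weights_gb - kv_cache_gb:.1f} GB")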

100B+ is a total-parameter count, whereas what matters here is the active count, which is very different for sparse MoE models. You're right that there's some overhead for the OS/software stack, but it's not that much. The KV cache is a good candidate for being swapped out, since it only sees a limited number of writes per emitted token.
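
The total-vs-active distinction for a sparse MoE, in rough numbers (the parameter counts and bit-width below are hypothetical, just to show the gap between what must be stored and what is read per token):

    # Sparse MoE: every expert must be stored, but only the routed-to (active)
    # subset is read when emitting a token. All figures are illustrative.
    total_params  = 120e9    # 100B+-class MoE
    active_params = 15e9     # parameters actually used per token
    bits_per_w    = 2.5      # heavy average quantization

    storage_gb   = total_params  * bits_per_w / 8 / 1e9
    read_per_tok = active_params * bits_per_w / 8 / 1e9

    print(f"storage ≈ {storage_gb:.0f} GB, weights read per token ≈ {read_per_tok:.1f} GB")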

Total parameters, not active parameters, is the property that matters for model robustness under extreme quantization.

Once you're swapping from disk, performance will be unusable for most people. And for local inference, the KV cache is the worst possible choice to put on disk: it may only be written once per token, but the entire cache is read back at every decoding step.
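
A rough sense of why that read pattern dominates (the per-token KV size matches the 8B-class assumption above; the bandwidth figures are illustrative):

    # Attention streams the whole KV cache on every decode step, so the cache's
    # storage bandwidth sets a floor on per-token latency.
    kv_per_token_bytes = 128 * 1024          # assumed KV bytes appended per token
    ctx_len            = 8192
    kv_cache_gb        = kv_per_token_bytes * ctx_len / 1e9

    for name, gb_per_s in (("RAM ~50 GB/s", 50), ("NVMe ~3 GB/s", 3), ("SATA SSD ~0.5 GB/s", 0.5)):
        ms = kv_cache_gb / gb_per_s * 1000   # time to stream the cache once
        print(f"{name}: ≥ {ms:.0f} ms per token for KV reads alone")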



