Quantization plays a crucial role in deploying Large Language Models (LLMs) in resource-constrained environments. However, the presence of outlier features significantly hinders low-bit quantization.
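To make the outlier problem concrete, here is a minimal sketch (not from the original, and the vector and outlier magnitude are illustrative assumptions) showing how a single large-magnitude feature inflates the scale of symmetric absmax quantization and degrades reconstruction of all other values at low bit-width:

```python
import numpy as np

def absmax_quantize(x: np.ndarray, bits: int = 4) -> np.ndarray:
    """Symmetric absmax quantization: one scale per tensor,
    set by the largest-magnitude element."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(x).max() / qmax
    q = np.round(x / scale).clip(-qmax, qmax)
    return q * scale  # dequantized values

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=1024).astype(np.float32)

# Typical (outlier-free) values: quantization error is modest.
err_normal = float(np.mean((x - absmax_quantize(x)) ** 2))

# Inject one outlier feature ~60x the typical magnitude
# (magnitude chosen for illustration only).
x_out = x.copy()
x_out[0] = 60.0
err_outlier = float(np.mean((x_out - absmax_quantize(x_out)) ** 2))

print(f"4-bit MSE without outlier: {err_normal:.5f}")
print(f"4-bit MSE with outlier:    {err_outlier:.5f}")
```

Because the single outlier stretches the quantization range, the step size grows by roughly the ratio of the outlier to the previous maximum, and the mean squared error over the whole tensor rises sharply. This is why outlier-aware schemes (e.g. per-channel scales or mixed-precision handling of outlier channels) are common in low-bit LLM quantization.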