Extreme compression of large language models via additive quantization
Egiazarian V, Panferov A, Kuznedelev D, Frantar E, Babenko A, Alistarh D-A. 2024. Extreme compression of large language models via additive quantization. Proceedings of the 41st International Conference on Machine Learning. ICML: International Conference on Machine Learning, PMLR, vol. 235, 12284–12303.
https://doi.org/10.48550/arXiv.2401.06118
[Preprint]
Conference Paper | Published | English | Scopus indexed
Author
Egiazarian, Vage;
Panferov, Andrei (ISTA);
Kuznedelev, Denis;
Frantar, Elias (ISTA);
Babenko, Artem;
Alistarh, Dan-Adrian (ISTA)
Corresponding author has ISTA affiliation
Series Title
PMLR
Abstract
The emergence of accurate open large language models (LLMs) has led to a race towards performant quantization techniques that can enable their execution on end-user devices. In this paper, we revisit the problem of “extreme” LLM compression—defined as targeting extremely low bit counts, such as 2 to 3 bits per parameter—from the point of view of classic methods in Multi-Codebook Quantization (MCQ). Our algorithm, called AQLM, generalizes the classic Additive Quantization (AQ) approach for information retrieval to advance the state-of-the-art in LLM compression, via two innovations: 1) learned additive quantization of weight matrices in input-adaptive fashion, and 2) joint optimization of codebook parameters across each transformer block. Broadly, AQLM is the first scheme that is Pareto optimal in terms of accuracy-vs-model-size when compressing to less than 3 bits per parameter, and significantly improves upon all known schemes in the extreme compression (2-bit) regime. In addition, AQLM is practical: we provide fast GPU and CPU implementations of AQLM for token generation, which enable us to match or outperform optimized FP16 implementations for speed, while executing with a much smaller memory footprint.
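Illustration: the abstract describes representing weights via multi-codebook additive quantization, where each small group of weights is stored as a sum of codewords drawn from several learned codebooks. Below is a minimal NumPy sketch of the decode side only; the group size, number of codebooks, codebook size, and random codes are illustrative assumptions, not the paper's configuration or implementation (the actual algorithm learns the codes and codebooks rather than sampling them).

# Minimal sketch of multi-codebook additive quantization decoding (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in = 64, 256        # weight matrix shape (illustrative)
g = 8                        # weights per group
M = 2                        # number of additive codebooks
K = 2 ** 8                   # codewords per codebook (larger in realistic settings)

n_groups = (d_out * d_in) // g

# Codebooks: M tables of K codewords, each codeword a length-g vector (learned in practice).
codebooks = rng.normal(size=(M, K, g)).astype(np.float32)

# Codes: one index per codebook for every weight group (learned in practice, random here).
codes = rng.integers(0, K, size=(n_groups, M))

def decode(codebooks, codes, shape, g):
    """Reconstruct a weight matrix as the sum of the selected codewords per group."""
    groups = np.zeros((codes.shape[0], g), dtype=np.float32)
    for m in range(codebooks.shape[0]):
        groups += codebooks[m, codes[:, m]]   # additive combination of codewords
    return groups.reshape(shape)

W_hat = decode(codebooks, codes, (d_out, d_in), g)

# Storage cost: M * log2(K) bits per group of g weights (codebook storage amortizes away),
# which is how bit budgets such as 2 to 3 bits per parameter arise.
bits_per_weight = M * np.log2(K) / g
print(f"approx. {bits_per_weight:.2f} bits per weight")   # 2.00 with these toy settings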
Publishing Year
2024
Date Published
2024-09-01
Proceedings Title
Proceedings of the 41st International Conference on Machine Learning
Publisher
ML Research Press
Acknowledgement
The authors would like to thank Ruslan Svirschevski for his help in solving technical issues with AQLM and baselines. We also thank Tim Dettmers for helpful discussions on the structure of weights in modern LLMs and size-accuracy trade-offs. The authors would also like to thank Daniil Pavlov for his assistance with CPU benchmarking. Finally, the authors would like to thank the communities of ML enthusiasts known as LocalLLaMA and the Petals community on Discord for the crowd wisdom about running LLMs on consumer devices. Vage Egiazarian, Denis Kuznedelev, and Andrei Panferov were supported by the grant for research centers in the field of AI provided by the Analytical Center for the Government of the Russian Federation (ACRF) in accordance with the agreement on the provision of subsidies (identifier of the agreement 000000D730321P5Q0002) and the agreement with HSE University No. 70-2021-00139.
Volume
235
Page
12284-12303
Conference
ICML: International Conference on Machine Learning
Conference Location
Vienna, Austria
Conference Date
2024-07-21 – 2024-07-27
Cite this
Egiazarian V, Panferov A, Kuznedelev D, Frantar E, Babenko A, Alistarh D-A. Extreme compression of large language models via additive quantization. In: Proceedings of the 41st International Conference on Machine Learning. Vol 235. ML Research Press; 2024:12284-12303.
Egiazarian, V., Panferov, A., Kuznedelev, D., Frantar, E., Babenko, A., & Alistarh, D.-A. (2024). Extreme compression of large language models via additive quantization. In Proceedings of the 41st International Conference on Machine Learning (Vol. 235, pp. 12284–12303). Vienna, Austria: ML Research Press.
Egiazarian, Vage, Andrei Panferov, Denis Kuznedelev, Elias Frantar, Artem Babenko, and Dan-Adrian Alistarh. “Extreme Compression of Large Language Models via Additive Quantization.” In Proceedings of the 41st International Conference on Machine Learning, 235:12284–303. ML Research Press, 2024.
V. Egiazarian, A. Panferov, D. Kuznedelev, E. Frantar, A. Babenko, and D.-A. Alistarh, “Extreme compression of large language models via additive quantization,” in Proceedings of the 41st International Conference on Machine Learning, Vienna, Austria, 2024, vol. 235, pp. 12284–12303.
Egiazarian V, Panferov A, Kuznedelev D, Frantar E, Babenko A, Alistarh D-A. 2024. Extreme compression of large language models via additive quantization. Proceedings of the 41st International Conference on Machine Learning. ICML: International Conference on Machine Learning, PMLR, vol. 235, 12284–12303.
Egiazarian, Vage, et al. “Extreme Compression of Large Language Models via Additive Quantization.” Proceedings of the 41st International Conference on Machine Learning, vol. 235, ML Research Press, 2024, pp. 12284–303.
All files available under the following license(s):
Copyright Statement:
This Item is protected by copyright and/or related rights. [...]
Link(s) to Main File(s)
Access Level
Open Access
Sources
arXiv 2401.06118