conference paper
L-GreCo: Layerwise-adaptive gradient compression for efficient data-parallel deep learning
published
yes
Ilia
Markov
author D0CF4148-C985-11E9-8066-0BDEE5697425
Kaveh
Alimohammadi
author
Elias
Frantar
author 09a8f98d-ec99-11ea-ae11-c063a7b7fe5f
Dan-Adrian
Alistarh
author 4A899BFC-F248-11E8-B48F-1D18A9856A87 0000-0003-3650-940X
P. Gibbons
editor
G. Pekhimenko
editor
C. De Sa
editor
DaAl
department
MLSys: Machine Learning and Systems
Data-parallel distributed training of deep neural networks (DNNs) has gained widespread adoption, but can still experience communication bottlenecks. To address this issue, entire families of compression mechanisms have been developed, including quantization, sparsification, and low-rank approximation, some of which are seeing significant practical adoption. Despite this progress, almost all known compression schemes apply compression uniformly across DNN layers, although layers are heterogeneous in terms of parameter count and their impact on model accuracy. In this work, we provide a general framework for adapting the degree of compression across the model's layers dynamically during training, improving the overall compression while leading to substantial speedups, without sacrificing accuracy. Our framework, called L-GreCo, is based on an adaptive algorithm which automatically picks the optimal compression parameters for model layers, guaranteeing the best compression ratio while satisfying an error constraint. Extensive experiments over image classification and language modeling tasks show that L-GreCo is effective across all existing families of compression methods, and achieves up to 2.5× training speedup and up to 5× compression improvement over efficient implementations of existing approaches, while recovering full accuracy. Moreover, L-GreCo is complementary to existing adaptive algorithms, improving their compression ratio by 50% and practical throughput by 66%. An anonymized implementation is available at https://github.com/LGrCo/L-GreCo.
Association for Computing Machinery
2024
Athens, Greece
eng
Proceedings of Machine Learning and Systems
2210.17357
6
https://research-explorer.ista.ac.at/record/17490
I. Markov, K. Alimohammadi, E. Frantar, D.-A. Alistarh, in: P. Gibbons, G. Pekhimenko, C. De Sa (Eds.), Proceedings of Machine Learning and Systems, Association for Computing Machinery, 2024.
Markov I, Alimohammadi K, Frantar E, Alistarh D-A. L-GreCo: Layerwise-adaptive gradient compression for efficient data-parallel deep learning. In: Gibbons P, Pekhimenko G, De Sa C, eds. <i>Proceedings of Machine Learning and Systems</i>. Vol 6. Association for Computing Machinery; 2024.
Markov, Ilia, Kaveh Alimohammadi, Elias Frantar, and Dan-Adrian Alistarh. “L-GreCo: Layerwise-Adaptive Gradient Compression for Efficient Data-Parallel Deep Learning.” In <i>Proceedings of Machine Learning and Systems</i>, edited by P. Gibbons, G. Pekhimenko, and C. De Sa, Vol. 6. Association for Computing Machinery, 2024.
I. Markov, K. Alimohammadi, E. Frantar, and D.-A. Alistarh, “L-GreCo: Layerwise-adaptive gradient compression for efficient data-parallel deep learning,” in <i>Proceedings of Machine Learning and Systems</i>, Athens, Greece, 2024, vol. 6.
Markov, Ilia, et al. “L-GreCo: Layerwise-Adaptive Gradient Compression for Efficient Data-Parallel Deep Learning.” <i>Proceedings of Machine Learning and Systems</i>, edited by P. Gibbons et al., vol. 6, Association for Computing Machinery, 2024.
Markov, I., Alimohammadi, K., Frantar, E., &amp; Alistarh, D.-A. (2024). L-GreCo: Layerwise-adaptive gradient compression for efficient data-parallel deep learning. In P. Gibbons, G. Pekhimenko, &amp; C. De Sa (Eds.), <i>Proceedings of Machine Learning and Systems</i> (Vol. 6). Athens, Greece: Association for Computing Machinery.
Markov I, Alimohammadi K, Frantar E, Alistarh D-A. 2024. L-GreCo: Layerwise-adaptive gradient compression for efficient data-parallel deep learning. Proceedings of Machine Learning and Systems. MLSys: Machine Learning and Systems, vol. 6.
17456 2024-08-22T08:29:25Z 2024-09-13T11:57:54Z