Model compression via distillation and quantization

Polino A, Pascanu R, Alistarh D-A. 2018. Model compression via distillation and quantization. 6th International Conference on Learning Representations. ICLR: International Conference on Learning Representations.

Download
2018_ICLR_Polino.pdf (308.34 KB, Open Access) [Published Version]
Conference Paper | Published | English

Scopus indexed
Author
Polino, Antonio; Pascanu, Razvan; Alistarh, Dan-Adrian (ISTA)
Department
Abstract
Deep neural networks (DNNs) continue to make significant advances, solving tasks from image classification to translation or reinforcement learning. One aspect of the field receiving considerable attention is efficiently executing deep models in resource-constrained environments, such as mobile or embedded devices. This paper focuses on this problem, and proposes two new compression methods, which jointly leverage weight quantization and distillation of larger teacher networks into smaller student networks. The first method we propose is called quantized distillation and leverages distillation during the training process, by incorporating distillation loss, expressed with respect to the teacher, into the training of a student network whose weights are quantized to a limited set of levels. The second method, differentiable quantization, optimizes the location of quantization points through stochastic gradient descent, to better fit the behavior of the teacher model. We validate both methods through experiments on convolutional and recurrent architectures. We show that quantized shallow students can reach similar accuracy levels to full-precision teacher models, while providing order of magnitude compression, and inference speedup that is linear in the depth reduction. In sum, our results enable DNNs for resource-constrained environments to leverage architecture and accuracy advances developed on more powerful devices.
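
The first method described in the abstract, quantized distillation, can be illustrated with a short training-step sketch. The following is a minimal, hypothetical PyTorch rendering, not the authors' released implementation: the uniform per-tensor quantizer, the temperature, the mixing weight alpha, and the number of levels are illustrative assumptions. The idea it shows is the one stated above: the student's forward and backward passes run on weights quantized to a limited set of levels, the loss mixes cross-entropy on the true labels with a distillation term against the teacher's softened outputs, and the optimizer accumulates updates on a full-precision copy of the weights.

import torch
import torch.nn.functional as F

def uniform_quantize(w, num_levels=16):
    # Map a weight tensor onto num_levels evenly spaced values between its min and max.
    # (Illustrative quantizer; the paper also considers non-uniform level placement.)
    w_min, w_max = w.min(), w.max()
    scale = (w_max - w_min).clamp(min=1e-8) / (num_levels - 1)
    return torch.round((w - w_min) / scale) * scale + w_min

def quantized_distillation_step(student, teacher, x, y, optimizer,
                                temperature=4.0, alpha=0.7, num_levels=16):
    # Keep full-precision copies; the forward/backward pass uses quantized weights,
    # while the optimizer keeps accumulating small updates on the full-precision values
    # (straight-through-style gradient handling).
    full_precision = [p.data.clone() for p in student.parameters()]
    for p in student.parameters():
        p.data.copy_(uniform_quantize(p.data, num_levels))

    with torch.no_grad():
        teacher_logits = teacher(x)
    student_logits = student(x)

    # Distillation loss: KL divergence between softened teacher and student
    # distributions, mixed with ordinary cross-entropy on the true labels.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * (temperature ** 2)
    hard = F.cross_entropy(student_logits, y)
    loss = alpha * soft + (1.0 - alpha) * hard

    optimizer.zero_grad()
    loss.backward()

    # Restore the full-precision weights before stepping, so gradients computed at
    # the quantized points update the full-precision copy, which is requantized
    # on the next iteration.
    for p, w in zip(student.parameters(), full_precision):
        p.data.copy_(w)
    optimizer.step()
    return loss.item()

The second method, differentiable quantization, would instead treat the locations of the quantization points themselves as trainable parameters and optimize them with stochastic gradient descent against the same distillation objective; that variant is not sketched here.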
Publishing Year
2018
Date Published
2018-05-01
Proceedings Title
6th International Conference on Learning Representations
Conference
ICLR: International Conference on Learning Representations
Conference Location
Vancouver, Canada
Conference Date
2018-04-30 – 2018-05-03
IST-REx-ID

Cite this

Polino A, Pascanu R, Alistarh D-A. Model compression via distillation and quantization. In: 6th International Conference on Learning Representations. ; 2018.
Polino, A., Pascanu, R., & Alistarh, D.-A. (2018). Model compression via distillation and quantization. In 6th International Conference on Learning Representations. Vancouver, Canada.
Polino, Antonio, Razvan Pascanu, and Dan-Adrian Alistarh. “Model Compression via Distillation and Quantization.” In 6th International Conference on Learning Representations, 2018.
A. Polino, R. Pascanu, and D.-A. Alistarh, “Model compression via distillation and quantization,” in 6th International Conference on Learning Representations, Vancouver, Canada, 2018.
Polino A, Pascanu R, Alistarh D-A. 2018. Model compression via distillation and quantization. 6th International Conference on Learning Representations. ICLR: International Conference on Learning Representations.
Polino, Antonio, et al. “Model Compression via Distillation and Quantization.” 6th International Conference on Learning Representations, 2018.
All files available under the following license(s):
Copyright Statement:
This Item is protected by copyright and/or related rights. [...]
Main File(s)
File Name: 2018_ICLR_Polino.pdf
Access Level: Open Access (OA)
Date Uploaded: 2020-05-26
MD5 Checksum: a4336c167978e81891970e4e4517a8c3


Sources

arXiv 1802.05668
