Knowledge distillation performs partial variance reduction

Safaryan M, Krumes A, Alistarh D-A. 2023. Knowledge distillation performs partial variance reduction. 36th Conference on Neural Information Processing Systems. NeurIPS: Neural Information Processing Systems, NeurIPS, vol. 36.

Conference Paper | Published | English

Scopus indexed

Corresponding author has ISTA affiliation

Series Title
NeurIPS
Abstract
Knowledge distillation is a popular approach for enhancing the performance of "student" models, with lower representational capacity, by taking advantage of more powerful "teacher" models. Despite its apparent simplicity, the underlying mechanics of knowledge distillation (KD) are not yet fully understood. In this work, we shed new light on the inner workings of this method by examining it from an optimization perspective. Specifically, we show that, in the context of linear and deep linear models, KD can be interpreted as a novel type of stochastic variance reduction mechanism. We provide a detailed convergence analysis of the resulting dynamics, which holds under standard assumptions for both strongly-convex and non-convex losses, showing that KD acts as a form of partial variance reduction, which can reduce the stochastic gradient noise, but may not eliminate it completely, depending on the properties of the "teacher" model. Our analysis puts further emphasis on the need for careful parametrization of KD, in particular w.r.t. the weighting of the distillation loss, and is validated empirically on both linear models and deep neural networks.
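The abstract's linear-model claim can be made concrete with a small simulation. The sketch below is illustrative only, not the authors' code; the names lam, w_teacher, and kd_sgd are assumptions. It runs plain SGD on a linear "student" whose per-sample target is a convex combination of the noisy label and a fixed linear "teacher" prediction, weighted by a distillation parameter lam. Near the teacher's solution, a larger lam replaces noisy labels with deterministic teacher outputs, so the stochastic gradient variance shrinks but does not vanish unless lam = 1 and the teacher is exact, which is the "partial variance reduction" effect described above.

```python
import numpy as np

# Illustrative sketch of self-distillation on a linear model (not the paper's code).
# The distilled objective per sample is
#   (1 - lam) * (x.w - y)^2 / 2  +  lam * (x.w - x.w_teacher)^2 / 2,
# whose stochastic gradient is (x.w - target) * x, with
#   target = (1 - lam) * y + lam * (x.w_teacher).

rng = np.random.default_rng(0)
n, d = 1000, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.5 * rng.normal(size=n)       # labels with additive noise

# "Teacher": least-squares fit on the full dataset (a stand-in for a stronger model).
w_teacher = np.linalg.lstsq(X, y, rcond=None)[0]

def kd_sgd(lam, steps=20_000, lr=0.01):
    """Plain SGD on the distilled objective with distillation weight lam."""
    w = np.zeros(d)
    for _ in range(steps):
        i = rng.integers(n)                     # sample one data point
        target = (1 - lam) * y[i] + lam * (X[i] @ w_teacher)
        w -= lr * (X[i] @ w - target) * X[i]    # stochastic gradient step
    return w

for lam in (0.0, 0.5, 0.9):
    w = kd_sgd(lam)
    print(f"lam={lam:.1f}  ||w - w_teacher|| = {np.linalg.norm(w - w_teacher):.3f}")
```

With lam = 0 this is ordinary SGD on noisy labels; increasing lam pulls the iterates toward the teacher and damps the label-noise component of the gradient, mirroring the weighting trade-off the abstract highlights.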
Publishing Year
2023
Date Published
2023-12-15
Proceedings Title
36th Conference on Neural Information Processing Systems
Acknowledgement
MS has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 101034413.
Volume
36
Conference
NeurIPS: Neural Information Processing Systems
Conference Location
New Orleans, LA, United States
Conference Date
2023-12-10 – 2023-12-16
All files available under the following license(s):
Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)
Main File(s)
File Name
2023_Neurips_Safaryan.pdf (672.57 KB) [Published Version]
Access Level
OA Open Access
Date Uploaded
2024-05-22
MD5 Checksum
288c5148a85abf24ad5e22a6b1183655


Sources

arXiv 2305.17581
