Bias in pruned vision models: In-depth analysis and countermeasures

Iofinova EB, Peste A, Alistarh D-A. 2023. Bias in pruned vision models: In-depth analysis and countermeasures. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. CVPR: Conference on Computer Vision and Pattern Recognition, 24364–24373.

Conference Paper | Published | English

Corresponding author has ISTA affiliation

Abstract
Pruning—that is, setting a significant subset of the parameters of a neural network to zero—is one of the most popular methods of model compression. Yet, several recent works have raised the issue that pruning may induce or exacerbate bias in the output of the compressed model. Despite existing evidence for this phenomenon, the relationship between neural network pruning and induced bias is not well-understood. In this work, we systematically investigate and characterize this phenomenon in Convolutional Neural Networks for computer vision. First, we show that it is in fact possible to obtain highly-sparse models, e.g. with less than 10% remaining weights, which do not decrease in accuracy nor substantially increase in bias when compared to dense models. At the same time, we also find that, at higher sparsities, pruned models exhibit higher uncertainty in their outputs, as well as increased correlations, which we directly link to increased bias. We propose easy-to-use criteria which, based only on the uncompressed model, establish whether bias will increase with pruning, and identify the samples most susceptible to biased predictions post-compression. Our code can be found at https://github.com/IST-DASLab/pruned-vision-model-bias.
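As an illustration of the operation the abstract refers to, the sketch below applies unstructured magnitude pruning to a convolutional network, reaching 90% sparsity (i.e. under 10% remaining weights). It is only a sketch of the general technique, not the paper's experimental pipeline: the choice of torchvision's ResNet-50, one-shot global L1 pruning via torch.nn.utils.prune, and the 90% target are assumptions made here for brevity.

```python
# Minimal sketch (not the paper's exact procedure): one-shot global magnitude
# pruning of a torchvision ResNet-50 to 90% sparsity.
import torch
import torch.nn.utils.prune as prune
from torchvision.models import resnet50, ResNet50_Weights

model = resnet50(weights=ResNet50_Weights.IMAGENET1K_V1)

# Collect the weight tensors of all convolutional and linear layers.
to_prune = [
    (m, "weight")
    for m in model.modules()
    if isinstance(m, (torch.nn.Conv2d, torch.nn.Linear))
]

# Zero out the 90% of weights with the smallest magnitude, ranked globally.
prune.global_unstructured(to_prune, pruning_method=prune.L1Unstructured, amount=0.9)

# Make the masks permanent; in practice the pruned model would be fine-tuned
# (or pruned gradually) to recover dense-level accuracy before comparing bias.
for module, name in to_prune:
    prune.remove(module, name)

total = sum(m.weight.numel() for m, _ in to_prune)
nonzero = sum(int(m.weight.count_nonzero()) for m, _ in to_prune)
print(f"Remaining weights: {100 * nonzero / total:.1f}%")
```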
Publishing Year
2023
Date Published
2023-08-22
Proceedings Title
2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition
Publisher
IEEE
Acknowledgement
The authors would like to sincerely thank Sara Hooker for her feedback during the development of this work. EI was supported in part by the FWF DK VGSCO, grant agreement number W1260-N35. AP and DA acknowledge generous ERC support, via Starting Grant 805223 ScaleML.
Page
24364-24373
Conference
CVPR: Conference on Computer Vision and Pattern Recognition
Conference Location
Vancouver, BC, Canada
Conference Date
2023-06-17 – 2023-06-24

Cite this

Iofinova EB, Peste A, Alistarh D-A. Bias in pruned vision models: In-depth analysis and countermeasures. In: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. IEEE; 2023:24364-24373. doi:10.1109/cvpr52729.2023.02334
Iofinova, E. B., Peste, A., & Alistarh, D.-A. (2023). Bias in pruned vision models: In-depth analysis and countermeasures. In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 24364–24373). Vancouver, BC, Canada: IEEE. https://doi.org/10.1109/cvpr52729.2023.02334
Iofinova, Eugenia B., Alexandra Peste, and Dan-Adrian Alistarh. “Bias in Pruned Vision Models: In-Depth Analysis and Countermeasures.” In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 24364–73. IEEE, 2023. https://doi.org/10.1109/cvpr52729.2023.02334.
E. B. Iofinova, A. Peste, and D.-A. Alistarh, “Bias in pruned vision models: In-depth analysis and countermeasures,” in 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 2023, pp. 24364–24373.
Iofinova EB, Peste A, Alistarh D-A. 2023. Bias in pruned vision models: In-depth analysis and countermeasures. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. CVPR: Conference on Computer Vision and Pattern Recognition, 24364–24373.
Iofinova, Eugenia B., et al. “Bias in Pruned Vision Models: In-Depth Analysis and Countermeasures.” 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, IEEE, 2023, pp. 24364–73, doi:10.1109/cvpr52729.2023.02334.
All files available under the following license(s):
Copyright Statement:
This Item is protected by copyright and/or related rights. [...]

Link(s) to Main File(s)
Access Level
Open Access


Sources

arXiv 2304.12622
