The optimal BERT surgeon: Scalable and accurate second-order pruning for large language models

Kurtic E, Campos D, Nguyen T, Frantar E, Kurtz M, Fineran B, Goin M, Alistarh D-A. 2022. The optimal BERT surgeon: Scalable and accurate second-order pruning for large language models. Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. EMNLP: Conference on Empirical Methods in Natural Language Processing, 4163–4181.

Download
2022_EMNLP_Kurtic.pdf (522.56 KB) [Published Version, Open Access]

Conference Paper | Published | English

Scopus indexed
Author
Kurtic, Eldar (ISTA); Campos, Daniel; Nguyen, Tuan; Frantar, Elias (ISTA); Kurtz, Mark; Fineran, Benjamin; Goin, Michael; Alistarh, Dan-Adrian (ISTA)
Abstract
In this paper, we consider the problem of sparsifying BERT models, which are a key building block for natural language processing, in order to reduce their storage and computational cost. We introduce the Optimal BERT Surgeon (oBERT), an efficient and accurate pruning method based on approximate second-order information, which we show to yield state-of-the-art results in both stages of language tasks: pre-training and fine-tuning. Specifically, oBERT extends existing work on second-order pruning by allowing for pruning weight blocks, and is the first such method applicable at BERT scale. Second, we investigate compounding compression approaches to obtain highly compressed but accurate models for deployment on edge devices. These models significantly push the boundaries of the current state-of-the-art sparse BERT models with respect to all metrics: model size, inference speed, and task accuracy. For example, relative to the dense BERT-base, we obtain 10x model size compression with < 1% accuracy drop, 10x CPU-inference speedup with < 2% accuracy drop, and 29x CPU-inference speedup with < 7.5% accuracy drop. Our code, fully integrated with Transformers and SparseML, is available at https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT.
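The abstract compresses a fair amount of machinery. For readers unfamiliar with Optimal-Brain-Surgeon-style pruning, the sketch below illustrates the classic per-weight form of the idea that oBERT scales up: approximate the loss Hessian with a dampened empirical Fisher built from per-sample gradients, invert it incrementally via the Sherman-Morrison identity, score each weight by rho_i = w_i^2 / (2 * [H^-1]_ii), and compensate each removal with a closed-form update of the surviving weights. The function names and the dense NumPy formulation are illustrative assumptions, not the paper's implementation; oBERT itself prunes blocks of weights against a block-diagonal Fisher, and the actual code lives in the SparseML repository linked above.

import numpy as np

def empirical_fisher_inverse(grads, damp=1e-5):
    """Inverse of H = damp*I + (1/m) * sum_j g_j g_j^T, built one gradient at a
    time with the Sherman-Morrison identity, so H itself is never materialized."""
    m, d = grads.shape
    h_inv = np.eye(d) / damp
    for g in grads:
        hg = h_inv @ g
        h_inv -= np.outer(hg, hg) / (m + g @ hg)
    return h_inv

def obs_prune_one(w, h_inv):
    """One Optimal Brain Surgeon step: zero the weight with the smallest saliency
    rho_i = w_i^2 / (2 * [H^-1]_ii) and apply the compensating update
    delta_w = -(w_i / [H^-1]_ii) * H^-1 e_i to the remaining weights."""
    diag = np.diag(h_inv)
    scores = w ** 2 / (2.0 * diag)
    i = int(np.argmin(scores))
    w = w - (w[i] / diag[i]) * h_inv[:, i]  # drives w[i] exactly to zero
    return w, i

# Toy usage: prune one weight of a 5-dimensional model from 8 sample gradients.
rng = np.random.default_rng(0)
grads = rng.normal(size=(8, 5))
w = rng.normal(size=5)
w, pruned_idx = obs_prune_one(w, empirical_fisher_inverse(grads))

In practice one masks already-pruned coordinates out of the argmin and removes many weights (or, in oBERT, whole blocks) per step; the dense d x d inverse above is precisely the scalability bottleneck that the paper's block-wise Fisher approximation removes.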
Publishing Year
2022
Date Published
2022-12-01
Proceedings Title
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Page
4163-4181
Conference
EMNLP: Conference on Empirical Methods in Natural Language Processing
Conference Location
Abu Dhabi, United Arab Emirates
Conference Date
2022-12-07 – 2022-12-11

Cite this

Kurtic E, Campos D, Nguyen T, et al. The optimal BERT surgeon: Scalable and accurate second-order pruning for large language models. In: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics; 2022:4163-4181. doi:10.18653/v1/2022.emnlp-main.279
Kurtic, E., Campos, D., Nguyen, T., Frantar, E., Kurtz, M., Fineran, B., … Alistarh, D.-A. (2022). The optimal BERT surgeon: Scalable and accurate second-order pruning for large language models. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (pp. 4163–4181). Abu Dhabi, United Arab Emirates: Association for Computational Linguistics. https://doi.org/10.18653/v1/2022.emnlp-main.279
Kurtic, Eldar, Daniel Campos, Tuan Nguyen, Elias Frantar, Mark Kurtz, Benjamin Fineran, Michael Goin, and Dan-Adrian Alistarh. “The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models.” In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 4163–81. Association for Computational Linguistics, 2022. https://doi.org/10.18653/v1/2022.emnlp-main.279.
E. Kurtic et al., “The optimal BERT surgeon: Scalable and accurate second-order pruning for large language models,” in Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, Abu Dhabi, United Arab Emirates, 2022, pp. 4163–4181.
Kurtic E, Campos D, Nguyen T, Frantar E, Kurtz M, Fineran B, Goin M, Alistarh D-A. 2022. The optimal BERT surgeon: Scalable and accurate second-order pruning for large language models. Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. EMNLP: Conference on Empirical Methods in Natural Language Processing, 4163–4181.
Kurtic, Eldar, et al. “The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models.” Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics, 2022, pp. 4163–81, doi:10.18653/v1/2022.emnlp-main.279.
All files available under the following license(s):
Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)
Main File(s)
File Name: 2022_EMNLP_Kurtic.pdf
Access Level: Open Access
Date Uploaded: 2024-07-31
MD5 Checksum: c47b9edd8a9f743ac77a593de6d2e84a


Sources

arXiv 2203.07259
