10 Publications

[10]
2025 | Published | Conference Paper | IST-REx-ID: 20684 | OA
Kurtic E, Marques A, Pandit S, Kurtz M, Alistarh D-A. “Give me BF16 or give me death”? Accuracy-performance trade-offs in LLM quantization. In: Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics; 2025:26872-26886.
[Published Version] View | Files available | arXiv
 
[9]
2025 | Published | Conference Paper | IST-REx-ID: 20820 | OA
Sieberling O, Kuznedelev D, Kurtic E, Alistarh D-A. EvoPress: Accurate dynamic model compression via evolutionary search. In: 42nd International Conference on Machine Learning. Vol 267. ML Research Press; 2025:55556-55590.
[Published Version] View | Files available | arXiv
 
[8]
2025 | Published | Book Chapter | IST-REx-ID: 21257 | OA
Kurtic E, Kuznedelev D, Frantar E, et al. Sparse fine-tuning for inference acceleration of large language models. In: Passban P, Way A, Rezagholizadeh M, eds. Enhancing LLM Performance: Efficacy, Fine-Tuning, and Inference Techniques. Springer Nature; 2025:83-97. doi:10.1007/978-3-031-85747-8_6
[Preprint] View | DOI | Download Preprint (ext.) | arXiv
 
[7]
2024 | Published | Conference Paper | IST-REx-ID: 18975 | OA
Modoranu I-V, Kalinov A, Kurtic E, Frantar E, Alistarh D-A. Error feedback can accurately compress preconditioners. In: 41st International Conference on Machine Learning. Vol 235. ML Research Press; 2024:35910-35933.
[Preprint] View | Download Preprint (ext.) | arXiv
 
[6]
2024 | Published | Conference Paper | IST-REx-ID: 19510 | OA
Modoranu I-V, Safaryan M, Malinovsky G, et al. MicroAdam: Accurate adaptive optimization with low space overhead and provable convergence. In: 38th Conference on Neural Information Processing Systems. Vol 37. Neural Information Processing Systems Foundation; 2024.
[Preprint] View | Files available | Download Preprint (ext.) | arXiv
 
[5]
2024 | Published | Conference Paper | IST-REx-ID: 15011 | OA
Kurtic E, Hoefler T, Alistarh D-A. How to prune your language model: Recovering accuracy on the “Sparsity May Cry” benchmark. In: Proceedings of Machine Learning Research. Vol 234. ML Research Press; 2024:542-553.
[Preprint] View | Download Preprint (ext.) | arXiv
 
[4]
2023 | Published | Conference Paper | IST-REx-ID: 14460 | OA
Nikdan M, Pegolotti T, Iofinova EB, Kurtic E, Alistarh D-A. SparseProp: Efficient sparse backpropagation for faster training of neural networks at the edge. In: Proceedings of the 40th International Conference on Machine Learning. Vol 202. ML Research Press; 2023:26215-26227.
[Preprint] View | Download Preprint (ext.) | arXiv
 
[3]
2023 | Published | Conference Paper | IST-REx-ID: 13053 | OA
Peste A, Vladu A, Kurtic E, Lampert C, Alistarh D-A. CrAM: A compression-aware minimizer. In: 11th International Conference on Learning Representations. OpenReview; 2023.
[Published Version] View | Files available | Download Published Version (ext.) | arXiv
 
[2]
2022 | Published | Conference Paper | IST-REx-ID: 17088 | OA
Kurtic E, Campos D, Nguyen T, et al. The optimal BERT surgeon: Scalable and accurate second-order pruning for large language models. In: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics; 2022:4163-4181. doi:10.18653/v1/2022.emnlp-main.279
[Published Version] View | Files available | DOI | arXiv
 
[1]
2021 | Published | Conference Paper | IST-REx-ID: 11463 | OA
Frantar E, Kurtic E, Alistarh D-A. M-FAC: Efficient matrix-free approximations of second-order information. In: 35th Conference on Neural Information Processing Systems. Vol 34. Neural Information Processing Systems Foundation; 2021:14873-14886.
[Published Version] View | Download Published Version (ext.) | arXiv
 

Citation Style: AMA

