7 Publications


[7]
2024 | Published | Conference Paper | IST-REx-ID: 15011 | OA
Kurtic E, Hoefler T, Alistarh D-A. How to prune your language model: Recovering accuracy on the “Sparsity May Cry” benchmark. In: Proceedings of Machine Learning Research. Vol 234. ML Research Press; 2024:542-553.
 
[6]
2024 | Published | Conference Paper | IST-REx-ID: 18975 | OA
Modoranu I-V, Kalinov A, Kurtic E, Frantar E, Alistarh D-A. Error feedback can accurately compress preconditioners. In: 41st International Conference on Machine Learning. Vol 235. ML Research Press; 2024:35910-35933.
 
[5]
2024 | Published | Conference Paper | IST-REx-ID: 19510 | OA
Modoranu I-V, Safaryan M, Malinovsky G, et al. MICROADAM: Accurate adaptive optimization with low space overhead and provable convergence. In: 38th Conference on Neural Information Processing Systems. Vol 37. Neural Information Processing Systems Foundation; 2024.
 
[4]
2023 | Published | Conference Paper | IST-REx-ID: 14460 | OA
Nikdan M, Pegolotti T, Iofinova EB, Kurtic E, Alistarh D-A. SparseProp: Efficient sparse backpropagation for faster training of neural networks at the edge. In: Proceedings of the 40th International Conference on Machine Learning. Vol 202. ML Research Press; 2023:26215-26227.
 
[3]
2023 | Published | Conference Paper | IST-REx-ID: 13053 | OA
Krumes A, Vladu A, Kurtic E, Lampert C, Alistarh D-A. CrAM: A Compression-Aware Minimizer. In: 11th International Conference on Learning Representations. OpenReview; 2023.
 
[2]
2022 | Published | Conference Paper | IST-REx-ID: 17088 | OA
Kurtic E, Campos D, Nguyen T, et al. The optimal BERT surgeon: Scalable and accurate second-order pruning for large language models. In: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics; 2022:4163-4181. doi:10.18653/v1/2022.emnlp-main.279
 
[1]
2021 | Published | Conference Paper | IST-REx-ID: 11463 | OA
Frantar E, Kurtic E, Alistarh D-A. M-FAC: Efficient matrix-free approximations of second-order information. In: 35th Conference on Neural Information Processing Systems. Vol 34. Neural Information Processing Systems Foundation; 2021:14873-14886.
 


