Eldar Kurtic
7 Publications
2024 | Published | Conference Paper | IST-REx-ID: 15011

Kurtic, E., Hoefler, T., & Alistarh, D.-A. (2024). How to prune your language model: Recovering accuracy on the “Sparsity May Cry” benchmark. In Proceedings of Machine Learning Research (Vol. 234, pp. 542–553). Hong Kong, China: ML Research Press.
[Preprint] View | Download Preprint (ext.) | arXiv
2024 | Published | Conference Paper | IST-REx-ID: 18975

Modoranu, I.-V., Kalinov, A., Kurtic, E., Frantar, E., & Alistarh, D.-A. (2024). Error feedback can accurately compress preconditioners. In 41st International Conference on Machine Learning (Vol. 235, pp. 35910–35933). Vienna, Austria: ML Research Press.
[Preprint] View | Download Preprint (ext.) | arXiv
2024 | Published | Conference Paper | IST-REx-ID: 19510

Modoranu, I.-V., Safaryan, M., Malinovsky, G., Kurtic, E., Robert, T., Richtárik, P., & Alistarh, D.-A. (2024). MicroAdam: Accurate adaptive optimization with low space overhead and provable convergence. In 38th Conference on Neural Information Processing Systems (Vol. 37). Neural Information Processing Systems Foundation.
[Preprint] View | Files available | Download Preprint (ext.) | arXiv
2023 | Published | Conference Paper | IST-REx-ID: 14460

Nikdan, M., Pegolotti, T., Iofinova, E. B., Kurtic, E., & Alistarh, D.-A. (2023). SparseProp: Efficient sparse backpropagation for faster training of neural networks at the edge. In Proceedings of the 40th International Conference on Machine Learning (Vol. 202, pp. 26215–26227). Honolulu, HI, United States: ML Research Press.
[Preprint] View | Download Preprint (ext.) | arXiv
2023 | Published | Conference Paper | IST-REx-ID: 13053

Peste, A., Vladu, A., Kurtic, E., Lampert, C., & Alistarh, D.-A. (2023). CrAM: A compression-aware minimizer. In 11th International Conference on Learning Representations. Kigali, Rwanda: OpenReview.
[Published Version] View | Files available | Download Published Version (ext.) | arXiv
2022 | Published | Conference Paper | IST-REx-ID: 17088

Kurtic, E., Campos, D., Nguyen, T., Frantar, E., Kurtz, M., Fineran, B., … Alistarh, D.-A. (2022). The optimal BERT surgeon: Scalable and accurate second-order pruning for large language models. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (pp. 4163–4181). Abu Dhabi, United Arab Emirates: Association for Computational Linguistics. https://doi.org/10.18653/v1/2022.emnlp-main.279
[Published Version] View | Files available | DOI | arXiv
2021 | Published | Conference Paper | IST-REx-ID: 11463

Frantar, E., Kurtic, E., & Alistarh, D.-A. (2021). M-FAC: Efficient matrix-free approximations of second-order information. In 35th Conference on Neural Information Processing Systems (Vol. 34, pp. 14873–14886). Virtual, Online: Neural Information Processing Systems Foundation.
[Published Version] View | Download Published Version (ext.) | arXiv