Eldar Kurtic
7 Publications
2024 | Published | Conference Paper | IST-REx-ID: 15011

Kurtic E, Hoefler T, Alistarh D-A. How to prune your language model: Recovering accuracy on the “Sparsity May Cry” benchmark. In: Proceedings of Machine Learning Research. Vol 234. ML Research Press; 2024:542-553.
[Preprint] | arXiv
2024 | Published | Conference Paper | IST-REx-ID: 18975

Modoranu I-V, Kalinov A, Kurtic E, Frantar E, Alistarh D-A. Error feedback can accurately compress preconditioners. In: 41st International Conference on Machine Learning. Vol 235. ML Research Press; 2024:35910-35933.
[Preprint] | arXiv
2024 | Published | Conference Paper | IST-REx-ID: 19510

Modoranu I-V, Safaryan M, Malinovsky G, et al. MicroAdam: Accurate adaptive optimization with low space overhead and provable convergence. In: 38th Conference on Neural Information Processing Systems. Vol 37. Neural Information Processing Systems Foundation; 2024.
[Preprint] | Files available | arXiv
2023 | Published | Conference Paper | IST-REx-ID: 14460

Nikdan M, Pegolotti T, Iofinova EB, Kurtic E, Alistarh D-A. SparseProp: Efficient sparse backpropagation for faster training of neural networks at the edge. In: Proceedings of the 40th International Conference on Machine Learning. Vol 202. ML Research Press; 2023:26215-26227.
[Preprint] | arXiv
2023 | Published | Conference Paper | IST-REx-ID: 13053

Krumes A, Vladu A, Kurtic E, Lampert C, Alistarh D-A. CrAM: A Compression-Aware Minimizer. In: 11th International Conference on Learning Representations. OpenReview; 2023.
[Published Version] | Files available | arXiv
2022 | Published | Conference Paper | IST-REx-ID: 17088

Kurtic E, Campos D, Nguyen T, et al. The optimal BERT surgeon: Scalable and accurate second-order pruning for large language models. In: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics; 2022:4163-4181. doi:10.18653/v1/2022.emnlp-main.279
[Published Version] | Files available | DOI | arXiv
2021 | Published | Conference Paper | IST-REx-ID: 11463

Frantar E, Kurtic E, Alistarh D-A. M-FAC: Efficient matrix-free approximations of second-order information. In: 35th Conference on Neural Information Processing Systems. Vol 34. Neural Information Processing Systems Foundation; 2021:14873-14886.
[Published Version] | arXiv