151 Publications

2025 | Published | Journal Article | IST-REx-ID: 19713 | OA
Talaei S, Ansaripour M, Nadiradze G, Alistarh D-A. Hybrid decentralized optimization: Leveraging both first- and zeroth-order optimizers for faster convergence. Proceedings of the 39th AAAI Conference on Artificial Intelligence. 2025;39(19):20778-20786. doi:10.1609/aaai.v39i19.34290
[Preprint] View | Files available | DOI | Download Preprint (ext.) | arXiv
 
2025 | Published | Conference Paper | IST-REx-ID: 19877 | OA
Frantar E, Castro RL, Chen J, Hoefler T, Alistarh D-A. MARLIN: Mixed-precision auto-regressive parallel inference on Large Language Models. In: Proceedings of the 30th ACM SIGPLAN Annual Symposium on Principles and Practice of Parallel Programming. Association for Computing Machinery; 2025:239-251. doi:10.1145/3710848.3710871
[Published Version] View | Files available | DOI | arXiv
 
2024 | Published | Conference Paper | IST-REx-ID: 17093 | OA
Zakerinia H, Talaei S, Nadiradze G, Alistarh D-A. Communication-efficient federated learning with data and client heterogeneity. In: Proceedings of the 27th International Conference on Artificial Intelligence and Statistics. Vol 238. ML Research Press; 2024:3448-3456.
[Preprint] View | Download Preprint (ext.) | arXiv
 
2024 | Published | Conference Paper | IST-REx-ID: 17332 | OA
Kokorin I, Yudov V, Aksenov V, Alistarh D-A. Wait-free trees with asymptotically-efficient range queries. In: 2024 IEEE International Parallel and Distributed Processing Symposium. IEEE; 2024:169-179. doi:10.1109/IPDPS57955.2024.00023
[Preprint] View | DOI | Download Preprint (ext.) | arXiv
 
2024 | Published | Conference Paper | IST-REx-ID: 15011 | OA
Kurtic E, Hoefler T, Alistarh D-A. How to prune your language model: Recovering accuracy on the “Sparsity May Cry” benchmark. In: Proceedings of Machine Learning Research. Vol 234. ML Research Press; 2024:542-553.
[Preprint] View | Download Preprint (ext.) | arXiv
 
2024 | Published | Conference Paper | IST-REx-ID: 18070
Chatterjee B, Kungurtsev V, Alistarh D-A. Federated SGD with local asynchrony. In: Proceedings of the 44th International Conference on Distributed Computing Systems. IEEE; 2024:857-868. doi:10.1109/ICDCS60910.2024.00084
View | DOI
 
2024 | Published | Conference Paper | IST-REx-ID: 18113 | OA
Egiazarian V, Panferov A, Kuznedelev D, Frantar E, Babenko A, Alistarh D-A. Extreme compression of large language models via additive quantization. In: Proceedings of the 41st International Conference on Machine Learning. Vol 235. ML Research Press; 2024:12284-12303.
[Preprint] View | Download Preprint (ext.) | arXiv
 
2024 | Published | Conference Paper | IST-REx-ID: 18117 | OA
Nikdan M, Tabesh S, Crncevic E, Alistarh D-A. RoSA: Accurate parameter-efficient fine-tuning via robust adaptation. In: Proceedings of the 41st International Conference on Machine Learning. Vol 235. ML Research Press; 2024:38187-38206.
[Preprint] View | Files available | Download Preprint (ext.) | arXiv
 
2024 | Published | Conference Paper | IST-REx-ID: 18975 | OA
Modoranu I-V, Kalinov A, Kurtic E, Frantar E, Alistarh D-A. Error feedback can accurately compress preconditioners. In: Proceedings of the 41st International Conference on Machine Learning. Vol 235. ML Research Press; 2024:35910-35933.
[Preprint] View | Download Preprint (ext.) | arXiv
 
2024 | Published | Conference Paper | IST-REx-ID: 18977 | OA
Dettmers T, Svirschevski RA, Egiazarian V, et al. SpQR: A sparse-quantized representation for near-lossless LLM weight compression. In: The 12th International Conference on Learning Representations. OpenReview; 2024.
[Preprint] View | Download Preprint (ext.) | arXiv
 
2024 | Published | Thesis | IST-REx-ID: 17485 | OA
Frantar E. Compressing large neural networks: Algorithms, systems and scaling laws. 2024. doi:10.15479/at:ista:17485
[Published Version] View | Files available | DOI
 
2024 | Published | Conference Paper | IST-REx-ID: 18061 | OA
Frantar E, Alistarh D-A. QMoE: Sub-1-bit compression of trillion parameter models. In: Gibbons P, Pekhimenko G, De Sa C, eds. Proceedings of Machine Learning and Systems. Vol 6; 2024.
[Published Version] View | Files available | Download Published Version (ext.)
 
2024 | Published | Conference Paper | IST-REx-ID: 18062 | OA
Frantar E, Ruiz CR, Houlsby N, Alistarh D-A, Evci U. Scaling laws for sparsely-connected foundation models. In: The 12th International Conference on Learning Representations; 2024.
[Published Version] View | Files available | Download Published Version (ext.) | arXiv
 
2024 | Published | Conference Paper | IST-REx-ID: 17329 | OA
Alistarh D-A, Chatterjee K, Karrabi M, Lazarsfeld JM. Game dynamics and equilibrium computation in the population protocol model. In: Proceedings of the 43rd Annual ACM Symposium on Principles of Distributed Computing. Association for Computing Machinery; 2024:40-49. doi:10.1145/3662158.3662768
[Published Version] View | Files available | DOI
 
2024 | Published | Conference Paper | IST-REx-ID: 18976 | OA
Islamov R, Safaryan M, Alistarh D-A. AsGrad: A sharp unified analysis of asynchronous-SGD algorithms. In: Proceedings of The 27th International Conference on Artificial Intelligence and Statistics. Vol 238. ML Research Press; 2024:649-657.
[Preprint] View | Download Preprint (ext.) | arXiv
 
2024 | Published | Conference Paper | IST-REx-ID: 18121 | OA
Moakhar AS, Iofinova EB, Frantar E, Alistarh D-A. SPADE: Sparsity-guided debugging for deep neural networks. In: Proceedings of the 41st International Conference on Machine Learning. Vol 235. ML Research Press; 2024:45955-45987.
[Preprint] View | Files available | Download Preprint (ext.) | arXiv
 
2024 | Published | Thesis | IST-REx-ID: 17490 | OA
Markov I. Communication-efficient distributed training of deep neural networks: An algorithms and systems perspective. 2024. doi:10.15479/at:ista:17490
[Published Version] View | Files available | DOI
 
2024 | Published | Conference Paper | IST-REx-ID: 17456 | OA
Markov I, Alimohammadi K, Frantar E, Alistarh D-A. L-GreCo: Layerwise-adaptive gradient compression for efficient data-parallel deep learning. In: Gibbons P, Pekhimenko G, De Sa C, eds. Proceedings of Machine Learning and Systems. Vol 6. Association for Computing Machinery; 2024.
[Published Version] View | Files available | Download Published Version (ext.) | arXiv
 
2024 | Published | Conference Paper | IST-REx-ID: 19518 | OA
Wu D, Modoranu I-V, Safaryan M, Kuznedelev D, Alistarh D-A. The iterative optimal brain surgeon: Faster sparse recovery by leveraging second-order information. In: 38th Conference on Neural Information Processing Systems. Vol 37. Neural Information Processing Systems Foundation; 2024.
[Preprint] View | Download Preprint (ext.) | arXiv
 
2024 | Published | Conference Paper | IST-REx-ID: 19510 | OA
Modoranu I-V, Safaryan M, Malinovsky G, et al. MICROADAM: Accurate adaptive optimization with low space overhead and provable convergence. In: 38th Conference on Neural Information Processing Systems. Vol 37. Neural Information Processing Systems Foundation; 2024.
[Preprint] View | Files available | Download Preprint (ext.) | arXiv
 