121 Publications
2023 | Journal Article | IST-REx-ID: 12566
Alistarh D-A, Ellen F, Rybicki J. Wait-free approximate agreement on graphs. Theoretical Computer Science. 2023;948(2). doi:10.1016/j.tcs.2023.113733
[Published Version]
2023 | Thesis | IST-REx-ID: 13074
Peste E-A. Efficiency and generalization of sparse neural networks. 2023. doi:10.15479/at:ista:13074
[Published Version]
2023 | Journal Article | IST-REx-ID: 12330
Aksenov V, Alistarh D-A, Drozdova A, Mohtashami A. The splay-list: A distribution-adaptive concurrent skip-list. Distributed Computing. 2023;36:395-418. doi:10.1007/s00446-022-00441-x
[Preprint]
2023 | Conference Paper | IST-REx-ID: 14461
Markov I, Vladu A, Guo Q, Alistarh D-A. Quantized distributed training of large models with convergence guarantees. In: Proceedings of the 40th International Conference on Machine Learning. Vol 202. ML Research Press; 2023:24020-24044.
[Preprint]
2023 | Conference Paper | IST-REx-ID: 14459
Shevchenko A, Kögler K, Hassani H, Mondelli M. Fundamental limits of two-layer autoencoders, and achieving them with gradient methods. In: Proceedings of the 40th International Conference on Machine Learning. Vol 202. ML Research Press; 2023:31151-31209.
[Preprint]
2023 | Conference Paper | IST-REx-ID: 14460
Nikdan M, Pegolotti T, Iofinova EB, Kurtic E, Alistarh D-A. SparseProp: Efficient sparse backpropagation for faster training of neural networks at the edge. In: Proceedings of the 40th International Conference on Machine Learning. Vol 202. ML Research Press; 2023:26215-26227.
[Preprint]
2023 | Conference Paper | IST-REx-ID: 14458
Frantar E, Alistarh D-A. SparseGPT: Massive language models can be accurately pruned in one-shot. In: Proceedings of the 40th International Conference on Machine Learning. Vol 202. ML Research Press; 2023:10323-10337.
[Preprint]
2023 | Journal Article | IST-REx-ID: 14364
Alistarh D-A, Aspnes J, Ellen F, Gelashvili R, Zhu L. Why extension-based proofs fail. SIAM Journal on Computing. 2023;52(4):913-944. doi:10.1137/20M1375851
[Preprint]
2023 | Conference Paper | IST-REx-ID: 14771
Iofinova EB, Peste E-A, Alistarh D-A. Bias in pruned vision models: In-depth analysis and countermeasures. In: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. IEEE; 2023:24364-24373. doi:10.1109/cvpr52729.2023.02334
[Preprint]
2023 | Journal Article | IST-REx-ID: 14815
Beznosikov A, Horvath S, Richtarik P, Safaryan M. On biased compression for distributed learning. Journal of Machine Learning Research. 2023;24:1-50.
[Published Version]