
121 Publications


2023 | Journal Article | IST-REx-ID: 12566 | OA
Wait-free approximate agreement on graphs
D.-A. Alistarh, F. Ellen, J. Rybicki, Theoretical Computer Science 948 (2023).
[Published Version] View | Files available | DOI | WoS
 

2023 | Thesis | IST-REx-ID: 13074 | OA
Efficiency and generalization of sparse neural networks
E.-A. Peste, Efficiency and Generalization of Sparse Neural Networks, Institute of Science and Technology Austria, 2023.
[Published Version] View | Files available | DOI
 

2023 | Journal Article | IST-REx-ID: 12330 | OA
The splay-list: A distribution-adaptive concurrent skip-list
V. Aksenov, D.-A. Alistarh, A. Drozdova, A. Mohtashami, Distributed Computing 36 (2023) 395–418.
[Preprint] View | DOI | Download Preprint (ext.) | WoS | arXiv
 

2023 | Conference Paper | IST-REx-ID: 14461 | OA
Quantized distributed training of large models with convergence guarantees
I. Markov, A. Vladu, Q. Guo, D.-A. Alistarh, in: Proceedings of the 40th International Conference on Machine Learning, ML Research Press, 2023, pp. 24020–24044.
[Preprint] View | Download Preprint (ext.) | arXiv
 

2023 | Conference Paper | IST-REx-ID: 14459 | OA
Fundamental limits of two-layer autoencoders, and achieving them with gradient methods
A. Shevchenko, K. Kögler, H. Hassani, M. Mondelli, in: Proceedings of the 40th International Conference on Machine Learning, ML Research Press, 2023, pp. 31151–31209.
[Preprint] View | Download Preprint (ext.) | arXiv
 

2023 | Conference Paper | IST-REx-ID: 14460 | OA
SparseProp: Efficient sparse backpropagation for faster training of neural networks at the edge
M. Nikdan, T. Pegolotti, E.B. Iofinova, E. Kurtic, D.-A. Alistarh, in: Proceedings of the 40th International Conference on Machine Learning, ML Research Press, 2023, pp. 26215–26227.
[Preprint] View | Download Preprint (ext.) | arXiv
 

2023 | Conference Paper | IST-REx-ID: 14458 | OA
SparseGPT: Massive language models can be accurately pruned in one-shot
E. Frantar, D.-A. Alistarh, in: Proceedings of the 40th International Conference on Machine Learning, ML Research Press, 2023, pp. 10323–10337.
[Preprint] View | Download Preprint (ext.) | arXiv
 

2023 | Journal Article | IST-REx-ID: 14364 | OA
Why extension-based proofs fail
D.-A. Alistarh, J. Aspnes, F. Ellen, R. Gelashvili, L. Zhu, SIAM Journal on Computing 52 (2023) 913–944.
[Preprint] View | Files available | DOI | Download Preprint (ext.) | WoS | arXiv
 

2023 | Conference Paper | IST-REx-ID: 14771 | OA
Bias in pruned vision models: In-depth analysis and countermeasures
E.B. Iofinova, E.-A. Peste, D.-A. Alistarh, in: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, IEEE, 2023, pp. 24364–24373.
[Preprint] View | Files available | DOI | Download Preprint (ext.) | WoS | arXiv
 

2023 | Journal Article | IST-REx-ID: 14815 | OA
On biased compression for distributed learning
A. Beznosikov, S. Horvath, P. Richtarik, M. Safaryan, Journal of Machine Learning Research 24 (2023) 1–50.
[Published Version] View | Files available | WoS | arXiv
 

Filters and Search Terms

department=DaAl
