158 Publications

2024 | Published | Conference Paper | IST-REx-ID: 17456 | OA
Markov I, Alimohammadi K, Frantar E, Alistarh D-A. L-GreCo: Layerwise-adaptive gradient compression for efficient data-parallel deep learning. In: Gibbons P, Pekhimenko G, De Sa C, eds. Proceedings of Machine Learning and Systems. Vol 6. Association for Computing Machinery; 2024.
[Published Version] View | Files available | Download Published Version (ext.) | arXiv
 
2024 | Published | Conference Paper | IST-REx-ID: 19518 | OA
Wu D, Modoranu I-V, Safaryan M, Kuznedelev D, Alistarh D-A. The iterative optimal brain surgeon: Faster sparse recovery by leveraging second-order information. In: 38th Conference on Neural Information Processing Systems. Vol 37. Neural Information Processing Systems Foundation; 2024.
[Preprint] View | Download Preprint (ext.) | arXiv
 
2024 | Published | Conference Paper | IST-REx-ID: 19510 | OA
Modoranu I-V, Safaryan M, Malinovsky G, et al. MICROADAM: Accurate adaptive optimization with low space overhead and provable convergence. In: 38th Conference on Neural Information Processing Systems. Vol 37. Neural Information Processing Systems Foundation; 2024.
[Preprint] View | Files available | Download Preprint (ext.) | arXiv
 
2024 | Published | Conference Paper | IST-REx-ID: 19511 | OA
Ashkboos S, Mohtashami A, Croci ML, et al. QuaRot: Outlier-free 4-bit inference in rotated LLMs. In: 38th Conference on Neural Information Processing Systems. Vol 37. Neural Information Processing Systems Foundation; 2024.
[Preprint] View | Files available | Download Preprint (ext.) | arXiv
 
2024 | Published | Conference Paper | IST-REx-ID: 19519 | OA
Malinovskii V, Mazur D, Ilin I, et al. PV-tuning: Beyond straight-through estimation for extreme LLM compression. In: 38th Conference on Neural Information Processing Systems. Vol 37. Neural Information Processing Systems Foundation; 2024.
[Published Version] View | Files available | arXiv
 
2024 | Published | Conference Paper | IST-REx-ID: 17332 | OA
Kokorin I, Yudov V, Aksenov V, Alistarh D-A. Wait-free trees with asymptotically-efficient range queries. In: 2024 IEEE International Parallel and Distributed Processing Symposium. IEEE; 2024:169-179. doi:10.1109/IPDPS57955.2024.00023
[Preprint] View | DOI | Download Preprint (ext.) | WoS | arXiv
 
2024 | Published | Conference Paper | IST-REx-ID: 18070
Chatterjee B, Kungurtsev V, Alistarh D-A. Federated SGD with local asynchrony. In: Proceedings of the 44th International Conference on Distributed Computing Systems. IEEE; 2024:857-868. doi:10.1109/ICDCS60910.2024.00084
View | DOI | WoS
 
2024 | Published | Thesis | IST-REx-ID: 17490 | OA
Markov I. Communication-efficient distributed training of deep neural networks: An algorithms and systems perspective. 2024. doi:10.15479/at:ista:17490
[Published Version] View | Files available | DOI
 
2024 | Research Data Reference | IST-REx-ID: 19884 | OA
Frantar E, Castro R, Chen J, Hoefler T, Alistarh D-A. MARLIN: Mixed-precision auto-regressive parallel inference on Large Language Models. 2024. doi:10.5281/ZENODO.14213091
[Published Version] View | Files available | DOI | Download Published Version (ext.)
 
2024 | Published | Thesis | IST-REx-ID: 17465 | OA
Shevchenko A. High-dimensional limits in artificial neural networks. 2024. doi:10.15479/at:ista:17465
[Published Version] View | Files available | DOI
 
2024 | Published | Conference Paper | IST-REx-ID: 17469 | OA
Kögler K, Shevchenko A, Hassani H, Mondelli M. Compression of structured data with autoencoders: Provable benefit of nonlinearities and depth. In: Proceedings of the 41st International Conference on Machine Learning. Vol 235. ML Research Press; 2024:24964-25015.
[Published Version] View | Files available | Download Published Version (ext.) | arXiv
 
2023 | Published | Journal Article | IST-REx-ID: 13179 | OA
Koval N, Khalanskiy D, Alistarh D-A. CQS: A formally-verified framework for fair and abortable synchronization. Proceedings of the ACM on Programming Languages. 2023;7. doi:10.1145/3591230
[Published Version] View | Files available | DOI
 
2023 | Published | Journal Article | IST-REx-ID: 12330 | OA
Aksenov V, Alistarh D-A, Drozdova A, Mohtashami A. The splay-list: A distribution-adaptive concurrent skip-list. Distributed Computing. 2023;36:395-418. doi:10.1007/s00446-022-00441-x
[Preprint] View | DOI | Download Preprint (ext.) | WoS | arXiv
 
2023 | Published | Conference Paper | IST-REx-ID: 12735 | OA
Koval N, Alistarh D-A, Elizarov R. Fast and scalable channels in Kotlin Coroutines. In: Proceedings of the ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming. Association for Computing Machinery; 2023:107-118. doi:10.1145/3572848.3577481
[Preprint] View | DOI | Download Preprint (ext.) | arXiv
 
2023 | Published | Conference Poster | IST-REx-ID: 12736 | OA
Aksenov V, Brown TA, Fedorov A, Kokorin I. Unexpected scaling in path copying trees. Association for Computing Machinery; 2023:438-440. doi:10.1145/3572848.3577512
[Published Version] View | DOI | Download Published Version (ext.)
 
2023 | Published | Journal Article | IST-REx-ID: 14815 | OA
Beznosikov A, Horvath S, Richtarik P, Safaryan M. On biased compression for distributed learning. Journal of Machine Learning Research. 2023;24:1-50.
[Published Version] View | Files available | WoS | arXiv
 
2023 | Published | Conference Paper | IST-REx-ID: 14460 | OA
Nikdan M, Pegolotti T, Iofinova EB, Kurtic E, Alistarh D-A. SparseProp: Efficient sparse backpropagation for faster training of neural networks at the edge. In: Proceedings of the 40th International Conference on Machine Learning. Vol 202. ML Research Press; 2023:26215-26227.
[Preprint] View | Download Preprint (ext.) | arXiv
 
2023 | Published | Conference Paper | IST-REx-ID: 17378 | OA
Frantar E, Ashkboos S, Hoefler T, Alistarh D-A. OPTQ: Accurate post-training quantization for generative pre-trained transformers. In: 11th International Conference on Learning Representations. International Conference on Learning Representations; 2023.
[Published Version] View | Files available
 
2023 | Published | Conference Paper | IST-REx-ID: 14458 | OA
Frantar E, Alistarh D-A. SparseGPT: Massive language models can be accurately pruned in one-shot. In: Proceedings of the 40th International Conference on Machine Learning. Vol 202. ML Research Press; 2023:10323-10337.
[Preprint] View | Files available | Download Preprint (ext.) | arXiv
 
2023 | Published | Journal Article | IST-REx-ID: 12566 | OA
Alistarh D-A, Ellen F, Rybicki J. Wait-free approximate agreement on graphs. Theoretical Computer Science. 2023;948(2). doi:10.1016/j.tcs.2023.113733
[Published Version] View | Files available | DOI | WoS
 
