5 Publications

[5]
2023 | Conference Paper | IST-REx-ID: 14461 | OA
I. Markov, A. Vladu, Q. Guo, and D.-A. Alistarh, “Quantized distributed training of large models with convergence guarantees,” in Proceedings of the 40th International Conference on Machine Learning, Honolulu, HI, United States, 2023, vol. 202, pp. 24020–24044.
[Preprint] View | Download Preprint (ext.) | arXiv
 
[4]
2022 | Conference Paper | IST-REx-ID: 12780 | OA
I. Markov, H. Ramezanikebrya, and D.-A. Alistarh, “CGX: Adaptive system support for communication-efficient deep learning,” in Proceedings of the 23rd ACM/IFIP International Middleware Conference, Quebec, QC, Canada, 2022, pp. 241–254.
[Published Version] View | Files available | DOI | arXiv
 
[3]
2021 | Conference Paper | IST-REx-ID: 10432 | OA
G. Nadiradze, I. Markov, B. Chatterjee, V. Kungurtsev, and D.-A. Alistarh, “Elastic consistency: A practical consistency model for distributed stochastic gradient descent,” in Proceedings of the AAAI Conference on Artificial Intelligence, Virtual, 2021, vol. 35, no. 10, pp. 9037–9045.
[Published Version] View | Files available | Download Published Version (ext.) | arXiv
 
[2]
2021 | Conference Paper | IST-REx-ID: 10049 | OA
K. Klein et al., “Keep the dirt: Tainted TreeKEM, adaptively and actively secure continuous group key agreement,” in 2021 IEEE Symposium on Security and Privacy, San Francisco, CA, United States, 2021, pp. 268–284.
[Preprint] View | Files available | DOI | Download Preprint (ext.)
 
[1]
2020 | Conference Paper | IST-REx-ID: 15086 | OA
F. Faghri, I. Tabrizian, I. Markov, D.-A. Alistarh, D. Roy, and A. Ramezani-Kebrya, “Adaptive gradient quantization for data-parallel SGD,” in Advances in Neural Information Processing Systems, Vancouver, Canada, 2020, vol. 33.
[Preprint] View | Download Preprint (ext.) | arXiv
 
