7 Publications

[7]
2024 | Published | Thesis | IST-REx-ID: 17490 | OA
Markov, Ilia. “Communication-Efficient Distributed Training of Deep Neural Networks: An Algorithms and Systems Perspective.” Institute of Science and Technology Austria, 2024. https://doi.org/10.15479/at:ista:17490.
 
[6]
2024 | Published | Conference Paper | IST-REx-ID: 17456 | OA
Markov, Ilia, Kaveh Alimohammadi, Elias Frantar, and Dan-Adrian Alistarh. “L-GreCo: Layerwise-Adaptive Gradient Compression for Efficient Data-Parallel Deep Learning.” In Proceedings of Machine Learning and Systems, edited by P. Gibbons, G. Pekhimenko, and C. De Sa, Vol. 6. Association for Computing Machinery, 2024.
 
[5]
2023 | Published | Conference Paper | IST-REx-ID: 14461 | OA
Markov, Ilia, Adrian Vladu, Qi Guo, and Dan-Adrian Alistarh. “Quantized Distributed Training of Large Models with Convergence Guarantees.” In Proceedings of the 40th International Conference on Machine Learning, 202:24020–44. ML Research Press, 2023.
 
[4]
2022 | Published | Conference Paper | IST-REx-ID: 12780 | OA
Markov, Ilia, Hamidreza Ramezanikebrya, and Dan-Adrian Alistarh. “CGX: Adaptive System Support for Communication-Efficient Deep Learning.” In Proceedings of the 23rd ACM/IFIP International Middleware Conference, 241–54. Association for Computing Machinery, 2022. https://doi.org/10.1145/3528535.3565248.
 
[3]
2021 | Published | Conference Paper | IST-REx-ID: 10432 | OA
Nadiradze, Giorgi, Ilia Markov, Bapi Chatterjee, Vyacheslav Kungurtsev, and Dan-Adrian Alistarh. “Elastic Consistency: A Practical Consistency Model for Distributed Stochastic Gradient Descent.” In Proceedings of the AAAI Conference on Artificial Intelligence, 35:9037–45, 2021.
 
[2]
2021 | Published | Conference Paper | IST-REx-ID: 10049 | OA
Klein, Karen, Guillermo Pascual Perez, Michael Walter, Chethan Kamath Hosdurg, Margarita Capretto, Miguel Cueto Noval, Ilia Markov, Michelle X. Yeo, Joel F. Alwen, and Krzysztof Z. Pietrzak. “Keep the Dirt: Tainted TreeKEM, Adaptively and Actively Secure Continuous Group Key Agreement.” In 2021 IEEE Symposium on Security and Privacy, 268–84. IEEE, 2021. https://doi.org/10.1109/sp40001.2021.00035.
 
[1]
2020 | Published | Conference Paper | IST-REx-ID: 15086 | OA
Faghri, Fartash, Iman Tabrizian, Ilia Markov, Dan-Adrian Alistarh, Daniel Roy, and Ali Ramezani-Kebrya. “Adaptive Gradient Quantization for Data-Parallel SGD.” In Advances in Neural Information Processing Systems, Vol. 33. Neural Information Processing Systems Foundation, 2020.
 

Citation Style: Chicago