Distributed learning over unreliable networks
Yu C, Tang H, Renggli C, Kassing S, Singla A, Alistarh D-A, Zhang C, Liu J. 2019. Distributed learning over unreliable networks. 36th International Conference on Machine Learning, ICML 2019. ICML: International Conference on Machine Learning vol. 2019–June, 12481–12512.
Preprint
https://arxiv.org/abs/1810.07766
Conference Paper | Published | English | Scopus indexed
Author
Yu, Chen;
Tang, Hanlin;
Renggli, Cedric;
Kassing, Simon;
Singla, Ankit;
Alistarh, Dan-Adrian (ISTA);
Zhang, Ce;
Liu, Ji
Abstract
Most of today's distributed machine learning systems assume reliable networks: whenever two machines exchange information (e.g., gradients or models), the network should guarantee the delivery of the message. At the same time, recent work has demonstrated the impressive tolerance of machine learning algorithms to errors or noise arising from relaxed communication or synchronization. In this paper, we connect these two trends and consider the following question: can we design machine learning systems that are tolerant to network unreliability during training? With this motivation, we focus on a theoretical problem of independent interest: given a standard distributed parameter-server architecture, if every communication between a worker and a server has a non-zero probability p of being dropped, does there exist an algorithm that still converges, and at what speed? The technical contribution of this paper is a novel theoretical analysis proving that distributed learning over an unreliable network can achieve a convergence rate comparable to that of centralized or distributed learning over reliable networks. Further, we prove that the influence of the packet drop rate diminishes as the number of parameter servers grows. We map this theoretical result onto a real-world scenario, training deep neural networks over an unreliable network layer, and conduct network simulations to validate the system-level gains obtained by allowing the network to be unreliable.
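To make the setting from the abstract concrete, the sketch below simulates data-parallel SGD on a toy least-squares problem where the model is sharded across several parameter servers and every worker-to-server message is dropped independently with probability p. This is a minimal illustrative simulation, not the algorithm analyzed in the paper; all names and hyperparameters here (p_drop, n_servers, the synthetic objective) are invented for the example.

```python
import numpy as np

# Minimal sketch (NOT the authors' implementation): sharded parameter-server
# SGD where each worker->server message is dropped i.i.d. with prob. p_drop.

rng = np.random.default_rng(0)

n_workers, n_servers = 8, 4   # workers and parameter-server shards (illustrative)
p_drop = 0.1                  # per-message drop probability
dim, lr = 12, 0.1             # model size (divisible by n_servers) and step size

# Synthetic least-squares objective, data sharded across workers:
# f(x) = (1/n_workers) * sum_w 0.5 * ||A_w x - b_w||^2 / m
A = rng.normal(size=(n_workers, 32, dim))
b = rng.normal(size=(n_workers, 32))

x = np.zeros(dim)
# Each server owns one contiguous block of model coordinates.
shards = np.array_split(np.arange(dim), n_servers)

for step in range(300):
    # Each worker computes a full local gradient on its own data shard.
    grads = [A[w].T @ (A[w] @ x - b[w]) / b.shape[1] for w in range(n_workers)]
    for idx in shards:
        # A worker's gradient block reaches this server with prob. 1 - p_drop.
        arrived = [g[idx] for g in grads if rng.random() >= p_drop]
        if arrived:  # average whatever arrived; skip the shard if nothing did
            x[idx] -= lr * np.mean(arrived, axis=0)

loss = 0.5 * np.mean((np.einsum('wij,j->wi', A, x) - b) ** 2)
print(f"loss after training with p_drop={p_drop}: {loss:.4f}")
```

Because a dropped message only affects the shard it was addressed to, increasing n_servers shrinks the fraction of the model touched by any single loss, which loosely mirrors the intuition behind the paper's result that the influence of the drop rate diminishes as the number of parameter servers grows.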
Publishing Year
2019
Date Published
2019-06-01
Proceedings Title
36th International Conference on Machine Learning, ICML 2019
Publisher
IMLS
Volume
2019-June
Page
12481-12512
Conference
ICML: International Conference on Machine Learning
Conference Location
Long Beach, CA, United States
Conference Date
2019-06-10 – 2019-06-15
Cite this
Yu C, Tang H, Renggli C, et al. Distributed learning over unreliable networks. In: 36th International Conference on Machine Learning, ICML 2019. Vol 2019-June. IMLS; 2019:12481-12512.
Yu, C., Tang, H., Renggli, C., Kassing, S., Singla, A., Alistarh, D.-A., … Liu, J. (2019). Distributed learning over unreliable networks. In 36th International Conference on Machine Learning, ICML 2019 (Vol. 2019–June, pp. 12481–12512). Long Beach, CA, United States: IMLS.
Yu, Chen, Hanlin Tang, Cedric Renggli, Simon Kassing, Ankit Singla, Dan-Adrian Alistarh, Ce Zhang, and Ji Liu. “Distributed Learning over Unreliable Networks.” In 36th International Conference on Machine Learning, ICML 2019, 2019–June:12481–512. IMLS, 2019.
C. Yu et al., “Distributed learning over unreliable networks,” in 36th International Conference on Machine Learning, ICML 2019, Long Beach, CA, United States, 2019, vol. 2019–June, pp. 12481–12512.
Yu C, Tang H, Renggli C, Kassing S, Singla A, Alistarh D-A, Zhang C, Liu J. 2019. Distributed learning over unreliable networks. 36th International Conference on Machine Learning, ICML 2019. ICML: International Conference on Machine Learning vol. 2019–June, 12481–12512.
Yu, Chen, et al. “Distributed Learning over Unreliable Networks.” 36th International Conference on Machine Learning, ICML 2019, vol. 2019–June, IMLS, 2019, pp. 12481–512.
All files available under the following license(s):
Copyright Statement:
This Item is protected by copyright and/or related rights. [...]
Link(s) to Main File(s)
Access Level
Open Access
Sources
arXiv 1810.07766