Byzantine stochastic gradient descent

Alistarh D-A, Allen-Zhu Z, Li J. 2018. Byzantine stochastic gradient descent. Advances in Neural Information Processing Systems. NeurIPS: Conference on Neural Information Processing Systems vol. 2018, 4613–4623.

OA https://arxiv.org/abs/1803.08917 [Published Version]
Conference Paper | Published | English

Scopus indexed
Author
Alistarh, Dan-Adrian (ISTA); Allen-Zhu, Zeyuan; Li, Jerry
Abstract
This paper studies the problem of distributed stochastic optimization in an adversarial setting where, out of m machines which allegedly compute stochastic gradients in every iteration, an α-fraction are Byzantine and may behave adversarially. Our main result is a variant of stochastic gradient descent (SGD) which finds ε-approximate minimizers of convex functions in T = Õ(1/(mε²) + α²/ε²) iterations. In contrast, traditional mini-batch SGD needs T = O(1/(mε²)) iterations, but cannot tolerate Byzantine failures. Further, we provide a lower bound showing that, up to logarithmic factors, our algorithm is information-theoretically optimal both in terms of sample complexity and time complexity.
Publishing Year
2018
Date Published
2018-12-01
Proceedings Title
Advances in Neural Information Processing Systems
Publisher
Neural Information Processing Systems Foundation
Volume
2018
Page
4613-4623
Conference
NeurIPS: Conference on Neural Information Processing Systems
Conference Location
Montreal, Canada
Conference Date
2018-12-02 – 2018-12-08
Cite this

Alistarh D-A, Allen-Zhu Z, Li J. Byzantine stochastic gradient descent. In: Advances in Neural Information Processing Systems. Vol 2018. Neural Information Processing Systems Foundation; 2018:4613-4623.
Alistarh, D.-A., Allen-Zhu, Z., & Li, J. (2018). Byzantine stochastic gradient descent. In Advances in Neural Information Processing Systems (Vol. 2018, pp. 4613–4623). Montreal, Canada: Neural Information Processing Systems Foundation.
Alistarh, Dan-Adrian, Zeyuan Allen-Zhu, and Jerry Li. “Byzantine Stochastic Gradient Descent.” In Advances in Neural Information Processing Systems, 2018:4613–23. Neural Information Processing Systems Foundation, 2018.
D.-A. Alistarh, Z. Allen-Zhu, and J. Li, “Byzantine stochastic gradient descent,” in Advances in Neural Information Processing Systems, Montreal, Canada, 2018, vol. 2018, pp. 4613–4623.
Alistarh D-A, Allen-Zhu Z, Li J. 2018. Byzantine stochastic gradient descent. Advances in Neural Information Processing Systems. NeurIPS: Conference on Neural Information Processing Systems vol. 2018, 4613–4623.
Alistarh, Dan-Adrian, et al. “Byzantine Stochastic Gradient Descent.” Advances in Neural Information Processing Systems, vol. 2018, Neural Information Processing Systems Foundation, 2018, pp. 4613–23.
All files available under the following license(s):
Copyright Statement:
This Item is protected by copyright and/or related rights. [...]

Link(s) to Main File(s)
Access Level
OA Open Access


Sources

arXiv 1803.08917
