New bounds for distributed mean estimation and variance reduction

Davies P, Gurunanthan V, Moshrefi N, Ashkboos S, Alistarh D-A. 2021. New bounds for distributed mean estimation and variance reduction. 9th International Conference on Learning Representations. ICLR: International Conference on Learning Representations.

Conference Paper | Published | English

Corresponding author has ISTA affiliation

Abstract
We consider the problem of distributed mean estimation (DME), in which $n$ machines are each given a local $d$-dimensional vector $x_v \in \mathbb{R}^d$, and must cooperate to estimate the mean of their inputs $\mu = \frac{1}{n}\sum_{v=1}^{n} x_v$, while minimizing total communication cost. DME is a fundamental construct in distributed machine learning, and there has been considerable work on variants of this problem, especially in the context of distributed variance reduction for stochastic gradients in parallel SGD. Previous work typically assumes an upper bound on the norm of the input vectors, and achieves an error bound in terms of this norm. However, in many real applications, the input vectors are concentrated around the correct output $\mu$, but $\mu$ itself has large norm. In such cases, previous output error bounds perform poorly. In this paper, we show that output error bounds need not depend on input norm. We provide a method of quantization which allows distributed mean estimation to be performed with solution quality dependent only on the distance between inputs, not on input norm, and show an analogous result for distributed variance reduction. The technique is based on a new connection with lattice theory. We also provide lower bounds showing that the communication-to-error trade-off of our algorithms is asymptotically optimal. As the lattices achieving optimal bounds under the $\ell_2$-norm can be computationally impractical, we also present an extension which leverages easy-to-use cubic lattices, and is loose only up to a logarithmic factor in $d$. We show experimentally that our method yields practical improvements for common applications, relative to prior approaches.
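To make the cubic-lattice idea from the abstract concrete, here is a minimal toy sketch of norm-independent quantization for mean estimation: each machine rounds its input to a shared, randomly dithered cubic lattice, so the per-coordinate error is bounded by the lattice spacing rather than by the input norm. This is an illustration of the general principle only, not the paper's actual algorithm; the spacing `eps`, the shared dither, and the function names are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize_to_cubic_lattice(x, eps, dither):
    """Round x to the cubic lattice eps * Z^d, shifted by a shared dither.

    With a dither common to all machines, the rounding error per
    coordinate is at most eps/2, regardless of ||x||.
    """
    return eps * np.round((x - dither) / eps) + dither

# n machines hold inputs concentrated around a common mean mu,
# where mu itself has large norm (the regime the paper targets).
n, d, eps = 8, 4, 0.1
mu = 1e6 * np.ones(d)                    # large-norm true mean
inputs = [mu + rng.normal(scale=0.05, size=d) for _ in range(n)]

dither = rng.uniform(0.0, eps, size=d)   # shared random shift
quantized = [quantize_to_cubic_lattice(x, eps, dither) for x in inputs]

estimate = np.mean(quantized, axis=0)
true_mean = np.mean(inputs, axis=0)
print("error:", np.linalg.norm(estimate - true_mean))  # O(eps), not O(||mu||)
```

In the paper's actual scheme, machines communicate compact lattice-point identifiers rather than full vectors, and lattices with better packing properties than the cubic lattice yield the optimal communication-to-error trade-off; the cubic lattice shown here is the easy-to-use variant, loose up to a logarithmic factor in $d$.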
Publishing Year
2021
Date Published
2021-05-01
Proceedings Title
9th International Conference on Learning Representations
Conference
ICLR: International Conference on Learning Representations
Conference Location
Virtual
Conference Date
2021-05-03 – 2021-05-07
IST-REx-ID

Cite this

Davies P, Gurunanthan V, Moshrefi N, Ashkboos S, Alistarh D-A. New bounds for distributed mean estimation and variance reduction. In: 9th International Conference on Learning Representations; 2021.
Davies, P., Gurunanthan, V., Moshrefi, N., Ashkboos, S., & Alistarh, D.-A. (2021). New bounds for distributed mean estimation and variance reduction. In 9th International Conference on Learning Representations. Virtual.
Davies, Peter, Vijaykrishna Gurunanthan, Niusha Moshrefi, Saleh Ashkboos, and Dan-Adrian Alistarh. “New Bounds for Distributed Mean Estimation and Variance Reduction.” In 9th International Conference on Learning Representations, 2021.
P. Davies, V. Gurunanthan, N. Moshrefi, S. Ashkboos, and D.-A. Alistarh, “New bounds for distributed mean estimation and variance reduction,” in 9th International Conference on Learning Representations, Virtual, 2021.
Davies P, Gurunanthan V, Moshrefi N, Ashkboos S, Alistarh D-A. 2021. New bounds for distributed mean estimation and variance reduction. 9th International Conference on Learning Representations. ICLR: International Conference on Learning Representations.
Davies, Peter, et al. “New Bounds for Distributed Mean Estimation and Variance Reduction.” 9th International Conference on Learning Representations, 2021.
All files available under the following license(s):
Copyright Statement:
This Item is protected by copyright and/or related rights. [...]

Link(s) to Main File(s)
Access Level
Open Access

Sources

arXiv 2002.09268
