{"conference":{"end_date":"2021-05-07","name":"ICLR: International Conference on Learning Representations","start_date":"2021-05-03","location":"Virtual"},
"main_file_link":[{"open_access":"1","url":"https://openreview.net/pdf?id=t86MwoUCCNe"}],
"date_updated":"2023-02-23T14:00:40Z",
"quality_controlled":"1",
"project":[{"_id":"260C2330-B435-11E9-9278-68D0E5697425","grant_number":"754411","name":"ISTplus - Postdoctoral Fellowships","call_identifier":"H2020"}],
"publication":"9th International Conference on Learning Representations",
"_id":"9543",
"abstract":[{"text":"We consider the problem of distributed mean estimation (DME), in which n machines are each given a local d-dimensional vector x_v ∈ R^d, and must cooperate to estimate the mean of their inputs μ = (1/n) ∑_{v=1}^{n} x_v, while minimizing total communication cost. DME is a fundamental construct in distributed machine learning, and there has been considerable work on variants of this problem, especially in the context of distributed variance reduction for stochastic gradients in parallel SGD. Previous work typically assumes an upper bound on the norm of the input vectors, and achieves an error bound in terms of this norm. However, in many real applications, the input vectors are concentrated around the correct output μ, but μ itself has large norm. In such cases, previous output error bounds perform poorly. In this paper, we show that output error bounds need not depend on input norm. We provide a method of quantization which allows distributed mean estimation to be performed with solution quality dependent only on the distance between inputs, not on input norm, and show an analogous result for distributed variance reduction. The technique is based on a new connection with lattice theory. We also provide lower bounds showing that the communication-to-error trade-off of our algorithms is asymptotically optimal. As the lattices achieving optimal bounds under the ℓ2-norm can be computationally impractical, we also present an extension which leverages easy-to-use cubic lattices, and is loose only up to a logarithmic factor in d. We show experimentally that our method yields practical improvements for common applications, relative to prior approaches.","lang":"eng"}],
"citation":{"mla":"Davies, Peter, et al. “New Bounds for Distributed Mean Estimation and Variance Reduction.” 9th International Conference on Learning Representations, 2021.","apa":"Davies, P., Gurunanthan, V., Moshrefi, N., Ashkboos, S., & Alistarh, D.-A. (2021). New bounds for distributed mean estimation and variance reduction. In 9th International Conference on Learning Representations. Virtual.","ieee":"P. Davies, V. Gurunanthan, N. Moshrefi, S. Ashkboos, and D.-A. Alistarh, “New bounds for distributed mean estimation and variance reduction,” in 9th International Conference on Learning Representations, Virtual, 2021.","chicago":"Davies, Peter, Vijaykrishna Gurunanthan, Niusha Moshrefi, Saleh Ashkboos, and Dan-Adrian Alistarh. “New Bounds for Distributed Mean Estimation and Variance Reduction.” In 9th International Conference on Learning Representations, 2021.","ama":"Davies P, Gurunanthan V, Moshrefi N, Ashkboos S, Alistarh D-A. New bounds for distributed mean estimation and variance reduction. In: 9th International Conference on Learning Representations; 2021.","short":"P. Davies, V. Gurunanthan, N. Moshrefi, S. Ashkboos, D.-A. Alistarh, in: 9th International Conference on Learning Representations, 2021.","ista":"Davies P, Gurunanthan V, Moshrefi N, Ashkboos S, Alistarh D-A. 2021. New bounds for distributed mean estimation and variance reduction. 9th International Conference on Learning Representations. ICLR: International Conference on Learning Representations."},
"author":[{"id":"11396234-BB50-11E9-B24C-90FCE5697425","orcid":"0000-0002-5646-9524","first_name":"Peter","last_name":"Davies","full_name":"Davies, Peter"},{"first_name":"Vijaykrishna","last_name":"Gurunanthan","full_name":"Gurunanthan, Vijaykrishna"},{"first_name":"Niusha","id":"4db776ff-ce15-11eb-96e3-bc2b90b01c16","full_name":"Moshrefi, Niusha","last_name":"Moshrefi"},{"full_name":"Ashkboos, Saleh","last_name":"Ashkboos","first_name":"Saleh","id":"0D0A9058-257B-11EA-A937-9341C3D8BC8A"},{"full_name":"Alistarh, Dan-Adrian","last_name":"Alistarh","first_name":"Dan-Adrian","orcid":"0000-0003-3650-940X","id":"4A899BFC-F248-11E8-B48F-1D18A9856A87"}],
"department":[{"_id":"DaAl"}],
"publication_status":"published",
"title":"New bounds for distributed mean estimation and variance reduction",
"status":"public",
"ec_funded":1,
"external_id":{"arxiv":["2002.09268"]},
"month":"05",
"day":"01",
"year":"2021",
"type":"conference",
"oa":1,
"article_processing_charge":"No",
"user_id":"D865714E-FA4E-11E9-B85B-F5C5E5697425",
"date_published":"2021-05-01T00:00:00Z",
"date_created":"2021-06-10T19:46:08Z",
"oa_version":"Published Version",
"language":[{"iso":"eng"}]}