{"scopus_import":"1","publication":"2018 IEEE Conference on Decision and Control","year":"2019","oa_version":"None","citation":{"apa":"Khirirat, S., Johansson, M., & Alistarh, D.-A. (2019). Gradient compression for communication-limited convex optimization. In 2018 IEEE Conference on Decision and Control. Miami Beach, FL, United States: IEEE. https://doi.org/10.1109/cdc.2018.8619625","short":"S. Khirirat, M. Johansson, D.-A. Alistarh, in:, 2018 IEEE Conference on Decision and Control, IEEE, 2019.","chicago":"Khirirat, Sarit, Mikael Johansson, and Dan-Adrian Alistarh. “Gradient Compression for Communication-Limited Convex Optimization.” In 2018 IEEE Conference on Decision and Control. IEEE, 2019. https://doi.org/10.1109/cdc.2018.8619625.","ieee":"S. Khirirat, M. Johansson, and D.-A. Alistarh, “Gradient compression for communication-limited convex optimization,” in 2018 IEEE Conference on Decision and Control, Miami Beach, FL, United States, 2019.","mla":"Khirirat, Sarit, et al. “Gradient Compression for Communication-Limited Convex Optimization.” 2018 IEEE Conference on Decision and Control, 8619625, IEEE, 2019, doi:10.1109/cdc.2018.8619625.","ista":"Khirirat S, Johansson M, Alistarh D-A. 2019. Gradient compression for communication-limited convex optimization. 2018 IEEE Conference on Decision and Control. CDC: Conference on Decision and Control, 8619625.","ama":"Khirirat S, Johansson M, Alistarh D-A. Gradient compression for communication-limited convex optimization. In: 2018 IEEE Conference on Decision and Control. IEEE; 2019. doi:10.1109/cdc.2018.8619625"},"status":"public","type":"conference","department":[{"_id":"DaAl"}],"article_number":"8619625","publication_identifier":{"isbn":["9781538613955"],"issn":["0743-1546"]},"month":"01","title":"Gradient compression for communication-limited convex optimization","publisher":"IEEE","doi":"10.1109/cdc.2018.8619625","article_processing_charge":"No","day":"21","publication_status":"published","author":[{"last_name":"Khirirat","full_name":"Khirirat, Sarit","first_name":"Sarit"},{"last_name":"Johansson","first_name":"Mikael","full_name":"Johansson, Mikael"},{"first_name":"Dan-Adrian","full_name":"Alistarh, Dan-Adrian","last_name":"Alistarh","orcid":"0000-0003-3650-940X","id":"4A899BFC-F248-11E8-B48F-1D18A9856A87"}],"isi":1,"date_updated":"2023-09-06T11:14:55Z","_id":"7122","abstract":[{"lang":"eng","text":"Data-rich applications in machine-learning and control have motivated an intense research on large-scale optimization. Novel algorithms have been proposed and shown to have optimal convergence rates in terms of iteration counts. However, their practical performance is severely degraded by the cost of exchanging high-dimensional gradient vectors between computing nodes. Several gradient compression heuristics have recently been proposed to reduce communications, but few theoretical results exist that quantify how they impact algorithm convergence. This paper establishes and strengthens the convergence guarantees for gradient descent under a family of gradient compression techniques. For convex optimization problems, we derive admissible step sizes and quantify both the number of iterations and the number of bits that need to be exchanged to reach a target accuracy. Finally, we validate the performance of different gradient compression techniques in simulations. 
The numerical results highlight the properties of different gradient compression algorithms and confirm that fast convergence with limited information exchange is possible."}],"conference":{"location":"Miami Beach, FL, United States","name":"CDC: Conference on Decision and Control","start_date":"2018-12-17","end_date":"2018-12-19"},"quality_controlled":"1","user_id":"c635000d-4b10-11ee-a964-aac5a93f6ac1","date_created":"2019-11-26T15:07:49Z","language":[{"iso":"eng"}],"date_published":"2019-01-21T00:00:00Z","external_id":{"isi":["000458114800023"]}}