{"page":"24020-24044","ec_funded":1,"intvolume":" 202","language":[{"iso":"eng"}],"date_created":"2023-10-29T23:01:17Z","project":[{"grant_number":"805223","_id":"268A44D6-B435-11E9-9278-68D0E5697425","call_identifier":"H2020","name":"Elastic Coordination for Scalable Machine Learning"}],"publisher":"ML Research Press","date_updated":"2024-10-09T21:07:10Z","_id":"14461","author":[{"id":"D0CF4148-C985-11E9-8066-0BDEE5697425","first_name":"Ilia","full_name":"Markov, Ilia","last_name":"Markov"},{"last_name":"Vladu","first_name":"Adrian","full_name":"Vladu, Adrian"},{"first_name":"Qi","full_name":"Guo, Qi","last_name":"Guo"},{"id":"4A899BFC-F248-11E8-B48F-1D18A9856A87","first_name":"Dan-Adrian","orcid":"0000-0003-3650-940X","full_name":"Alistarh, Dan-Adrian","last_name":"Alistarh"}],"volume":202,"quality_controlled":"1","user_id":"2DF688A6-F248-11E8-B48F-1D18A9856A87","oa":1,"oa_version":"Preprint","publication":"Proceedings of the 40th International Conference on Machine Learning","citation":{"short":"I. Markov, A. Vladu, Q. Guo, D.-A. Alistarh, in:, Proceedings of the 40th International Conference on Machine Learning, ML Research Press, 2023, pp. 24020–24044.","mla":"Markov, Ilia, et al. “Quantized Distributed Training of Large Models with Convergence Guarantees.” Proceedings of the 40th International Conference on Machine Learning, vol. 202, ML Research Press, 2023, pp. 24020–44.","chicago":"Markov, Ilia, Adrian Vladu, Qi Guo, and Dan-Adrian Alistarh. “Quantized Distributed Training of Large Models with Convergence Guarantees.” In Proceedings of the 40th International Conference on Machine Learning, 202:24020–44. ML Research Press, 2023.","ieee":"I. Markov, A. Vladu, Q. Guo, and D.-A. Alistarh, “Quantized distributed training of large models with convergence guarantees,” in Proceedings of the 40th International Conference on Machine Learning, Honolulu, Hawaii, HI, United States, 2023, vol. 202, pp. 24020–24044.","ama":"Markov I, Vladu A, Guo Q, Alistarh D-A. Quantized distributed training of large models with convergence guarantees. In: Proceedings of the 40th International Conference on Machine Learning. Vol 202. ML Research Press; 2023:24020-24044.","apa":"Markov, I., Vladu, A., Guo, Q., & Alistarh, D.-A. (2023). Quantized distributed training of large models with convergence guarantees. In Proceedings of the 40th International Conference on Machine Learning (Vol. 202, pp. 24020–24044). Honolulu, Hawaii, HI, United States: ML Research Press.","ista":"Markov I, Vladu A, Guo Q, Alistarh D-A. 2023. Quantized distributed training of large models with convergence guarantees. Proceedings of the 40th International Conference on Machine Learning. ICML: International Conference on Machine Learning, PMLR, vol. 202, 24020–24044."},"year":"2023","alternative_title":["PMLR"],"department":[{"_id":"DaAl"}],"abstract":[{"lang":"eng","text":"Communication-reduction techniques are a popular way to improve scalability in data-parallel training of deep neural networks (DNNs). The recent emergence of large language models such as GPT has created the need for new approaches to exploit data-parallelism. Among these, fully-sharded data parallel (FSDP) training is highly popular, yet it still encounters scalability bottlenecks. One reason is that applying compression techniques to FSDP is challenging: as the vast majority of the communication involves the model’s weights, direct compression alters convergence and leads to accuracy loss. 
We present QSDP, a variant of FSDP which supports both gradient and weight quantization with theoretical guarantees, is simple to implement and has essentially no overheads. To derive QSDP we prove that a natural modification of SGD achieves convergence even when we only maintain quantized weights, and thus the domain over which we train consists of quantized points and is, therefore, highly non-convex. We validate this approach by training GPT-family models with up to 1.3 billion parameters on a multi-node cluster. Experiments show that QSDP preserves model accuracy, while completely removing the communication bottlenecks of FSDP, providing end-to-end speedups of up to 2.2x."}],"main_file_link":[{"open_access":"1","url":"https://doi.org/10.48550/arXiv.2302.02390"}],"acknowledgement":"The authors gratefully acknowledge funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 805223 ScaleML), as well as experimental support from the IST Austria IT department, in particular Stefano Elefante, Andrei Hornoiu, and Alois Schloegl. AV acknowledges the support of the French Agence Nationale de la Recherche (ANR), under grant ANR-21-CE48-0016 (project COMCOPT), the support of Fondation Hadamard with a PRMO grant, and the support of CNRS with a CoopIntEER IEA grant (project ALFRED).","status":"public","acknowledged_ssus":[{"_id":"ScienComp"}],"publication_status":"published","publication_identifier":{"eissn":["2640-3498"]},"corr_author":"1","related_material":{"record":[{"id":"17490","status":"public","relation":"dissertation_contains"}]},"article_processing_charge":"No","day":"30","month":"07","external_id":{"arxiv":["2302.02390"]},"type":"conference","title":"Quantized distributed training of large models with convergence guarantees","conference":{"end_date":"2023-07-29","start_date":"2023-07-23","location":"Honolulu, Hawaii, HI, United States","name":"ICML: International Conference on Machine Learning"},"date_published":"2023-07-30T00:00:00Z","scopus_import":"1"}