{"oa":1,"date_created":"2019-11-26T14:19:11Z","page":"145-156","article_processing_charge":"No","doi":"10.5441/002/EDBT.2018.14","department":[{"_id":"DaAl"}],"publication_status":"published","user_id":"2DF688A6-F248-11E8-B48F-1D18A9856A87","month":"03","title":"Synchronous multi-GPU training for deep learning with low-precision communications: An empirical study","quality_controlled":"1","_id":"7116","publication_identifier":{"issn":["2367-2005"],"isbn":["9783893180783"]},"file_date_updated":"2020-07-14T12:47:49Z",
"author":[{"first_name":"Demjan","last_name":"Grubic","full_name":"Grubic, Demjan"},{"first_name":"Leo","full_name":"Tam, Leo","last_name":"Tam"},{"full_name":"Alistarh, Dan-Adrian","last_name":"Alistarh","first_name":"Dan-Adrian","orcid":"0000-0003-3650-940X","id":"4A899BFC-F248-11E8-B48F-1D18A9856A87"},{"last_name":"Zhang","full_name":"Zhang, Ce","first_name":"Ce"}],
"has_accepted_license":"1","conference":{"end_date":"2018-03-29","location":"Vienna, Austria","start_date":"2018-03-26","name":"EDBT: Conference on Extending Database Technology"},"scopus_import":1,"status":"public","day":"26","publication":"Proceedings of the 21st International Conference on Extending Database Technology","date_updated":"2024-10-09T20:59:05Z","publisher":"OpenProceedings","corr_author":"1",
"tmp":{"image":"/images/cc_by_nc_nd.png","short":"CC BY-NC-ND (4.0)","legal_code_url":"https://creativecommons.org/licenses/by-nc-nd/4.0/legalcode","name":"Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)"},
"file":[{"access_level":"open_access","creator":"dernst","date_updated":"2020-07-14T12:47:49Z","file_id":"7118","date_created":"2019-11-26T14:23:04Z","file_size":1603204,"relation":"main_file","content_type":"application/pdf","file_name":"2018_OpenProceedings_Grubic.pdf","checksum":"ec979b56abc71016d6e6adfdadbb4afe"}],
"date_published":"2018-03-26T00:00:00Z","oa_version":"Published Version","year":"2018","ddc":["000"],"type":"conference","language":[{"iso":"eng"}],"license":"https://creativecommons.org/licenses/by-nc-nd/4.0/",
"citation":{"apa":"Grubic, D., Tam, L., Alistarh, D.-A., & Zhang, C. (2018). Synchronous multi-GPU training for deep learning with low-precision communications: An empirical study. In Proceedings of the 21st International Conference on Extending Database Technology (pp. 145–156). Vienna, Austria: OpenProceedings. https://doi.org/10.5441/002/EDBT.2018.14","chicago":"Grubic, Demjan, Leo Tam, Dan-Adrian Alistarh, and Ce Zhang. “Synchronous Multi-GPU Training for Deep Learning with Low-Precision Communications: An Empirical Study.” In Proceedings of the 21st International Conference on Extending Database Technology, 145–56. OpenProceedings, 2018. https://doi.org/10.5441/002/EDBT.2018.14.","ieee":"D. Grubic, L. Tam, D.-A. Alistarh, and C. Zhang, “Synchronous multi-GPU training for deep learning with low-precision communications: An empirical study,” in Proceedings of the 21st International Conference on Extending Database Technology, Vienna, Austria, 2018, pp. 145–156.","ista":"Grubic D, Tam L, Alistarh D-A, Zhang C. 2018. Synchronous multi-GPU training for deep learning with low-precision communications: An empirical study. Proceedings of the 21st International Conference on Extending Database Technology. EDBT: Conference on Extending Database Technology, 145–156.","ama":"Grubic D, Tam L, Alistarh D-A, Zhang C. Synchronous multi-GPU training for deep learning with low-precision communications: An empirical study. In: Proceedings of the 21st International Conference on Extending Database Technology. OpenProceedings; 2018:145-156. doi:10.5441/002/EDBT.2018.14","short":"D. Grubic, L. Tam, D.-A. Alistarh, C. Zhang, in: Proceedings of the 21st International Conference on Extending Database Technology, OpenProceedings, 2018, pp. 145–156.","mla":"Grubic, Demjan, et al. “Synchronous Multi-GPU Training for Deep Learning with Low-Precision Communications: An Empirical Study.” Proceedings of the 21st International Conference on Extending Database Technology, OpenProceedings, 2018, pp. 145–56, doi:10.5441/002/EDBT.2018.14."},
"abstract":[{"lang":"eng","text":"Training deep learning models has received tremendous research interest recently. In particular, there has been intensive research on reducing the communication cost of training when using multiple computational devices, through reducing the precision of the underlying data representation. Naturally, such methods induce system trade-offs: lowering communication precision could decrease communication overheads and improve scalability; but, on the other hand, it can also reduce the accuracy of training. In this paper, we study this trade-off space, and ask: Can low-precision communication consistently improve the end-to-end performance of training modern neural networks, with no accuracy loss? From the performance point of view, the answer to this question may appear deceptively easy: compressing communication through low precision should help when the ratio between communication and computation is high. However, this answer is less straightforward when we try to generalize this principle across various neural network architectures (e.g., AlexNet vs. ResNet), number of GPUs (e.g., 2 vs. 8 GPUs), machine configurations (e.g., EC2 instances vs. NVIDIA DGX-1), communication primitives (e.g., MPI vs. NCCL), and even different GPU architectures (e.g., Kepler vs. Pascal). Currently, it is not clear how a realistic realization of all these factors maps to the speedup provided by low-precision communication. In this paper, we conduct an empirical study to answer this question and report the insights."}]}