FPGA-accelerated dense linear machine learning: A precision-convergence trade-off
Kara K, Alistarh D-A, Alonso G, Mutlu O, Zhang C. 2017. FPGA-accelerated dense linear machine learning: A precision-convergence trade-off. FCCM: Field-Programmable Custom Computing Machines, 160–167.
Conference Paper | Published | English
Author
Kara, Kaan;
Alistarh, Dan-Adrian (ISTA);
Alonso, Gustavo;
Mutlu, Onur;
Zhang, Ce
Abstract
Stochastic gradient descent (SGD) is a commonly used algorithm for training linear machine learning models. Because it is built on dense vector algebra, SGD benefits from the inherent parallelism available on an FPGA. In this paper, we first present a single-precision floating-point SGD implementation on an FPGA that provides performance comparable to a 10-core CPU. We then adapt the design to process low-precision data, obtained from a novel compression scheme called stochastic quantization that is designed specifically for machine learning applications. We test both the full-precision and the low-precision designs on various regression and classification data sets. Using low-precision data, we achieve up to an order-of-magnitude training speedup over both full-precision SGD on the same FPGA and a state-of-the-art multi-core solution, while maintaining the quality of training. We open source the designs presented in this paper.
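The record does not spell out the quantization algorithm itself. As a rough illustration of the unbiased stochastic rounding idea behind schemes like the one the abstract names, the Python sketch below (the function name, the assumed [-1, 1] value range, and the level layout are illustrative assumptions, not details taken from the paper) rounds each value to one of its two neighboring quantization levels with probability proportional to proximity, so the quantized value equals the original in expectation.

    import numpy as np

    def stochastic_quantize(x, bits, rng=None):
        # Hypothetical sketch: map values in [-1, 1] onto 2**bits evenly
        # spaced levels, rounding up or down at random so that the
        # expected quantized value equals the input (unbiased).
        rng = np.random.default_rng() if rng is None else rng
        x = np.asarray(x, dtype=np.float64)
        levels = 2 ** bits - 1                 # number of intervals
        scaled = (x + 1.0) / 2.0 * levels      # map [-1, 1] -> [0, levels]
        low = np.floor(scaled)
        prob_up = scaled - low                 # probability of rounding up
        q = low + (rng.random(x.shape) < prob_up)
        return q / levels * 2.0 - 1.0          # map back to [-1, 1]

    # Example: quantize a normalized feature vector to 4 bits.
    print(stochastic_quantize(np.array([0.31, -0.72, 0.05]), bits=4))

Unbiasedness is what allows SGD to keep converging on quantized data: the quantization noise averages out across updates rather than introducing a systematic bias into the gradients.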
Publishing Year
2017
Date Published
2017-06-30
Publisher
IEEE
Page
160–167
Conference
FCCM: Field-Programmable Custom Computing Machines
Cite this
AMA
Kara K, Alistarh D-A, Alonso G, Mutlu O, Zhang C. FPGA-accelerated dense linear machine learning: A precision-convergence trade-off. In: IEEE; 2017:160-167. doi:10.1109/FCCM.2017.39
APA
Kara, K., Alistarh, D.-A., Alonso, G., Mutlu, O., & Zhang, C. (2017). FPGA-accelerated dense linear machine learning: A precision-convergence trade-off (pp. 160–167). Presented at the FCCM: Field-Programmable Custom Computing Machines, IEEE. https://doi.org/10.1109/FCCM.2017.39
Chicago
Kara, Kaan, Dan-Adrian Alistarh, Gustavo Alonso, Onur Mutlu, and Ce Zhang. “FPGA-Accelerated Dense Linear Machine Learning: A Precision-Convergence Trade-Off,” 160–67. IEEE, 2017. https://doi.org/10.1109/FCCM.2017.39.
IEEE
K. Kara, D.-A. Alistarh, G. Alonso, O. Mutlu, and C. Zhang, “FPGA-accelerated dense linear machine learning: A precision-convergence trade-off,” presented at the FCCM: Field-Programmable Custom Computing Machines, 2017, pp. 160–167.
ISTA
Kara K, Alistarh D-A, Alonso G, Mutlu O, Zhang C. 2017. FPGA-accelerated dense linear machine learning: A precision-convergence trade-off. FCCM: Field-Programmable Custom Computing Machines, 160–167.
MLA
Kara, Kaan, et al. FPGA-Accelerated Dense Linear Machine Learning: A Precision-Convergence Trade-Off. IEEE, 2017, pp. 160–67, doi:10.1109/FCCM.2017.39.