Streaming architecture for large-scale quantized neural networks on an FPGA-based dataflow platform
Baskin C, Liss N, Zheltonozhskii E, Bronstein AM, Mendelson A. 2018. Streaming architecture for large-scale quantized neural networks on an FPGA-based dataflow platform. 2018 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW). 32nd IEEE International Parallel and Distributed Processing Symposium Workshops, 8425399.
https://doi.org/10.48550/arXiv.1708.00052
[Preprint]
Conference Paper
| Published
| English
Scopus indexed
Author
Baskin, Chaim;
Liss, Natan;
Zheltonozhskii, Evgenii;
Bronstein, Alex M. (ISTA);
Mendelson, Avi
Abstract
Deep neural networks (DNNs) are used by different applications that are executed on a range of computer architectures, from IoT devices to supercomputers. The footprint of these networks is huge, as are their computational and communication needs. To ease the pressure on resources, research indicates that in many cases a low-precision representation (1-2 bits per parameter) of weights and other parameters can achieve similar accuracy while requiring fewer resources. Using quantized values enables the use of FPGAs to run NNs, since FPGAs are well suited to these primitives; e.g., FPGAs provide efficient support for bitwise operations and can work with arbitrary-precision representations of numbers. This paper presents a new streaming architecture for running QNNs on FPGAs. The proposed architecture scales out better than alternatives, allowing us to take advantage of systems with multiple FPGAs. We also include support for skip connections, which are used in state-of-the-art NNs, and show that our architecture allows adding these connections almost for free. All this allowed us to implement an 18-layer ResNet for 224×224 image classification, achieving 57.5% top-1 accuracy. In addition, we implemented a full-sized quantized AlexNet. In contrast to previous works, we use 2-bit activations instead of 1-bit ones, which improves AlexNet's top-1 accuracy from 41.8% to 51.03% on ImageNet classification. Both AlexNet and ResNet can handle 1000-class real-time classification on an FPGA. Our implementation of ResNet-18 consumes 5× less power and is 4× slower on ImageNet when compared to the same NN on the latest Nvidia GPUs. Smaller NNs that fit on a single FPGA run faster than on GPUs on small (32×32) inputs, while consuming up to 20× less energy and power.
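To illustrate why bitwise operations make FPGAs a good fit for quantized networks, the following sketch (not from the paper; function names are hypothetical) shows how a dot product between two ±1-valued vectors, packed one bit per element, reduces to a single XNOR plus a popcount:

```python
def pack_signs(values):
    """Pack a list of +1/-1 values into an integer bitmask (1 bit per value)."""
    bits = 0
    for i, v in enumerate(values):
        if v > 0:
            bits |= 1 << i
    return bits

def binary_dot(a_bits, b_bits, n):
    """Dot product of two packed +/-1 vectors of length n via XNOR + popcount.

    Matching bit positions contribute +1, mismatching positions -1, so the
    result is 2 * (number of matches) - n.
    """
    matches = bin(~(a_bits ^ b_bits) & ((1 << n) - 1)).count("1")
    return 2 * matches - n

a = [1, -1, 1, 1]
b = [1, 1, -1, 1]
print(binary_dot(pack_signs(a), pack_signs(b), 4))  # same as sum(x * y for x, y in zip(a, b)) = 0
```

On an FPGA the XNOR and popcount map directly onto LUTs, which is the hardware advantage the abstract refers to; a 2-bit activation scheme, as used in the paper, generalizes this to a small number of such bit-plane operations per multiply-accumulate.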
Publishing Year
2018
Date Published
2018-08-06
Proceedings Title
2018 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)
Publisher
IEEE
Article Number
8425399
Conference
32nd IEEE International Parallel and Distributed Processing Symposium Workshops
Conference Location
Vancouver, BC, Canada
Conference Date
2018-05-21 – 2018-05-25
IST-REx-ID
Cite this
Baskin C, Liss N, Zheltonozhskii E, Bronstein AM, Mendelson A. Streaming architecture for large-scale quantized neural networks on an FPGA-based dataflow platform. In: 2018 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW). IEEE; 2018. doi:10.1109/ipdpsw.2018.00032
Baskin, C., Liss, N., Zheltonozhskii, E., Bronstein, A. M., & Mendelson, A. (2018). Streaming architecture for large-scale quantized neural networks on an FPGA-based dataflow platform. In 2018 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW). Vancouver, BC, Canada: IEEE. https://doi.org/10.1109/ipdpsw.2018.00032
Baskin, Chaim, Natan Liss, Evgenii Zheltonozhskii, Alex M. Bronstein, and Avi Mendelson. “Streaming Architecture for Large-Scale Quantized Neural Networks on an FPGA-Based Dataflow Platform.” In 2018 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW). IEEE, 2018. https://doi.org/10.1109/ipdpsw.2018.00032.
C. Baskin, N. Liss, E. Zheltonozhskii, A. M. Bronstein, and A. Mendelson, “Streaming architecture for large-scale quantized neural networks on an FPGA-based dataflow platform,” in 2018 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), Vancouver, BC, Canada, 2018.
Baskin C, Liss N, Zheltonozhskii E, Bronstein AM, Mendelson A. 2018. Streaming architecture for large-scale quantized neural networks on an FPGA-based dataflow platform. 2018 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW). 32nd IEEE International Parallel and Distributed Processing Symposium Workshops, 8425399.
Baskin, Chaim, et al. “Streaming Architecture for Large-Scale Quantized Neural Networks on an FPGA-Based Dataflow Platform.” 2018 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), 8425399, IEEE, 2018, doi:10.1109/ipdpsw.2018.00032.
All files available under the following license(s):
Copyright Statement:
This Item is protected by copyright and/or related rights. [...]
Link(s) to Main File(s)
Access Level
Open Access
Sources
arXiv 1708.00052