{"title":"Streaming architecture for large-scale quantized neural networks on an FPGA-based dataflow platform","oa_version":"Preprint","date_published":"2018-08-06T00:00:00Z","publisher":"IEEE","user_id":"3E5EF7F0-F248-11E8-B48F-1D18A9856A87","publication":"2018 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)","language":[{"iso":"eng"}],"publication_status":"published","conference":{"location":"Vancouver, BC, Canada","name":"32nd IEEE International Parallel and Distributed Processing Symposium Workshops","end_date":"2018-05-25","start_date":"2018-05-21"},"main_file_link":[{"open_access":"1","url":"https://doi.org/10.48550/arXiv.1708.00052"}],"status":"public","day":"06","_id":"18273","citation":{"ama":"Baskin C, Liss N, Zheltonozhskii E, Bronstein AM, Mendelson A. Streaming architecture for large-scale quantized neural networks on an FPGA-based dataflow platform. In: 2018 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW). IEEE; 2018. doi:10.1109/ipdpsw.2018.00032","ieee":"C. Baskin, N. Liss, E. Zheltonozhskii, A. M. Bronstein, and A. Mendelson, “Streaming architecture for large-scale quantized neural networks on an FPGA-based dataflow platform,” in 2018 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), Vancouver, BC, Canada, 2018.","short":"C. Baskin, N. Liss, E. Zheltonozhskii, A.M. Bronstein, A. Mendelson, in:, 2018 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), IEEE, 2018.","ista":"Baskin C, Liss N, Zheltonozhskii E, Bronstein AM, Mendelson A. 2018. Streaming architecture for large-scale quantized neural networks on an FPGA-based dataflow platform. 2018 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW). 32nd IEEE International Parallel and Distributed Processing Symposium Workshops, 8425399.","chicago":"Baskin, Chaim, Natan Liss, Evgenii Zheltonozhskii, Alex M. Bronstein, and Avi Mendelson. “Streaming Architecture for Large-Scale Quantized Neural Networks on an FPGA-Based Dataflow Platform.” In 2018 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW). IEEE, 2018. https://doi.org/10.1109/ipdpsw.2018.00032.","apa":"Baskin, C., Liss, N., Zheltonozhskii, E., Bronstein, A. M., & Mendelson, A. (2018). Streaming architecture for large-scale quantized neural networks on an FPGA-based dataflow platform. In 2018 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW). Vancouver, BC, Canada: IEEE. https://doi.org/10.1109/ipdpsw.2018.00032","mla":"Baskin, Chaim, et al. “Streaming Architecture for Large-Scale Quantized Neural Networks on an FPGA-Based Dataflow Platform.” 2018 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), 8425399, IEEE, 2018, doi:10.1109/ipdpsw.2018.00032."},"abstract":[{"text":"Deep neural networks (DNNs) are used by different applications that are executed on a range of computer architectures, from IoT devices to supercomputers. The footprint of these networks is huge as well as their computational and communication needs. In order to ease the pressure on resources, research indicates that in many cases a low precision representation (1-2 bit per parameter) of weights and other parameters can achieve similar accuracy while requiring less resources. 
Using quantized values enables the use of FPGAs to run NNs, since FPGAs are well suited to such low-precision primitives; e.g., they provide efficient support for bitwise operations and can work with arbitrary-precision representations of numbers. This paper presents a new streaming architecture for running quantized neural networks (QNNs) on FPGAs. The proposed architecture scales out better than the alternatives, allowing us to take advantage of systems with multiple FPGAs. We also include support for skip connections, which are used in state-of-the-art NNs, and show that our architecture allows adding such connections almost for free. All this enabled us to implement an 18-layer ResNet for 224×224 image classification, achieving 57.5% top-1 accuracy. In addition, we implemented a full-sized quantized AlexNet. In contrast to previous works, we use 2-bit activations instead of 1-bit ones, which improves AlexNet's top-1 accuracy from 41.8% to 51.03% on ImageNet classification. Both AlexNet and ResNet can perform 1000-class real-time classification on an FPGA. Compared to the same NNs on the latest Nvidia GPUs, our ResNet-18 implementation consumes 5× less power while being 4× slower on ImageNet. Smaller NNs that fit on a single FPGA run faster than on GPUs for small (32×32) inputs, while consuming up to 20× less energy and power.","lang":"eng"}],"article_processing_charge":"No","quality_controlled":"1","doi":"10.1109/ipdpsw.2018.00032","oa":1,"extern":"1","arxiv":1,"article_number":"8425399","type":"conference","date_created":"2024-10-09T07:43:30Z","scopus_import":"1","date_updated":"2024-12-05T14:25:32Z","month":"08","year":"2018","external_id":{"arxiv":["1708.00052"]},"author":[{"full_name":"Baskin, Chaim","first_name":"Chaim","last_name":"Baskin"},{"first_name":"Natan","full_name":"Liss, Natan","last_name":"Liss"},{"last_name":"Zheltonozhskii","full_name":"Zheltonozhskii, Evgenii","first_name":"Evgenii"},{"orcid":"0000-0001-9699-8730","first_name":"Alexander","full_name":"Bronstein, Alexander","id":"58f3726e-7cba-11ef-ad8b-e6e8cb3904e6","last_name":"Bronstein"},{"last_name":"Mendelson","first_name":"Avi","full_name":"Mendelson, Avi"}]}