Feature map transform coding for energy-efficient CNN inference

Chmiel B, Baskin C, Zheltonozhskii E, Banner R, Yermolin Y, Karbachevsky A, Bronstein AM, Mendelson A. 2020. Feature map transform coding for energy-efficient CNN inference. 2020 International Joint Conference on Neural Networks (IJCNN). International Joint Conference on Neural Networks, 9206968.


Conference Paper | Published | English

Scopus indexed
Author
Chmiel, Brian; Baskin, Chaim; Zheltonozhskii, Evgenii; Banner, Ron; Yermolin, Yevgeny; Karbachevsky, Alex; Bronstein, Alex M.; Mendelson, Avi
Abstract
Convolutional neural networks (CNNs) achieve state-of-the-art accuracy in a variety of tasks in computer vision and beyond. One of the major obstacles hindering the ubiquitous use of CNNs for inference on low-power edge devices is their high computational complexity and memory bandwidth requirements. The latter often dominates the energy footprint on modern hardware. In this paper, we introduce a lossy transform coding approach, inspired by image and video compression, designed to reduce the memory bandwidth due to the storage of intermediate activation calculation results. Our method does not require fine-tuning the network weights and halves the data transfer volumes to the main memory by compressing feature maps, which are highly correlated, with variable-length coding. Our method outperforms previous approaches in terms of the number of bits per value, with minor accuracy degradation on ResNet-34 and MobileNetV2. We analyze the performance of our approach on a variety of CNN architectures and demonstrate that an FPGA implementation of ResNet-18 with our approach reduces the memory energy footprint by around 40% compared to the quantized network, with negligible impact on accuracy. When an accuracy degradation of up to 2% is allowed, a reduction of 60% is achieved. A reference implementation accompanies the paper.
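The abstract describes the general recipe of transform coding applied to activations: decorrelate the feature map, quantize, then variable-length code the resulting symbols. A minimal sketch of that pipeline is shown below. It is not the authors' method (their transform, quantizer, and entropy coder differ); the toy data, the PCA decorrelation across channels, and the use of empirical entropy as a stand-in for the variable-length code rate are all illustrative assumptions.

```python
import numpy as np

# Hedged sketch: decorrelate a feature map with a transform, quantize,
# and estimate coded size via empirical entropy (a proxy for the rate
# of a variable-length code). All names and parameters are illustrative.

rng = np.random.default_rng(0)

# Toy "feature map": 64 channels of 8x8 activations with strong
# inter-channel correlation (shared component + small per-channel noise).
shared = rng.standard_normal((8, 8))
fmap = np.stack([shared + 0.1 * rng.standard_normal((8, 8)) for _ in range(64)])

def quantize(x, step=0.25):
    """Uniform scalar quantization to integer symbols."""
    return np.round(x / step).astype(np.int32)

def entropy_bits_per_value(symbols):
    """Empirical entropy of the symbol stream, in bits per value."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Baseline: quantize the raw activations directly.
raw_bits = entropy_bits_per_value(quantize(fmap))

# Transform coding: decorrelate across channels with PCA (eigenvectors of
# the channel covariance), then quantize the transform coefficients.
flat = fmap.reshape(64, -1)                       # channels x spatial
flat = flat - flat.mean(axis=1, keepdims=True)    # center each channel
cov = flat @ flat.T / flat.shape[1]               # channel covariance
_, eigvecs = np.linalg.eigh(cov)
coeffs = eigvecs.T @ flat                         # decorrelated coefficients
coded_bits = entropy_bits_per_value(quantize(coeffs))

print(f"raw:       {raw_bits:.2f} bits/value")
print(f"transform: {coded_bits:.2f} bits/value")
```

Because the channels are highly correlated, almost all the energy concentrates in a few transform coefficients, so the quantized coefficient stream has much lower entropy than the raw activations — the same redundancy the paper exploits to cut memory traffic.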
Publishing Year
2020
Date Published
2020-09-28
Proceedings Title
2020 International Joint Conference on Neural Networks (IJCNN)
Publisher
IEEE
Article Number
9206968
Conference
International Joint Conference on Neural Networks
Conference Location
Glasgow, United Kingdom
Conference Date
2020-07-19 – 2020-07-24

Cite this

Chmiel B, Baskin C, Zheltonozhskii E, et al. Feature map transform coding for energy-efficient CNN inference. In: 2020 International Joint Conference on Neural Networks (IJCNN). IEEE; 2020. doi:10.1109/ijcnn48605.2020.9206968
Chmiel, B., Baskin, C., Zheltonozhskii, E., Banner, R., Yermolin, Y., Karbachevsky, A., … Mendelson, A. (2020). Feature map transform coding for energy-efficient CNN inference. In 2020 International Joint Conference on Neural Networks (IJCNN). Glasgow, United Kingdom: IEEE. https://doi.org/10.1109/ijcnn48605.2020.9206968
Chmiel, Brian, Chaim Baskin, Evgenii Zheltonozhskii, Ron Banner, Yevgeny Yermolin, Alex Karbachevsky, Alex M. Bronstein, and Avi Mendelson. “Feature Map Transform Coding for Energy-Efficient CNN Inference.” In 2020 International Joint Conference on Neural Networks (IJCNN). IEEE, 2020. https://doi.org/10.1109/ijcnn48605.2020.9206968.
B. Chmiel et al., “Feature map transform coding for energy-efficient CNN inference,” in 2020 International Joint Conference on Neural Networks (IJCNN), Glasgow, United Kingdom, 2020.
Chmiel B, Baskin C, Zheltonozhskii E, Banner R, Yermolin Y, Karbachevsky A, Bronstein AM, Mendelson A. 2020. Feature map transform coding for energy-efficient CNN inference. 2020 International Joint Conference on Neural Networks (IJCNN). International Joint Conference on Neural Networks, 9206968.
Chmiel, Brian, et al. “Feature Map Transform Coding for Energy-Efficient CNN Inference.” 2020 International Joint Conference on Neural Networks (IJCNN), 9206968, IEEE, 2020, doi:10.1109/ijcnn48605.2020.9206968.
All files available under the following license(s):
Copyright Statement:
This Item is protected by copyright and/or related rights. [...]

Link(s) to Main File(s)
Access Level
OA Open Access


Sources

arXiv 1905.10830
