Adversarial robustness via noise injection in smoothed models

Nemcovsky Y, Zheltonozhskii E, Baskin C, Chmiel B, Bronstein AM, Mendelson A. 2022. Adversarial robustness via noise injection in smoothed models. Applied Intelligence. 53(8), 9483–9498.

Download
No fulltext has been uploaded. References only!

Journal Article | Published | English

Scopus indexed
Author
Nemcovsky, Yaniv; Zheltonozhskii, Evgenii; Baskin, Chaim; Chmiel, Brian; Bronstein, Alex M. (ISTA); Mendelson, Avi
Abstract
Deep neural networks are known to be vulnerable to malicious perturbations. Current methods for improving adversarial robustness rely on either implicit or explicit regularization, with the latter usually based on adversarial training. Randomized smoothing, the averaging of the classifier outputs over a random distribution centered at the sample, has been shown to guarantee a classifier's performance subject to bounded perturbations of the input. In this work, we study the application of randomized smoothing to improve performance on unperturbed data and to increase robustness to adversarial attacks. We propose to combine smoothing with adversarial training and randomization approaches, and find that doing so significantly improves resilience compared to the baseline. We examine our method's performance on common white-box (FGSM, PGD) and black-box (transferable attack and NAttack) attacks on CIFAR-10 and CIFAR-100, and determine that for a low number of iterations, smoothing provides a significant performance boost that persists even for perturbations with a high attack norm ε. For example, under a PGD-10 attack on CIFAR-10 using Wide-ResNet28-4, we achieve 60.3% accuracy for ε∞ = 8/255 and 13.1% accuracy for ε∞ = 35/255, outperforming previous art by 3% and 6%, respectively. We achieve nearly twice the accuracy on ε∞ = 35/255 and even more so for perturbations with a higher ℓ∞ norm. A reference implementation of the proposed method is provided.
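The abstract describes randomized smoothing as averaging the classifier's outputs over a random distribution centered at the input sample. A minimal sketch of that idea is below, using Monte Carlo averaging of softmax outputs under Gaussian noise; the noise level `sigma`, sample count, and the toy classifier are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def smoothed_predict(logits_fn, x, sigma=0.25, n_samples=64, seed=0):
    """Monte Carlo estimate of a smoothed classifier's output.

    Averages softmax probabilities over Gaussian perturbations
    centered at the input x (the core idea of randomized smoothing).
    """
    rng = np.random.default_rng(seed)

    def softmax(z):
        z = z - z.max(axis=-1, keepdims=True)  # numerical stability
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    probs = np.zeros_like(softmax(logits_fn(x)))
    for _ in range(n_samples):
        noise = rng.normal(0.0, sigma, size=x.shape)
        probs += softmax(logits_fn(x + noise))
    return probs / n_samples

# Toy usage: a linear two-class classifier (purely illustrative).
W = np.array([[1.0, -1.0],
              [0.5,  2.0]])
linear_model = lambda x: x @ W
x = np.array([[0.3, 0.7]])
p = smoothed_predict(linear_model, x, sigma=0.1, n_samples=32)
```

The returned `p` is a probability vector over classes; the predicted label of the smoothed classifier is its argmax. The paper's method additionally combines this smoothing with adversarial training and noise injection, which this sketch does not attempt to reproduce.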
Publishing Year
2022
Date Published
2022-08-09
Journal Title
Applied Intelligence
Publisher
Springer Nature
Volume
53
Issue
8
Page
9483-9498
Cite this

Nemcovsky Y, Zheltonozhskii E, Baskin C, Chmiel B, Bronstein AM, Mendelson A. Adversarial robustness via noise injection in smoothed models. Applied Intelligence. 2022;53(8):9483-9498. doi:10.1007/s10489-022-03423-5
Nemcovsky, Y., Zheltonozhskii, E., Baskin, C., Chmiel, B., Bronstein, A. M., & Mendelson, A. (2022). Adversarial robustness via noise injection in smoothed models. Applied Intelligence. Springer Nature. https://doi.org/10.1007/s10489-022-03423-5
Nemcovsky, Yaniv, Evgenii Zheltonozhskii, Chaim Baskin, Brian Chmiel, Alex M. Bronstein, and Avi Mendelson. “Adversarial Robustness via Noise Injection in Smoothed Models.” Applied Intelligence. Springer Nature, 2022. https://doi.org/10.1007/s10489-022-03423-5.
Y. Nemcovsky, E. Zheltonozhskii, C. Baskin, B. Chmiel, A. M. Bronstein, and A. Mendelson, “Adversarial robustness via noise injection in smoothed models,” Applied Intelligence, vol. 53, no. 8. Springer Nature, pp. 9483–9498, 2022.
Nemcovsky Y, Zheltonozhskii E, Baskin C, Chmiel B, Bronstein AM, Mendelson A. 2022. Adversarial robustness via noise injection in smoothed models. Applied Intelligence. 53(8), 9483–9498.
Nemcovsky, Yaniv, et al. “Adversarial Robustness via Noise Injection in Smoothed Models.” Applied Intelligence, vol. 53, no. 8, Springer Nature, 2022, pp. 9483–98, doi:10.1007/s10489-022-03423-5.
