{"oa":1,"date_created":"2024-09-22T22:01:46Z","month":"09","user_id":"2DF688A6-F248-11E8-B48F-1D18A9856A87","title":"SPADE: Sparsity-guided debugging for deep neural networks","page":"45955-45987","article_processing_charge":"No","main_file_link":[{"open_access":"1","url":"https://doi.org/10.48550/arXiv.2310.04519"}],"department":[{"_id":"DaAl"}],"publication_status":"published","alternative_title":["PMLR"],"quality_controlled":"1","intvolume":" 235","acknowledgement":"The authors would like to thank Stephen Casper and Tony Wang for their feedback on this work, and Eldar Kurtic for his advice on aspects of the project. This research was supported by the Scientific Service Units (SSU) of IST Austria through resources provided by Scientific Computing (SciComp). EI was supported in part by the FWF DK VGSCO, grant agreement number W1260-N35.","author":[{"first_name":"Arshia Soltani","last_name":"Moakhar","full_name":"Moakhar, Arshia Soltani"},{"orcid":"0000-0002-7778-3221","id":"f9a17499-f6e0-11ea-865d-fdf9a3f77117","first_name":"Eugenia B","full_name":"Iofinova, Eugenia B","last_name":"Iofinova"},{"id":"09a8f98d-ec99-11ea-ae11-c063a7b7fe5f","first_name":"Elias","last_name":"Frantar","full_name":"Frantar, Elias"},{"id":"4A899BFC-F248-11E8-B48F-1D18A9856A87","orcid":"0000-0003-3650-940X","first_name":"Dan-Adrian","full_name":"Alistarh, Dan-Adrian","last_name":"Alistarh"}],"external_id":{"arxiv":["2310.04519"]},"_id":"18121","publication_identifier":{"eissn":["2640-3498"]},"conference":{"name":"ICML: International Conference on Machine Learning","end_date":"2024-07-27","location":"Vienna, Austria","start_date":"2024-07-21"},"scopus_import":"1","acknowledged_ssus":[{"_id":"ScienComp"}],"related_material":{"link":[{"url":"https://github.com/IST-DASLab/SPADE","relation":"software"}]},"publication":"Proceedings of the 41st International Conference on Machine Learning","date_updated":"2024-10-01T09:47:10Z","corr_author":"1","publisher":"ML Research Press","status":"public","day":"01","year":"2024","volume":235,"date_published":"2024-09-01T00:00:00Z","oa_version":"Preprint","project":[{"name":"Vienna Graduate School on Computational Optimization","grant_number":" W1260-N35","_id":"9B9290DE-BA93-11EA-9121-9846C619BF3A"}],"citation":{"apa":"Moakhar, A. S., Iofinova, E. B., Frantar, E., & Alistarh, D.-A. (2024). SPADE: Sparsity-guided debugging for deep neural networks. In Proceedings of the 41st International Conference on Machine Learning (Vol. 235, pp. 45955–45987). Vienna, Austria: ML Research Press.","chicago":"Moakhar, Arshia Soltani, Eugenia B Iofinova, Elias Frantar, and Dan-Adrian Alistarh. “SPADE: Sparsity-Guided Debugging for Deep Neural Networks.” In Proceedings of the 41st International Conference on Machine Learning, 235:45955–87. ML Research Press, 2024.","ista":"Moakhar AS, Iofinova EB, Frantar E, Alistarh D-A. 2024. SPADE: Sparsity-guided debugging for deep neural networks. Proceedings of the 41st International Conference on Machine Learning. ICML: International Conference on Machine Learning, PMLR, vol. 235, 45955–45987.","ieee":"A. S. Moakhar, E. B. Iofinova, E. Frantar, and D.-A. Alistarh, “SPADE: Sparsity-guided debugging for deep neural networks,” in Proceedings of the 41st International Conference on Machine Learning, Vienna, Austria, 2024, vol. 235, pp. 45955–45987.","ama":"Moakhar AS, Iofinova EB, Frantar E, Alistarh D-A. SPADE: Sparsity-guided debugging for deep neural networks. In: Proceedings of the 41st International Conference on Machine Learning. Vol 235. ML Research Press; 2024:45955-45987.","mla":"Moakhar, Arshia Soltani, et al. “SPADE: Sparsity-Guided Debugging for Deep Neural Networks.” Proceedings of the 41st International Conference on Machine Learning, vol. 235, ML Research Press, 2024, pp. 45955–87.","short":"A.S. Moakhar, E.B. Iofinova, E. Frantar, D.-A. Alistarh, in:, Proceedings of the 41st International Conference on Machine Learning, ML Research Press, 2024, pp. 45955–45987."},"abstract":[{"lang":"eng","text":"It is known that sparsity can improve interpretability for deep neural networks. However, existing methods in the area either require networks that are pre-trained with sparsity constraints, or impose sparsity after the fact, altering the network’s general behavior. In this paper, we demonstrate, for the first time, that sparsity can instead be incorporated into the interpretation process itself, as a sample-specific preprocessing step. Unlike previous work, this approach, which we call SPADE, does not place constraints on the trained model and does not affect its behavior during inference on the sample. Given a trained model and a target sample, SPADE uses sample-targeted pruning to provide a \"trace\" of the network’s execution on the sample, reducing the network to the most important connections prior to computing an interpretation. We demonstrate that preprocessing with SPADE significantly increases the accuracy of image saliency maps across several interpretability methods. Additionally, SPADE improves the usefulness of neuron visualizations, aiding humans in reasoning about network behavior. Our code is available at https://github.com/IST-DASLab/SPADE."}],"type":"conference","language":[{"iso":"eng"}]}