How to probe: Simple yet effective techniques for improving post-hoc explanations
Gairola S, Böhle M, Locatello F, Schiele B. 2025. How to probe: Simple yet effective techniques for improving post-hoc explanations. 13th International Conference on Learning Representations. ICLR: International Conference on Learning Representations.
Conference Paper
| Published
| English
Author
Corresponding author has ISTA affiliation
Abstract
Post-hoc importance attribution methods are a popular tool for “explaining” Deep Neural Networks (DNNs) and are inherently based on the assumption that the explanations can be applied independently of how the models were trained. By contrast, in this work we bring forward empirical evidence that challenges this very notion. Surprisingly, we discover a strong dependency on the training details of a pre-trained model’s classification layer (<10% of model parameters) and demonstrate that they play a crucial role, much more than the pre-training scheme itself. This is of high practical relevance: (1) as techniques for pre-training models are becoming increasingly diverse, understanding the interplay between these techniques and attribution methods is critical; (2) it sheds light on an important yet overlooked assumption of post-hoc attribution methods, which can drastically impact model explanations and how they are eventually interpreted. Building on this finding, we also present simple yet effective adjustments to the classification layer that can significantly enhance the quality of model explanations. We validate our findings across several visual pre-training frameworks (fully supervised, self-supervised, and contrastive vision–language training) and analyse how they impact explanations for a wide range of attribution methods on a diverse set of evaluation metrics.
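To make the setting concrete, the following is a minimal sketch (not from the paper; all names and values are illustrative) of the two ingredients the abstract refers to: a linear classification layer ("probe") applied to frozen backbone features, and a simple gradient-based post-hoc attribution. For a linear probe the gradient of a class logit with respect to the features is just the corresponding weight row, so input×gradient attribution reduces to an elementwise product:

```python
import numpy as np

# Hypothetical example: frozen features, a linear probe, and
# input-x-gradient attribution. Dimensions and weights are arbitrary.
rng = np.random.default_rng(0)

# Frozen "backbone" features for one input (e.g. pooled CNN features).
features = rng.normal(size=8)

# Linear classification layer ("probe"): logits = W @ features + b.
W = rng.normal(size=(3, 8))
b = np.zeros(3)

logits = W @ features + b
pred = int(np.argmax(logits))

# For a linear layer, d(logit_c)/d(feature_i) = W[c, i], so the
# input-x-gradient attribution for the predicted class is:
attribution = features * W[pred]

# Sanity check: the attributions sum to the bias-free part of the
# predicted logit (the "completeness" property for linear probes).
assert np.isclose(attribution.sum(), logits[pred] - b[pred])
```

Because the attribution depends directly on `W`, any change in how the classification layer is trained changes the explanation, even with an identical frozen backbone, which is the dependency the paper investigates.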
Publishing Year
2025
Date Published
2025-01-22
Proceedings Title
13th International Conference on Learning Representations
Publisher
ICLR
Acknowledgement
We sincerely thank Sukrut Rao and Yue Fan for their valuable feedback on the paper and insightful discussions throughout the project. Additionally, we appreciate Sukrut’s help with some LaTeX sorcery. This work was partially supported by the ELSA Mobility Program as part of the ELLIS exchange program to the Institute of Science and Technology Austria (ISTA), where a portion of this research was conducted.
Conference
ICLR: International Conference on Learning Representations
Conference Location
Singapore
Conference Date
2025-04-24 – 2025-04-28
IST-REx-ID
Cite this
Gairola S, Böhle M, Locatello F, Schiele B. How to probe: Simple yet effective techniques for improving post-hoc explanations. In: 13th International Conference on Learning Representations. ICLR; 2025.
Gairola, S., Böhle, M., Locatello, F., & Schiele, B. (2025). How to probe: Simple yet effective techniques for improving post-hoc explanations. In 13th International Conference on Learning Representations. Singapore: ICLR.
Gairola, Siddhartha, Moritz Böhle, Francesco Locatello, and Bernt Schiele. “How to Probe: Simple yet Effective Techniques for Improving Post-Hoc Explanations.” In 13th International Conference on Learning Representations. ICLR, 2025.
S. Gairola, M. Böhle, F. Locatello, and B. Schiele, “How to probe: Simple yet effective techniques for improving post-hoc explanations,” in 13th International Conference on Learning Representations, Singapore, 2025.
Gairola S, Böhle M, Locatello F, Schiele B. 2025. How to probe: Simple yet effective techniques for improving post-hoc explanations. 13th International Conference on Learning Representations. ICLR: International Conference on Learning Representations.
Gairola, Siddhartha, et al. “How to Probe: Simple yet Effective Techniques for Improving Post-Hoc Explanations.” 13th International Conference on Learning Representations, ICLR, 2025.
All files available under the following license(s):
Creative Commons Attribution 4.0 International Public License (CC-BY 4.0):
Main File(s)
File Name
2025_ICLR_Gairola.pdf
24.39 MB
Access Level
Open Access
Date Uploaded
2026-02-09
MD5 Checksum
6c8dfe4291c41d5a2c2fd838105e10b9
Sources
arXiv 2503.00641
