How spurious features are memorized: Precise analysis for random and NTK features
Bombari S, Mondelli M. 2024. How spurious features are memorized: Precise analysis for random and NTK features. 41st International Conference on Machine Learning. ICML: International Conference on Machine Learning, PMLR, vol. 235, 4267–4299.
Conference Paper
| Published
| English
Scopus indexed
Corresponding author has ISTA affiliation
Department
Series Title
PMLR
Abstract
Deep learning models are known to overfit and memorize spurious features in the training dataset. While numerous empirical studies have aimed at understanding this phenomenon, a rigorous theoretical framework to quantify it is still missing. In this paper, we consider spurious features that are uncorrelated with the learning task, and we provide a precise characterization of how they are memorized via two separate terms: (i) the stability of the model with respect to individual training samples, and (ii) the feature alignment between the spurious pattern and the full sample. While the first term is well established in learning theory and connected to the generalization error in classical work, the second one is, to the best of our knowledge, novel. Our key technical result gives a precise characterization of the feature alignment for the two prototypical settings of random features (RF) and neural tangent kernel (NTK) regression. We prove that the memorization of spurious features weakens as the generalization capability increases and, through the analysis of the feature alignment, we unveil the role of the model and of its activation function. Numerical experiments show the predictive power of our theory on standard datasets (MNIST, CIFAR-10).
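The setting described in the abstract can be illustrated (not reproduced) with a toy random-features experiment: each training sample carries a fixed spurious pattern uncorrelated with the label, and one probes how strongly the fitted model responds to that pattern alone. All dimensions, constants, and variable names below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, k = 50, 100, 400  # input dim, training samples, random features

# A signal direction determines the label; a spurious pattern is
# constructed orthogonal to it, hence uncorrelated with the task.
signal = rng.standard_normal(d)
signal /= np.linalg.norm(signal)
spurious = rng.standard_normal(d)
spurious -= (spurious @ signal) * signal
spurious /= np.linalg.norm(spurious)

X = rng.standard_normal((n, d))
y = np.sign(X @ signal)      # labels depend only on the signal
X = X + 0.5 * spurious       # every sample also carries the spurious pattern

# Random features (RF) regression: phi(x) = relu(W x), ridge fit.
W = rng.standard_normal((k, d)) / np.sqrt(d)
Phi = np.maximum(X @ W.T, 0.0)
theta = np.linalg.solve(Phi.T @ Phi + 1e-3 * np.eye(k), Phi.T @ y)

# Memorization probe: the model's response to the spurious pattern
# alone, which carries no label information by construction.
resp = np.maximum(spurious[None, :] @ W.T, 0.0) @ theta
print(float(resp[0]))
```

A nonzero response here is the empirical counterpart of the memorization the paper quantifies; the paper's precise characterization decomposes it into the stability and feature-alignment terms rather than measuring it numerically as in this sketch.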
Publishing Year
2024
Date Published
2024-07-30
Proceedings Title
41st International Conference on Machine Learning
Publisher
ML Research Press
Acknowledgement
The authors were partially supported by the 2019 Lopez-Loreta prize, and they would like to thank (in alphabetical order) Grigorios Chrysos, Simone Maria Giancola, Mahyar Jafari Nodeh, Christoph Lampert, Marco Miani, GuanWen Qiu, and Peter Súkeník for helpful discussions.
Volume
235
Page
4267-4299
Conference
ICML: International Conference on Machine Learning
Conference Location
Vienna, Austria
Conference Date
2024-07-21 – 2024-07-27
eISSN
IST-REx-ID
Cite this
Bombari S, Mondelli M. How spurious features are memorized: Precise analysis for random and NTK features. In: 41st International Conference on Machine Learning. Vol 235. ML Research Press; 2024:4267-4299.
Bombari, S., & Mondelli, M. (2024). How spurious features are memorized: Precise analysis for random and NTK features. In 41st International Conference on Machine Learning (Vol. 235, pp. 4267–4299). Vienna, Austria: ML Research Press.
Bombari, Simone, and Marco Mondelli. “How Spurious Features Are Memorized: Precise Analysis for Random and NTK Features.” In 41st International Conference on Machine Learning, 235:4267–99. ML Research Press, 2024.
S. Bombari and M. Mondelli, “How spurious features are memorized: Precise analysis for random and NTK features,” in 41st International Conference on Machine Learning, Vienna, Austria, 2024, vol. 235, pp. 4267–4299.
Bombari, Simone, and Marco Mondelli. “How Spurious Features Are Memorized: Precise Analysis for Random and NTK Features.” 41st International Conference on Machine Learning, vol. 235, ML Research Press, 2024, pp. 4267–99.
All files available under the following license(s):
Copyright Statement:
This Item is protected by copyright and/or related rights. [...]
Sources
arXiv 2305.12100