Privacy for free in the overparameterized regime
Bombari S, Mondelli M. 2025. Privacy for free in the overparameterized regime. Proceedings of the National Academy of Sciences. 122(15), e2423072122.
Journal Article
| Published
| English
Scopus indexed
Corresponding author has ISTA affiliation
Abstract
Differentially private gradient descent (DP-GD) is a popular algorithm to train deep learning models with provable guarantees on the privacy of the training data. In the last decade, the problem of understanding its performance cost with respect to standard GD has received remarkable attention from the research community, which formally derived upper bounds on the excess population risk R_P in different learning settings. However, existing bounds typically degrade with overparameterization, i.e., as the number of parameters p gets larger than the number of training samples n, a regime which is ubiquitous in current deep-learning practice. As a result, the lack of theoretical insights leaves practitioners without clear guidance, leading some to reduce the effective number of trainable parameters to improve performance, while others use larger models to achieve better results through scale. In this work, we show that in the popular random features model with quadratic loss, for any sufficiently large p, privacy can be obtained for free, i.e., |R_P| = o(1), not only when the privacy parameter ε has constant order, but also in the strongly private setting ε = o(1). This challenges the common wisdom that overparameterization inherently hinders performance in private learning.
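The DP-GD algorithm discussed in the abstract can be illustrated with a minimal sketch: each per-sample gradient of a quadratic loss is clipped to a fixed norm, the clipped gradients are averaged, and calibrated Gaussian noise is added before the update. All sizes and hyperparameter values below are hypothetical and chosen only for illustration; they are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-regression data: n samples, d parameters (illustrative sizes).
n, d = 200, 50
X = rng.standard_normal((n, d)) / np.sqrt(d)
y = rng.standard_normal(n)

# DP-GD hyperparameters (illustrative values, not from the paper).
eta, T = 0.5, 100   # step size, number of iterations
C = 1.0             # per-sample gradient clipping norm
sigma = 1.0         # Gaussian noise multiplier

theta = np.zeros(d)
for _ in range(T):
    # Per-sample gradients of the quadratic loss (x_i . theta - y_i)^2 / 2.
    residuals = X @ theta - y
    grads = residuals[:, None] * X  # shape (n, d)
    # Clip each per-sample gradient to Euclidean norm at most C.
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads * np.minimum(1.0, C / np.maximum(norms, 1e-12))
    # Average, add Gaussian noise scaled to the clipping norm, and step.
    noisy_grad = grads.mean(axis=0) + rng.normal(0.0, sigma * C / n, size=d)
    theta -= eta * noisy_grad

print(theta.shape)
```

The clipping bounds each sample's influence on the update, and the Gaussian noise scaled by sigma * C / n is what yields the (ε, δ)-differential-privacy guarantee after composition over the T iterations.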
Publishing Year
2025
Date Published
2025-04-15
Journal Title
Proceedings of the National Academy of Sciences
Publisher
National Academy of Sciences
Acknowledgement
This research was funded in whole, or in part, by the Austrian Science Fund (FWF) Grant number COE 12. For the purpose of open access, the author has applied a CC BY public copyright license to any Author Accepted Manuscript version arising from this submission. The authors were also supported by the 2019 Lopez-Loreta prize, and Simone Bombari was supported by a Google PhD fellowship. We thank Diyuan Wu, Edwige Cyffers, Francesco Pedrotti, Inbar Seroussi, Nikita P. Kalinin, Pietro Pelliconi, Roodabeh Safavi, Yizhe Zhu, and Zhichao Wang for helpful discussions.
Volume
122
Issue
15
Article Number
e2423072122
Cite this
Bombari S, Mondelli M. Privacy for free in the overparameterized regime. Proceedings of the National Academy of Sciences. 2025;122(15). doi:10.1073/pnas.2423072122
Bombari, S., & Mondelli, M. (2025). Privacy for free in the overparameterized regime. Proceedings of the National Academy of Sciences. National Academy of Sciences. https://doi.org/10.1073/pnas.2423072122
Bombari, Simone, and Marco Mondelli. “Privacy for Free in the Overparameterized Regime.” Proceedings of the National Academy of Sciences. National Academy of Sciences, 2025. https://doi.org/10.1073/pnas.2423072122.
S. Bombari and M. Mondelli, “Privacy for free in the overparameterized regime,” Proceedings of the National Academy of Sciences, vol. 122, no. 15. National Academy of Sciences, 2025.
Bombari, Simone, and Marco Mondelli. “Privacy for Free in the Overparameterized Regime.” Proceedings of the National Academy of Sciences, vol. 122, no. 15, e2423072122, National Academy of Sciences, 2025, doi:10.1073/pnas.2423072122.
All files available under the following license(s):
Creative Commons Attribution 4.0 International Public License (CC-BY 4.0):
Main File(s)
File Name
2025_PNAS_Bombari.pdf
2.33 MB
Date Uploaded
2025-05-05
MD5 Checksum
1ac6f78e368d35a0cafb4d2d9bd63443
Sources
PMID: 40215275
PubMed | Europe PMC
arXiv 2410.14787