{"publisher":"ML Research Press","external_id":{"arxiv":["2012.11654"]},"date_created":"2023-06-18T22:00:48Z","month":"07","publication_status":"published","publication":"Proceedings of the 38th International Conference on Machine Learning","project":[{"name":"Prix Lopez-Loretta 2019 - Marco Mondelli","_id":"059876FA-7A3F-11EA-A408-12923DDC885E"}],"department":[{"_id":"MaMo"}],"oa_version":"Published Version","type":"conference","date_updated":"2023-06-19T10:52:51Z","user_id":"2DF688A6-F248-11E8-B48F-1D18A9856A87","file_date_updated":"2023-06-19T10:49:12Z","acknowledgement":"The authors would like to thank the anonymous reviewers for their helpful comments. MM was partially supported by the 2019 Lopez-Loreta Prize. QN and GM acknowledge support from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement no 757983).","file":[{"file_id":"13155","relation":"main_file","content_type":"application/pdf","file_name":"2021_PMLR_Nguyen.pdf","checksum":"19489cf5e16a0596b1f92e317d97c9b0","date_created":"2023-06-19T10:49:12Z","file_size":591332,"success":1,"date_updated":"2023-06-19T10:49:12Z","creator":"dernst","access_level":"open_access"}],"_id":"13146","status":"public","conference":{"end_date":"2021-07-24","location":"Virtual","name":"International Conference on Machine Learning","start_date":"2021-07-18"},"abstract":[{"text":"A recent line of work has analyzed the theoretical properties of deep neural networks via the Neural Tangent Kernel (NTK). In particular, the smallest eigenvalue of the NTK has been related to the memorization capacity, the global convergence of gradient descent algorithms and the generalization of deep nets. However, existing results either provide bounds in the two-layer setting or assume that the spectrum of the NTK matrices is bounded away from 0 for multi-layer networks. In this paper, we provide tight bounds on the smallest eigenvalue of NTK matrices for deep ReLU nets, both in the limiting case of infinite widths and for finite widths. In the finite-width setting, the network architectures we consider are fairly general: we require the existence of a wide layer with roughly order of N neurons, N being the number of data samples; and the scaling of the remaining layer widths is arbitrary (up to logarithmic factors). To obtain our results, we analyze various quantities of independent interest: we give lower bounds on the smallest singular value of hidden feature matrices, and upper bounds on the Lipschitz constant of input-output feature maps.","lang":"eng"}],"oa":1,"year":"2021","page":"8119-8129","author":[{"full_name":"Nguyen, Quynh","last_name":"Nguyen","first_name":"Quynh"},{"full_name":"Mondelli, Marco","id":"27EB676C-8706-11E9-9510-7717E6697425","first_name":"Marco","last_name":"Mondelli","orcid":"0000-0002-3242-7020"},{"full_name":"Montufar, Guido","first_name":"Guido","last_name":"Montufar"}],"day":"01","language":[{"iso":"eng"}],"has_accepted_license":"1","tmp":{"name":"Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)","legal_code_url":"https://creativecommons.org/licenses/by/4.0/legalcode","short":"CC BY (4.0)","image":"/images/cc_by.png"},"intvolume":" 139","title":"Tight bounds on the smallest Eigenvalue of the neural tangent kernel for deep ReLU networks","citation":{"ista":"Nguyen Q, Mondelli M, Montufar G. 2021. Tight bounds on the smallest Eigenvalue of the neural tangent kernel for deep ReLU networks. 
Proceedings of the 38th International Conference on Machine Learning. International Conference on Machine Learning vol. 139, 8119–8129.","mla":"Nguyen, Quynh, et al. “Tight Bounds on the Smallest Eigenvalue of the Neural Tangent Kernel for Deep ReLU Networks.” Proceedings of the 38th International Conference on Machine Learning, vol. 139, ML Research Press, 2021, pp. 8119–29.","apa":"Nguyen, Q., Mondelli, M., & Montufar, G. (2021). Tight bounds on the smallest Eigenvalue of the neural tangent kernel for deep ReLU networks. In Proceedings of the 38th International Conference on Machine Learning (Vol. 139, pp. 8119–8129). Virtual: ML Research Press.","ama":"Nguyen Q, Mondelli M, Montufar G. Tight bounds on the smallest Eigenvalue of the neural tangent kernel for deep ReLU networks. In: Proceedings of the 38th International Conference on Machine Learning. Vol 139. ML Research Press; 2021:8119-8129.","chicago":"Nguyen, Quynh, Marco Mondelli, and Guido Montufar. “Tight Bounds on the Smallest Eigenvalue of the Neural Tangent Kernel for Deep ReLU Networks.” In Proceedings of the 38th International Conference on Machine Learning, 139:8119–29. ML Research Press, 2021.","short":"Q. Nguyen, M. Mondelli, G. Montufar, in:, Proceedings of the 38th International Conference on Machine Learning, ML Research Press, 2021, pp. 8119–8129.","ieee":"Q. Nguyen, M. Mondelli, and G. Montufar, “Tight bounds on the smallest Eigenvalue of the neural tangent kernel for deep ReLU networks,” in Proceedings of the 38th International Conference on Machine Learning, Virtual, 2021, vol. 139, pp. 8119–8129."},"article_processing_charge":"No","publication_identifier":{"eissn":["2640-3498"],"isbn":["9781713845065"]},"ddc":["000"],"scopus_import":"1","date_published":"2021-07-01T00:00:00Z","volume":139,"quality_controlled":"1"}
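
Editorial note: the abstract in this record centers on the smallest eigenvalue of the empirical NTK Gram matrix, K_ij = <grad_theta f(x_i), grad_theta f(x_j)>. As an illustration only (not code from the paper), the following JAX sketch builds a small deep ReLU network with one wide hidden layer, forms this Gram matrix, and prints its smallest eigenvalue; the widths, He-style initialization, and helper names (init_params, ntk_gram) are assumptions for the sketch, not the authors' parametrization.

import jax
import jax.numpy as jnp
from jax.flatten_util import ravel_pytree

def init_params(key, dims):
    # He-style Gaussian weights for a ReLU MLP with layer widths `dims` (illustrative choice).
    keys = jax.random.split(key, len(dims) - 1)
    return [jax.random.normal(k, (dims[l + 1], dims[l])) * jnp.sqrt(2.0 / dims[l])
            for l, k in enumerate(keys)]

def f(params, x):
    # Scalar-output deep ReLU network: ReLU hidden layers, linear output layer.
    h = x
    for W in params[:-1]:
        h = jax.nn.relu(W @ h)
    return (params[-1] @ h)[0]

def ntk_gram(params, X):
    # Empirical NTK Gram matrix: K[i, j] = <grad_theta f(x_i), grad_theta f(x_j)>.
    flat0, unravel = ravel_pytree(params)
    grad_flat = lambda x: jax.grad(lambda p: f(unravel(p), x))(flat0)
    J = jax.vmap(grad_flat)(X)        # parameter Jacobian, shape (N, num_params)
    return J @ J.T

key_x, key_w = jax.random.split(jax.random.PRNGKey(0))
d, N = 10, 50
dims = [d, 256, 256, 1]               # one wide hidden layer, echoing the finite-width setting
X = jax.random.normal(key_x, (N, d)) / jnp.sqrt(d)   # roughly unit-norm inputs
params = init_params(key_w, dims)
K = ntk_gram(params, X)
print("lambda_min(empirical NTK) =", float(jnp.linalg.eigvalsh(K)[0]))

Running the sketch at increasing hidden widths gives an empirical sense of how lambda_min(K) behaves as the wide layer grows relative to N, which is the regime the paper's finite-width bounds address.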