{"publication_status":"published","editor":[{"first_name":"Marina","last_name":"Meila","full_name":"Meila, Marina"},{"first_name":"Tong","full_name":"Zhang, Tong","last_name":"Zhang"}],"oa_version":"Published Version","oa":1,"acknowledgement":"The authors would like to thank the anonymous reviewers for their helpful comments. MM was partially supported\r\nby the 2019 Lopez-Loreta Prize. QN and GM acknowledge support from the European Research Council (ERC) under\r\nthe European Union’s Horizon 2020 research and innovation programme (grant agreement no 757983).","intvolume":" 139","date_updated":"2022-01-04T09:59:21Z","page":"8119-8129","volume":139,"date_created":"2022-01-03T10:57:49Z","department":[{"_id":"MaMo"}],"_id":"10595","title":"Tight bounds on the smallest eigenvalue of the neural tangent kernel for deep ReLU networks","author":[{"last_name":"Nguyen","full_name":"Nguyen, Quynh","first_name":"Quynh"},{"orcid":"0000-0002-3242-7020","id":"27EB676C-8706-11E9-9510-7717E6697425","first_name":"Marco","last_name":"Mondelli","full_name":"Mondelli, Marco"},{"first_name":"Guido F","last_name":"Montufar","full_name":"Montufar, Guido F"}],"year":"2021","type":"conference","conference":{"start_date":"2021-07-18","end_date":"2021-07-24","name":"ICML: International Conference on Machine Learning","location":"Virtual"},"citation":{"apa":"Nguyen, Q., Mondelli, M., & Montufar, G. F. (2021). Tight bounds on the smallest eigenvalue of the neural tangent kernel for deep ReLU networks. In M. Meila & T. Zhang (Eds.), Proceedings of the 38th International Conference on Machine Learning (Vol. 139, pp. 8119–8129). Virtual: ML Research Press.","short":"Q. Nguyen, M. Mondelli, G.F. Montufar, in:, M. Meila, T. Zhang (Eds.), Proceedings of the 38th International Conference on Machine Learning, ML Research Press, 2021, pp. 8119–8129.","ista":"Nguyen Q, Mondelli M, Montufar GF. 2021. Tight bounds on the smallest eigenvalue of the neural tangent kernel for deep ReLU networks. Proceedings of the 38th International Conference on Machine Learning. ICML: International Conference on Machine Learning, Proceedings of Machine Learning Research, vol. 139, 8119–8129.","mla":"Nguyen, Quynh, et al. “Tight Bounds on the Smallest Eigenvalue of the Neural Tangent Kernel for Deep ReLU Networks.” Proceedings of the 38th International Conference on Machine Learning, edited by Marina Meila and Tong Zhang, vol. 139, ML Research Press, 2021, pp. 8119–29.","chicago":"Nguyen, Quynh, Marco Mondelli, and Guido F Montufar. “Tight Bounds on the Smallest Eigenvalue of the Neural Tangent Kernel for Deep ReLU Networks.” In Proceedings of the 38th International Conference on Machine Learning, edited by Marina Meila and Tong Zhang, 139:8119–29. ML Research Press, 2021.","ama":"Nguyen Q, Mondelli M, Montufar GF. Tight bounds on the smallest eigenvalue of the neural tangent kernel for deep ReLU networks. In: Meila M, Zhang T, eds. Proceedings of the 38th International Conference on Machine Learning. Vol 139. ML Research Press; 2021:8119-8129.","ieee":"Q. Nguyen, M. Mondelli, and G. F. Montufar, “Tight bounds on the smallest eigenvalue of the neural tangent kernel for deep ReLU networks,” in Proceedings of the 38th International Conference on Machine Learning, Virtual, 2021, vol. 139, pp. 
8119–8129."},"user_id":"8b945eb4-e2f2-11eb-945a-df72226e66a9","article_processing_charge":"No","language":[{"iso":"eng"}],"publisher":"ML Research Press","abstract":[{"text":"A recent line of work has analyzed the theoretical properties of deep neural networks via the Neural Tangent Kernel (NTK). In particular, the smallest eigenvalue of the NTK has been related to the memorization capacity, the global convergence of gradient descent algorithms and the generalization of deep nets. However, existing results either provide bounds in the two-layer setting or assume that the spectrum of the NTK matrices is bounded away from 0 for multi-layer networks. In this paper, we provide tight bounds on the smallest eigenvalue of NTK matrices for deep ReLU nets, both in the limiting case of infinite widths and for finite widths. In the finite-width setting, the network architectures we consider are fairly general: we require the existence of a wide layer with roughly order of $N$ neurons, $N$ being the number of data samples; and the scaling of the remaining layer widths is arbitrary (up to logarithmic factors). To obtain our results, we analyze various quantities of independent interest: we give lower bounds on the smallest singular value of hidden feature matrices, and upper bounds on the Lipschitz constant of input-output feature maps.","lang":"eng"}],"alternative_title":["Proceedings of Machine Learning Research"],"publication":"Proceedings of the 38th International Conference on Machine Learning","external_id":{"arxiv":["2012.11654"]},"quality_controlled":"1","main_file_link":[{"url":"http://proceedings.mlr.press/v139/nguyen21g.html","open_access":"1"}],"project":[{"name":"Prix Lopez-Loretta 2019 - Marco Mondelli","_id":"059876FA-7A3F-11EA-A408-12923DDC885E"}],"date_published":"2021-01-01T00:00:00Z","status":"public"}