conference paper
Tight bounds on the smallest eigenvalue of the neural tangent kernel for deep ReLU networks
published
yes
Quynh
Nguyen
author
Marco
Mondelli
author
27EB676C-8706-11E9-9510-7717E6697425
0000-0002-3242-7020
author 27EB676C-8706-11E9-9510-7717E66974250000-0002-3242-7020
Guido
Montufar
author
MaMo
department
International Conference on Machine Learning
Prix Lopez-Loretta 2019 - Marco Mondelli
project
A recent line of work has analyzed the theoretical properties of deep neural networks via the Neural Tangent Kernel (NTK). In particular, the smallest eigenvalue of the NTK has been related to the memorization capacity, the global convergence of gradient descent algorithms, and the generalization of deep nets. However, existing results either provide bounds in the two-layer setting or assume that the spectrum of the NTK matrices is bounded away from 0 for multi-layer networks. In this paper, we provide tight bounds on the smallest eigenvalue of NTK matrices for deep ReLU nets, both in the limiting case of infinite widths and for finite widths. In the finite-width setting, the network architectures we consider are fairly general: we require the existence of a wide layer with roughly order N neurons, N being the number of data samples, while the scaling of the remaining layer widths is arbitrary (up to logarithmic factors). To obtain our results, we analyze various quantities of independent interest: we give lower bounds on the smallest singular value of hidden feature matrices, and upper bounds on the Lipschitz constant of input-output feature maps.
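The central object of the abstract, the empirical NTK Gram matrix and its smallest eigenvalue, can be illustrated numerically. The sketch below (not the paper's code; a minimal two-layer ReLU example with numpy, and all names and sizes are illustrative) forms the N x N matrix K = J J^T from per-sample parameter gradients and reads off its smallest eigenvalue:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, m = 20, 5, 512  # samples, input dimension, hidden width (illustrative sizes)

X = rng.standard_normal((N, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)  # unit-norm data points

W = rng.standard_normal((m, d))  # hidden-layer weights
a = rng.standard_normal(m)       # output-layer weights

def grad(x):
    """Gradient of f(x) = a^T relu(W x) / sqrt(m) w.r.t. all parameters (W, a)."""
    pre = W @ x
    act = np.maximum(pre, 0.0)
    mask = (pre > 0).astype(float)        # ReLU derivative
    gW = np.outer(a * mask, x) / np.sqrt(m)
    ga = act / np.sqrt(m)
    return np.concatenate([gW.ravel(), ga])

J = np.stack([grad(x) for x in X])        # N x (#params) Jacobian
K = J @ J.T                               # empirical NTK Gram matrix, N x N
lam_min = np.linalg.eigvalsh(K)[0]        # smallest eigenvalue (ascending order)
print(lam_min)
```

Since the width m is much larger than N here, K is generically full rank and lam_min is strictly positive, consistent with the wide-layer regime the abstract describes.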
https://research-explorer.ista.ac.at/download/13146/13155/2021_PMLR_Nguyen.pdf
application/pdf
no
https://creativecommons.org/licenses/by/4.0/
ML Research Press
2021
Virtual
eng
Proceedings of the 38th International Conference on Machine Learning
2640-3498
9781713845065
2012.11654
139
8119-8129
Q. Nguyen, M. Mondelli, and G. Montufar, “Tight bounds on the smallest eigenvalue of the neural tangent kernel for deep ReLU networks,” in <i>Proceedings of the 38th International Conference on Machine Learning</i>, Virtual, 2021, vol. 139, pp. 8119–8129.
Nguyen Q, Mondelli M, Montufar G. Tight bounds on the smallest eigenvalue of the neural tangent kernel for deep ReLU networks. In: <i>Proceedings of the 38th International Conference on Machine Learning</i>. Vol 139. ML Research Press; 2021:8119-8129.
Nguyen, Quynh, et al. “Tight Bounds on the Smallest Eigenvalue of the Neural Tangent Kernel for Deep ReLU Networks.” <i>Proceedings of the 38th International Conference on Machine Learning</i>, vol. 139, ML Research Press, 2021, pp. 8119–29.
Nguyen, Q., Mondelli, M., &amp; Montufar, G. (2021). Tight bounds on the smallest eigenvalue of the neural tangent kernel for deep ReLU networks. In <i>Proceedings of the 38th International Conference on Machine Learning</i> (Vol. 139, pp. 8119–8129). Virtual: ML Research Press.
Nguyen Q, Mondelli M, Montufar G. 2021. Tight bounds on the smallest eigenvalue of the neural tangent kernel for deep ReLU networks. Proceedings of the 38th International Conference on Machine Learning. International Conference on Machine Learning vol. 139, 8119–8129.
Q. Nguyen, M. Mondelli, G. Montufar, in:, Proceedings of the 38th International Conference on Machine Learning, ML Research Press, 2021, pp. 8119–8129.
Nguyen, Quynh, Marco Mondelli, and Guido Montufar. “Tight Bounds on the Smallest Eigenvalue of the Neural Tangent Kernel for Deep ReLU Networks.” In <i>Proceedings of the 38th International Conference on Machine Learning</i>, 139:8119–29. ML Research Press, 2021.
13146
2023-06-18T22:00:48Z
2023-06-19T10:52:51Z