---
res:
  bibo_abstract:
  - 'A recent line of work has analyzed the theoretical properties of deep neural
    networks via the Neural Tangent Kernel (NTK). In particular, the smallest eigenvalue
    of the NTK has been related to the memorization capacity, the global convergence
    of gradient descent algorithms, and the generalization of deep nets. However,
    existing results either provide bounds in the two-layer setting or assume that
    the spectrum of the NTK matrices is bounded away from 0 for multi-layer networks.
    In this paper, we provide tight bounds on the smallest eigenvalue of NTK matrices
    for deep ReLU nets, both in the limiting case of infinite widths and for finite
    widths. In the finite-width setting, the network architectures we consider are
    fairly general: we require the existence of a wide layer with on the order of
    N neurons, N being the number of data samples, while the scaling of the remaining
    layer widths is arbitrary (up to logarithmic factors). To obtain our results,
    we analyze various quantities of independent interest: we give lower bounds on
    the smallest singular value of hidden feature matrices, and upper bounds on the
    Lipschitz constant of input-output feature maps.'
  bibo_authorlist:
  - foaf_Person:
      foaf_givenName: Quynh
      foaf_name: Nguyen, Quynh
      foaf_surname: Nguyen
  - foaf_Person:
      foaf_givenName: Marco
      foaf_name: Mondelli, Marco
      foaf_surname: Mondelli
      foaf_workInfoHomepage: http://www.librecat.org/personId=27EB676C-8706-11E9-9510-7717E6697425
    orcid: 0000-0002-3242-7020
  - foaf_Person:
      foaf_givenName: Guido
      foaf_name: Montufar, Guido
      foaf_surname: Montufar
  bibo_volume: 139
  dct_date: 2021
  dct_isPartOf:
  - http://id.crossref.org/issn/2640-3498
  - http://id.crossref.org/issn/9781713845065
  dct_language: eng
  dct_publisher: ML Research Press
  dct_title: Tight bounds on the smallest eigenvalue of the neural tangent kernel
    for deep ReLU networks
...
