---
res:
  bibo_abstract:
    - Deep neural networks (DNNs) have become increasingly important due to their excellent empirical performance on a wide range of problems. However, regularization is generally achieved by indirect means, largely due to the complex set of functions defined by a network and the difficulty in measuring function complexity. There exists no method in the literature for additive regularization based on a norm of the function, as is classically considered in statistical learning theory. In this work, we study the tractability of function norms for deep neural networks with ReLU activations. We provide, to the best of our knowledge, the first proof in the literature of the NP-hardness of computing function norms of DNNs of 3 or more layers. We also highlight a fundamental difference between shallow and deep networks. In light of these results, we propose a new regularization strategy based on approximate function norms, and show its efficiency on a segmentation task with a DNN.@eng
  bibo_authorlist:
    - foaf_Person:
        foaf_givenName: Amal
        foaf_name: Rannen-Triki, Amal
        foaf_surname: Rannen-Triki
    - foaf_Person:
        foaf_givenName: Maxim
        foaf_name: Berman, Maxim
        foaf_surname: Berman
    - foaf_Person:
        foaf_givenName: Vladimir
        foaf_name: Kolmogorov, Vladimir
        foaf_surname: Kolmogorov
        foaf_workInfoHomepage: http://www.librecat.org/personId=3D50B0BA-F248-11E8-B48F-1D18A9856A87
    - foaf_Person:
        foaf_givenName: Matthew B.
        foaf_name: Blaschko, Matthew B.
        foaf_surname: Blaschko
  bibo_doi: 10.1109/ICCVW.2019.00097
  dct_date: 2019^xs_gYear
  dct_identifier:
    - UT:000554591600090
  dct_isPartOf:
    - http://id.crossref.org/issn/9781728150239
  dct_language: eng
  dct_publisher: IEEE@
  dct_title: Function norms for neural networks@
...