Sharp asymptotics on the compression of two-layer neural networks

Amani MH, Bombari S, Mondelli M, Pukdee R, Rini S. 2022. Sharp asymptotics on the compression of two-layer neural networks. IEEE Information Theory Workshop, 588–593.


Conference Paper | Published | English

Scopus indexed
Author
Amani, Mohammad Hossein; Bombari, Simone (ISTA); Mondelli, Marco (ISTA); Pukdee, Rattana; Rini, Stefano
Abstract
In this paper, we study the compression of a target two-layer neural network with N nodes into a compressed network with M < N nodes. More precisely, we consider the setting in which the weights of the target network are i.i.d. sub-Gaussian, and we minimize the population L_2 loss between the outputs of the target and of the compressed network, under the assumption of Gaussian inputs. By using tools from high-dimensional probability, we show that this non-convex problem can be simplified when the target network is sufficiently over-parameterized, and provide the error rate of this approximation as a function of the input dimension and N. In this mean-field limit, the simplified objective, as well as the optimal weights of the compressed network, does not depend on the realization of the target network, but only on expected scaling factors. Furthermore, for networks with ReLU activation, we conjecture that the optimum of the simplified optimization problem is achieved by taking weights on the Equiangular Tight Frame (ETF), while the scaling of the weights and the orientation of the ETF depend on the parameters of the target network. Numerical evidence is provided to support this conjecture.
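
As a rough illustration of the objective described in the abstract, the following Python sketch estimates the population L_2 loss between a target N-neuron ReLU network and a compressed M-neuron network by Monte Carlo sampling of Gaussian inputs. The 1/width output averaging, the 1/sqrt(d) weight scaling, and the helper name two_layer_relu are illustrative assumptions, not the paper's exact parameterization; see the full text for the precise setup.

    import numpy as np

    rng = np.random.default_rng(0)
    d, N, M, n_samples = 50, 1000, 4, 10_000  # input dim, target/compressed widths, MC samples

    def two_layer_relu(W, X):
        # Two-layer ReLU network: average the hidden-unit outputs over the width.
        # Unit second-layer weights with 1/width scaling is an assumption here.
        return np.maximum(W @ X.T, 0.0).mean(axis=0)

    # Target network with i.i.d. Gaussian weights (one sub-Gaussian instance).
    W_target = rng.standard_normal((N, d)) / np.sqrt(d)
    # Compressed network: here a random initialization; the paper optimizes these
    # M rows, and conjectures the ReLU optimum lies on a (scaled, rotated) ETF.
    W_compressed = rng.standard_normal((M, d)) / np.sqrt(d)

    # Monte Carlo estimate of E_x[(f_target(x) - f_compressed(x))^2], x ~ N(0, I_d).
    X = rng.standard_normal((n_samples, d))
    loss = np.mean((two_layer_relu(W_target, X) - two_layer_relu(W_compressed, X)) ** 2)
    print(f"estimated population L2 loss: {loss:.6f}")

In practice, such an estimate would serve as the training objective for the compressed weights (e.g., via gradient descent on W_compressed); per the abstract, the population objective simplifies when the target network is sufficiently over-parameterized.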
Publishing Year
2022
Date Published
2022-11-16
Journal Title
IEEE Information Theory Workshop
Page
588–593
Conference
ITW: Information Theory Workshop
Conference Location
Mumbai, India
Conference Date
2022-11-01 – 2022-11-09
IST-REx-ID

Cite this

Amani MH, Bombari S, Mondelli M, Pukdee R, Rini S. Sharp asymptotics on the compression of two-layer neural networks. IEEE Information Theory Workshop. 2022:588-593. doi:10.1109/ITW54588.2022.9965870
Amani, M. H., Bombari, S., Mondelli, M., Pukdee, R., & Rini, S. (2022). Sharp asymptotics on the compression of two-layer neural networks. IEEE Information Theory Workshop. Mumbai, India: IEEE. https://doi.org/10.1109/ITW54588.2022.9965870
Amani, Mohammad Hossein, Simone Bombari, Marco Mondelli, Rattana Pukdee, and Stefano Rini. “Sharp Asymptotics on the Compression of Two-Layer Neural Networks.” IEEE Information Theory Workshop. IEEE, 2022. https://doi.org/10.1109/ITW54588.2022.9965870.
M. H. Amani, S. Bombari, M. Mondelli, R. Pukdee, and S. Rini, “Sharp asymptotics on the compression of two-layer neural networks,” IEEE Information Theory Workshop. IEEE, pp. 588–593, 2022.
Amani MH, Bombari S, Mondelli M, Pukdee R, Rini S. 2022. Sharp asymptotics on the compression of two-layer neural networks. IEEE Information Theory Workshop, 588–593.
Amani, Mohammad Hossein, et al. “Sharp Asymptotics on the Compression of Two-Layer Neural Networks.” IEEE Information Theory Workshop, IEEE, 2022, pp. 588–93, doi:10.1109/ITW54588.2022.9965870.
All files available under the following license(s):
Copyright Statement:
This Item is protected by copyright and/or related rights. [...]

Link(s) to Main File(s)
Access Level
Open Access

Sources

arXiv 2205.08199
