Self-supervised learning with data augmentations provably isolates content from style

Kügelgen J von, Sharma Y, Gresele L, Brendel W, Schölkopf B, Besserve M, Locatello F. 2021. Self-supervised learning with data augmentations provably isolates content from style. Advances in Neural Information Processing Systems. NeurIPS: Neural Information Processing Systems vol. 34, 16451–16467.

Conference Paper | Published | English
Author
Kügelgen, Julius von; Sharma, Yash; Gresele, Luigi; Brendel, Wieland; Schölkopf, Bernhard; Besserve, Michel; Locatello, Francesco
Abstract
Self-supervised representation learning has shown remarkable success in a number of domains. A common practice is to perform data augmentation via hand-crafted transformations intended to leave the semantics of the data invariant. We seek to understand the empirical success of this approach from a theoretical perspective. We formulate the augmentation process as a latent variable model by postulating a partition of the latent representation into a content component, which is assumed invariant to augmentation, and a style component, which is allowed to change. Unlike prior work on disentanglement and independent component analysis, we allow for both nontrivial statistical and causal dependencies in the latent space. We study the identifiability of the latent representation based on pairs of views of the observations and prove sufficient conditions that allow us to identify the invariant content partition up to an invertible mapping in both generative and discriminative settings. In numerical simulations with dependent latent variables, we find results consistent with our theory. Lastly, we introduce Causal3DIdent, a dataset of high-dimensional, visually complex images with rich causal dependencies, which we use to study the effect of data augmentations performed in practice.
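The latent variable model described above can be illustrated with a minimal sketch: two views share the same content partition, while augmentation resamples only the style partition. The dimensions, the fixed linear mixing matrix, and the noise scale below are illustrative assumptions, not details from the paper (which allows general nonlinear invertible mixing and causal dependencies between content and style).

```python
import numpy as np

rng = np.random.default_rng(0)
DIM_C, DIM_S = 2, 2  # sizes of content and style partitions (arbitrary choice)
# Fixed invertible "mixing" from latents to observations; a random linear map
# stands in for the paper's general invertible generative function f.
A = rng.normal(size=(DIM_C + DIM_S, DIM_C + DIM_S))

def sample_view_pair():
    """Generate a pair of views: content c is shared, style s is perturbed."""
    c = rng.normal(size=DIM_C)                        # content: invariant to augmentation
    s = rng.normal(size=DIM_S)                        # style of the first view
    s_tilde = s + rng.normal(scale=0.5, size=DIM_S)   # style changed by "augmentation"
    x = A @ np.concatenate([c, s])                    # observed view 1
    x_tilde = A @ np.concatenate([c, s_tilde])        # observed view 2
    return x, x_tilde

x, x_tilde = sample_view_pair()
# Inverting the mixing recovers latents in which only the style block differs.
z, z_tilde = np.linalg.solve(A, x), np.linalg.solve(A, x_tilde)
assert np.allclose(z[:DIM_C], z_tilde[:DIM_C])        # content agrees across views
assert not np.allclose(z[DIM_C:], z_tilde[DIM_C:])    # style differs across views
```

The paper's identifiability result concerns the harder setting where the mixing is unknown and nonlinear: an encoder trained on such view pairs can provably recover the content block up to an invertible mapping, under the stated sufficient conditions.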
Publishing Year
2021
Date Published
2021-06-08
Proceedings Title
Advances in Neural Information Processing Systems
Volume
34
Page
16451-16467
Conference
NeurIPS: Neural Information Processing Systems
Conference Location
Virtual
Conference Date
2021-12-07 – 2021-12-10
IST-REx-ID

Cite this

Kügelgen J von, Sharma Y, Gresele L, et al. Self-supervised learning with data augmentations provably isolates content from style. In: Advances in Neural Information Processing Systems. Vol 34; 2021:16451-16467.
Kügelgen, J. von, Sharma, Y., Gresele, L., Brendel, W., Schölkopf, B., Besserve, M., & Locatello, F. (2021). Self-supervised learning with data augmentations provably isolates content from style. In Advances in Neural Information Processing Systems (Vol. 34, pp. 16451–16467). Virtual.
Kügelgen, Julius von, Yash Sharma, Luigi Gresele, Wieland Brendel, Bernhard Schölkopf, Michel Besserve, and Francesco Locatello. “Self-Supervised Learning with Data Augmentations Provably Isolates Content from Style.” In Advances in Neural Information Processing Systems, 34:16451–67, 2021.
J. von Kügelgen et al., “Self-supervised learning with data augmentations provably isolates content from style,” in Advances in Neural Information Processing Systems, Virtual, 2021, vol. 34, pp. 16451–16467.
Kügelgen J von, Sharma Y, Gresele L, Brendel W, Schölkopf B, Besserve M, Locatello F. 2021. Self-supervised learning with data augmentations provably isolates content from style. Advances in Neural Information Processing Systems. NeurIPS: Neural Information Processing Systems vol. 34, 16451–16467.
Kügelgen, Julius von, et al. “Self-Supervised Learning with Data Augmentations Provably Isolates Content from Style.” Advances in Neural Information Processing Systems, vol. 34, 2021, pp. 16451–67.
All files available under the following license(s):
Copyright Statement:
This Item is protected by copyright and/or related rights. [...]

Link(s) to Main File(s)
Access Level
Open Access

Sources

arXiv 2106.04619