{"oa":1,"publication_status":"published","extern":"1","author":[{"first_name":"Lukas","last_name":"Schott","full_name":"Schott, Lukas"},{"full_name":"Kügelgen, Julius von","last_name":"Kügelgen","first_name":"Julius von"},{"first_name":"Frederik","full_name":"Träuble, Frederik","last_name":"Träuble"},{"full_name":"Gehler, Peter","last_name":"Gehler","first_name":"Peter"},{"first_name":"Chris","last_name":"Russell","full_name":"Russell, Chris"},{"full_name":"Bethge, Matthias","last_name":"Bethge","first_name":"Matthias"},{"full_name":"Schölkopf, Bernhard","last_name":"Schölkopf","first_name":"Bernhard"},{"orcid":"0000-0002-4850-0683","first_name":"Francesco","id":"26cfd52f-2483-11ee-8040-88983bcc06d4","full_name":"Locatello, Francesco","last_name":"Locatello"},{"first_name":"Wieland","last_name":"Brendel","full_name":"Brendel, Wieland"}],"main_file_link":[{"open_access":"1","url":"https://doi.org/10.48550/arXiv.2107.08221"}],"department":[{"_id":"FrLo"}],"citation":{"short":"L. Schott, J. von Kügelgen, F. Träuble, P. Gehler, C. Russell, M. Bethge, B. Schölkopf, F. Locatello, W. Brendel, in:, 10th International Conference on Learning Representations, 2022.","ista":"Schott L, Kügelgen J von, Träuble F, Gehler P, Russell C, Bethge M, Schölkopf B, Locatello F, Brendel W. 2022. Visual representation learning does not generalize strongly within the same domain. 10th International Conference on Learning Representations. ICLR: International Conference on Learning Representations.","ama":"Schott L, Kügelgen J von, Träuble F, et al. Visual representation learning does not generalize strongly within the same domain. In: 10th International Conference on Learning Representations. ; 2022.","ieee":"L. Schott et al., “Visual representation learning does not generalize strongly within the same domain,” in 10th International Conference on Learning Representations, Virtual, 2022.","mla":"Schott, Lukas, et al. “Visual Representation Learning Does Not Generalize Strongly within the Same Domain.” 10th International Conference on Learning Representations, 2022.","apa":"Schott, L., Kügelgen, J. von, Träuble, F., Gehler, P., Russell, C., Bethge, M., … Brendel, W. (2022). Visual representation learning does not generalize strongly within the same domain. In 10th International Conference on Learning Representations. Virtual.","chicago":"Schott, Lukas, Julius von Kügelgen, Frederik Träuble, Peter Gehler, Chris Russell, Matthias Bethge, Bernhard Schölkopf, Francesco Locatello, and Wieland Brendel. “Visual Representation Learning Does Not Generalize Strongly within the Same Domain.” In 10th International Conference on Learning Representations, 2022."},"title":"Visual representation learning does not generalize strongly within the same domain","article_processing_charge":"No","year":"2022","date_created":"2023-08-22T14:00:50Z","_id":"14172","abstract":[{"text":"An important component for generalization in machine learning is to uncover underlying latent factors of variation as well as the mechanism through which each factor acts in the world. In this paper, we test whether 17 unsupervised, weakly supervised, and fully supervised representation learning approaches correctly infer the generative factors of variation in simple datasets (dSprites, Shapes3D, MPI3D) from controlled environments, and on our contributed CelebGlow dataset. 
In contrast to prior robustness work that introduces novel factors of variation during test time, such as blur or other (un)structured noise, we here recompose, interpolate, or extrapolate only existing factors of variation from the training data set (e.g., small and medium-sized objects during training and large objects during testing). Models that learn the correct mechanism should be able to generalize to this benchmark. In total, we train and test 2000+ models and observe that all of them struggle to learn the underlying mechanism regardless of supervision signal and architectural bias. Moreover, the generalization capabilities of all tested models drop significantly as we move from artificial datasets towards more realistic real-world datasets. Despite their inability to identify the correct mechanism, the models are quite modular as their ability to infer other in-distribution factors remains fairly stable, provided only a single factor is out-of-distribution. These results point to an important yet understudied problem of learning mechanistic models of observations that can facilitate generalization.","lang":"eng"}],"date_published":"2022-04-25T00:00:00Z","language":[{"iso":"eng"}],"date_updated":"2023-09-11T09:40:52Z","publication":"10th International Conference on Learning Representations","type":"conference","month":"04","status":"public","quality_controlled":"1","conference":{"start_date":"2022-04-25","end_date":"2022-04-29","location":"Virtual","name":"ICLR: International Conference on Learning Representations"},"day":"25","user_id":"2DF688A6-F248-11E8-B48F-1D18A9856A87","oa_version":"Preprint","external_id":{"arxiv":["2107.08221"]}}