{"language":[{"iso":"eng"}],"article_processing_charge":"No","_id":"14173","conference":{"location":"New Orleans, LA, United States","start_date":"2022-11-28","name":"NeurIPS: Neural Information Processing Systems","end_date":"2022-12-09"},"publication_identifier":{"isbn":["9781713871088"]},"date_published":"2022-12-15T00:00:00Z","oa":1,"date_created":"2023-08-22T14:01:13Z","type":"conference","publisher":"Neural Information Processing Systems Foundation","oa_version":"Preprint","date_updated":"2023-09-06T10:34:43Z","volume":35,"intvolume":"35","page":"7181-7198","publication_status":"published","main_file_link":[{"open_access":"1","url":"https://arxiv.org/abs/2207.09239"}],"title":"Assaying out-of-distribution generalization in transfer learning","status":"public","author":[{"last_name":"Wenzel","full_name":"Wenzel, Florian","first_name":"Florian"},{"last_name":"Dittadi","first_name":"Andrea","full_name":"Dittadi, Andrea"},{"last_name":"Gehler","first_name":"Peter Vincent","full_name":"Gehler, Peter Vincent"},{"full_name":"Simon-Gabriel, Carl-Johann","first_name":"Carl-Johann","last_name":"Simon-Gabriel"},{"last_name":"Horn","first_name":"Max","full_name":"Horn, Max"},{"full_name":"Zietlow, Dominik","first_name":"Dominik","last_name":"Zietlow"},{"first_name":"David","full_name":"Kernert, David","last_name":"Kernert"},{"last_name":"Russell","first_name":"Chris","full_name":"Russell, Chris"},{"last_name":"Brox","full_name":"Brox, Thomas","first_name":"Thomas"},{"full_name":"Schiele, Bernt","first_name":"Bernt","last_name":"Schiele"},{"first_name":"Bernhard","full_name":"Schölkopf, Bernhard","last_name":"Schölkopf"},{"last_name":"Locatello","id":"26cfd52f-2483-11ee-8040-88983bcc06d4","orcid":"0000-0002-4850-0683","full_name":"Locatello, Francesco","first_name":"Francesco"}],"month":"12","year":"2022","extern":"1","alternative_title":["Advances in Neural Information Processing Systems"],"quality_controlled":"1","citation":{"ista":"Wenzel F, Dittadi A, Gehler PV, Simon-Gabriel C-J, Horn M, Zietlow D, Kernert D, Russell C, Brox T, Schiele B, Schölkopf B, Locatello F. 2022. Assaying out-of-distribution generalization in transfer learning. 36th Conference on Neural Information Processing Systems. NeurIPS: Neural Information Processing Systems, Advances in Neural Information Processing Systems, vol. 35, 7181–7198.","ieee":"F. Wenzel et al., “Assaying out-of-distribution generalization in transfer learning,” in 36th Conference on Neural Information Processing Systems, New Orleans, LA, United States, 2022, vol. 35, pp. 7181–7198.","chicago":"Wenzel, Florian, Andrea Dittadi, Peter Vincent Gehler, Carl-Johann Simon-Gabriel, Max Horn, Dominik Zietlow, David Kernert, et al. “Assaying Out-of-Distribution Generalization in Transfer Learning.” In 36th Conference on Neural Information Processing Systems, 35:7181–98. Neural Information Processing Systems Foundation, 2022.","ama":"Wenzel F, Dittadi A, Gehler PV, et al. Assaying out-of-distribution generalization in transfer learning. In: 36th Conference on Neural Information Processing Systems. Vol 35. Neural Information Processing Systems Foundation; 2022:7181-7198.","short":"F. Wenzel, A. Dittadi, P.V. Gehler, C.-J. Simon-Gabriel, M. Horn, D. Zietlow, D. Kernert, C. Russell, T. Brox, B. Schiele, B. Schölkopf, F. Locatello, in: 36th Conference on Neural Information Processing Systems, Neural Information Processing Systems Foundation, 2022, pp. 7181–7198.","apa":"Wenzel, F., Dittadi, A., Gehler, P. V., Simon-Gabriel, C.-J., Horn, M., Zietlow, D., … Locatello, F. (2022). Assaying out-of-distribution generalization in transfer learning. In 36th Conference on Neural Information Processing Systems (Vol. 35, pp. 7181–7198). New Orleans, LA, United States: Neural Information Processing Systems Foundation.","mla":"Wenzel, Florian, et al. “Assaying Out-of-Distribution Generalization in Transfer Learning.” 36th Conference on Neural Information Processing Systems, vol. 35, Neural Information Processing Systems Foundation, 2022, pp. 7181–98."},"scopus_import":"1","user_id":"2DF688A6-F248-11E8-B48F-1D18A9856A87","publication":"36th Conference on Neural Information Processing Systems","day":"15","external_id":{"arxiv":["2207.09239"]},"department":[{"_id":"FrLo"}],"abstract":[{"text":"Since out-of-distribution generalization is a generally ill-posed problem, various proxy targets (e.g., calibration, adversarial robustness, algorithmic corruptions, invariance across shifts) were studied across different research programs resulting in different recommendations. While sharing the same aspirational goal, these approaches have never been tested under the same experimental conditions on real data. In this paper, we take a unified view of previous work, highlighting message discrepancies that we address empirically, and providing recommendations on how to measure the robustness of a model and how to improve it. To this end, we collect 172 publicly available dataset pairs for training and out-of-distribution evaluation of accuracy, calibration error, adversarial attacks, environment invariance, and synthetic corruptions. We fine-tune over 31k networks, from nine different architectures in the many- and few-shot setting. Our findings confirm that in- and out-of-distribution accuracies tend to increase jointly, but show that their relation is largely dataset-dependent, and in general more nuanced and more complex than posited by previous, smaller scale studies.","lang":"eng"}]}