Representation learning for out-of-distribution generalization in reinforcement learning

Conference Paper | Published | English
Author
Träuble, Frederik; Dittadi, Andrea; Wuthrich, Manuel; Widmaier, Felix; Gehler, Peter Vincent; Winther, Ole; Locatello, Francesco (ISTA); Bachem, Olivier; Schölkopf, Bernhard; Bauer, Stefan
Abstract
Learning data representations that are useful for various downstream tasks is a cornerstone of artificial intelligence. While existing methods are typically evaluated on downstream tasks such as classification or generative image quality, we propose to assess representations through their usefulness in downstream control tasks, such as reaching or pushing objects. By training over 10,000 reinforcement learning policies, we extensively evaluate to what extent different representation properties affect out-of-distribution (OOD) generalization. Finally, we demonstrate zero-shot transfer of these policies from simulation to the real world, without any domain randomization or fine-tuning. This paper aims to establish the first systematic characterization of the usefulness of learned representations for real-world OOD downstream tasks.
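The abstract summarizes an evaluation protocol: a representation model is pretrained and frozen, reinforcement learning policies are trained on top of its outputs in simulation, and the same policies are then evaluated on out-of-distribution (OOD) variations of the task. The sketch below illustrates that protocol in miniature; it is not the authors' code, and every name in it (FrozenEncoder, ToyReachEnv, evaluate, the linear "policy") is a hypothetical placeholder standing in for the paper's actual encoders, robotic environments, and trained policies.

```python
# Minimal, hypothetical sketch of the evaluation protocol described in the
# abstract: frozen pretrained encoder -> policy trained on its features ->
# evaluation on in-distribution vs. out-of-distribution (OOD) episodes.
import numpy as np


class FrozenEncoder:
    """Stand-in for a pretrained representation model; weights stay fixed."""
    def __init__(self, obs_dim, latent_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((obs_dim, latent_dim)) / np.sqrt(obs_dim)

    def encode(self, obs):
        return obs @ self.W  # the encoder is never updated during RL


class ToyReachEnv:
    """Toy one-step 'reaching' task: the observation contains a target
    position plus a nuisance factor (e.g. object colour); OOD episodes
    draw the nuisance factor from an unseen range."""
    def __init__(self, ood=False, seed=0):
        self.rng = np.random.default_rng(seed)
        self.ood = ood

    def reset(self):
        self.target = self.rng.uniform(-1, 1, size=2)
        nuisance = (self.rng.uniform(2, 3, size=2) if self.ood
                    else self.rng.uniform(0, 1, size=2))
        return np.concatenate([self.target, nuisance])

    def step(self, action):
        # Reward: negative distance between the commanded point and the target.
        return -float(np.linalg.norm(action - self.target))


def evaluate(policy, encoder, ood, episodes=200):
    """Average return of a fixed policy acting on encoded observations."""
    env = ToyReachEnv(ood=ood, seed=1)
    returns = [env.step(policy(encoder.encode(env.reset())))
               for _ in range(episodes)]
    return float(np.mean(returns))


if __name__ == "__main__":
    encoder = FrozenEncoder(obs_dim=4, latent_dim=8)
    # Placeholder "policy": a fixed linear read-out of the representation.
    # In the paper this would instead be a policy trained with RL.
    readout = np.random.default_rng(42).standard_normal((8, 2)) * 0.1
    policy = lambda z: z @ readout
    print("in-distribution return:", evaluate(policy, encoder, ood=False))
    print("OOD return:            ", evaluate(policy, encoder, ood=True))
```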
Publishing Year
2021
Date Published
2021-07-23
Proceedings Title
ICML 2021 Workshop on Unsupervised Reinforcement Learning
Conference
ICML: International Conference on Machine Learning
Conference Location
Virtual
Conference Date
2021-07-23 – 2021-07-23
Cite this

Träuble F, Dittadi A, Wuthrich M, et al. Representation learning for out-of-distribution generalization in reinforcement learning. In: ICML 2021 Workshop on Unsupervised Reinforcement Learning; 2021.
Träuble, F., Dittadi, A., Wuthrich, M., Widmaier, F., Gehler, P. V., Winther, O., … Bauer, S. (2021). Representation learning for out-of-distribution generalization in reinforcement learning. In ICML 2021 Workshop on Unsupervised Reinforcement Learning. Virtual.
Träuble, Frederik, Andrea Dittadi, Manuel Wuthrich, Felix Widmaier, Peter Vincent Gehler, Ole Winther, Francesco Locatello, Olivier Bachem, Bernhard Schölkopf, and Stefan Bauer. “Representation Learning for Out-of-Distribution Generalization in Reinforcement Learning.” In ICML 2021 Workshop on Unsupervised Reinforcement Learning, 2021.
F. Träuble et al., “Representation learning for out-of-distribution generalization in reinforcement learning,” in ICML 2021 Workshop on Unsupervised Reinforcement Learning, Virtual, 2021.
Träuble F, Dittadi A, Wuthrich M, Widmaier F, Gehler PV, Winther O, Locatello F, Bachem O, Schölkopf B, Bauer S. 2021. Representation learning for out-of-distribution generalization in reinforcement learning. ICML 2021 Workshop on Unsupervised Reinforcement Learning. ICML: International Conference on Machine Learning.
Träuble, Frederik, et al. “Representation Learning for Out-of-Distribution Generalization in Reinforcement Learning.” ICML 2021 Workshop on Unsupervised Reinforcement Learning, 2021.
