{"status":"public","department":[{"_id":"FrLo"}],"month":"04","date_updated":"2023-09-13T12:23:03Z","article_number":"1804.11130","publication_status":"submitted","day":"30","author":[{"id":"26cfd52f-2483-11ee-8040-88983bcc06d4","first_name":"Francesco","orcid":"0000-0002-4850-0683","last_name":"Locatello","full_name":"Locatello, Francesco"},{"full_name":"Vincent, Damien","last_name":"Vincent","first_name":"Damien"},{"last_name":"Tolstikhin","full_name":"Tolstikhin, Ilya","first_name":"Ilya"},{"first_name":"Gunnar","full_name":"Rätsch, Gunnar","last_name":"Rätsch"},{"first_name":"Sylvain","last_name":"Gelly","full_name":"Gelly, Sylvain"},{"first_name":"Bernhard","full_name":"Schölkopf, Bernhard","last_name":"Schölkopf"}],"language":[{"iso":"eng"}],"year":"2018","external_id":{"arxiv":["1804.11130"]},"date_created":"2023-09-13T12:20:49Z","user_id":"2DF688A6-F248-11E8-B48F-1D18A9856A87","oa":1,"publication":"arXiv","oa_version":"Preprint","doi":"10.48550/arXiv.1804.11130","abstract":[{"lang":"eng","text":"A common assumption in causal modeling posits that the data is generated by a\r\nset of independent mechanisms, and algorithms should aim to recover this\r\nstructure. Standard unsupervised learning, however, is often concerned with\r\ntraining a single model to capture the overall distribution or aspects thereof.\r\nInspired by clustering approaches, we consider mixtures of implicit generative\r\nmodels that ``disentangle'' the independent generative mechanisms underlying\r\nthe data. Relying on an additional set of discriminators, we propose a\r\ncompetitive training procedure in which the models only need to capture the\r\nportion of the data distribution from which they can produce realistic samples.\r\nAs a by-product, each model is simpler and faster to train. We empirically show\r\nthat our approach splits the training distribution in a sensible way and\r\nincreases the quality of the generated samples."}],"main_file_link":[{"url":"https://doi.org/10.48550/arXiv.1804.11130","open_access":"1"}],"_id":"14327","citation":{"ieee":"F. Locatello, D. Vincent, I. Tolstikhin, G. Rätsch, S. Gelly, and B. Schölkopf, “Competitive training of mixtures of independent deep generative models,” arXiv. .","mla":"Locatello, Francesco, et al. “Competitive Training of Mixtures of Independent Deep Generative Models.” ArXiv, 1804.11130, doi:10.48550/arXiv.1804.11130.","ista":"Locatello F, Vincent D, Tolstikhin I, Rätsch G, Gelly S, Schölkopf B. Competitive training of mixtures of independent deep generative models. arXiv, 1804.11130.","ama":"Locatello F, Vincent D, Tolstikhin I, Rätsch G, Gelly S, Schölkopf B. Competitive training of mixtures of independent deep generative models. arXiv. doi:10.48550/arXiv.1804.11130","apa":"Locatello, F., Vincent, D., Tolstikhin, I., Rätsch, G., Gelly, S., & Schölkopf, B. (n.d.). Competitive training of mixtures of independent deep generative models. arXiv. https://doi.org/10.48550/arXiv.1804.11130","chicago":"Locatello, Francesco, Damien Vincent, Ilya Tolstikhin, Gunnar Rätsch, Sylvain Gelly, and Bernhard Schölkopf. “Competitive Training of Mixtures of Independent Deep Generative Models.” ArXiv, n.d. https://doi.org/10.48550/arXiv.1804.11130.","short":"F. Locatello, D. Vincent, I. Tolstikhin, G. Rätsch, S. Gelly, B. Schölkopf, ArXiv (n.d.)."},"type":"preprint","extern":"1","article_processing_charge":"No","date_published":"2018-04-30T00:00:00Z","title":"Competitive training of mixtures of independent deep generative models"}