{"scopus_import":"1","keyword":["Management Science and Operations Research","General Mathematics","Computer Science Applications"],"article_processing_charge":"No","isi":1,"acknowledgement":"Partially supported by Austrian Science Fund (FWF) NFN Grant No RiSE/SHiNE S11407, by CONICYT Chile through grant PII 20150140, and by ECOS-CONICYT through grant C15E03.\r\n","date_updated":"2023-09-05T13:16:11Z","month":"02","title":"Finite-memory strategies in POMDPs with long-run average objectives","type":"journal_article","citation":{"chicago":"Chatterjee, Krishnendu, Raimundo J Saona Urmeneta, and Bruno Ziliotto. “Finite-Memory Strategies in POMDPs with Long-Run Average Objectives.” Mathematics of Operations Research. Institute for Operations Research and the Management Sciences, 2022. https://doi.org/10.1287/moor.2020.1116.","apa":"Chatterjee, K., Saona Urmeneta, R. J., & Ziliotto, B. (2022). Finite-memory strategies in POMDPs with long-run average objectives. Mathematics of Operations Research. Institute for Operations Research and the Management Sciences. https://doi.org/10.1287/moor.2020.1116","short":"K. Chatterjee, R.J. Saona Urmeneta, B. Ziliotto, Mathematics of Operations Research 47 (2022) 100–119.","ista":"Chatterjee K, Saona Urmeneta RJ, Ziliotto B. 2022. Finite-memory strategies in POMDPs with long-run average objectives. Mathematics of Operations Research. 47(1), 100–119.","ieee":"K. Chatterjee, R. J. Saona Urmeneta, and B. Ziliotto, “Finite-memory strategies in POMDPs with long-run average objectives,” Mathematics of Operations Research, vol. 47, no. 1. Institute for Operations Research and the Management Sciences, pp. 100–119, 2022.","ama":"Chatterjee K, Saona Urmeneta RJ, Ziliotto B. Finite-memory strategies in POMDPs with long-run average objectives. Mathematics of Operations Research. 2022;47(1):100-119. doi:10.1287/moor.2020.1116","mla":"Chatterjee, Krishnendu, et al. “Finite-Memory Strategies in POMDPs with Long-Run Average Objectives.” Mathematics of Operations Research, vol. 47, no. 1, Institute for Operations Research and the Management Sciences, 2022, pp. 100–19, doi:10.1287/moor.2020.1116."},"date_created":"2021-04-08T09:33:31Z","article_type":"original","status":"public","department":[{"_id":"GradSch"},{"_id":"KrCh"}],"oa_version":"Preprint","publication_identifier":{"eissn":["1526-5471"],"issn":["0364-765X"]},"abstract":[{"lang":"eng","text":"Partially observable Markov decision processes (POMDPs) are standard models for dynamic systems with probabilistic and nondeterministic behaviour in uncertain environments. We prove that in POMDPs with long-run average objective, the decision maker has approximately optimal strategies with finite memory. This implies notably that approximating the long-run value is recursively enumerable, as well as a weak continuity property of the value with respect to the transition function. 
"}],"year":"2022","issue":"1","publisher":"Institute for Operations Research and the Management Sciences","project":[{"_id":"25863FF4-B435-11E9-9278-68D0E5697425","name":"Game Theory","grant_number":"S11407","call_identifier":"FWF"}],"external_id":{"arxiv":["1904.13360"],"isi":["000731918100001"]},"_id":"9311","volume":47,"author":[{"first_name":"Krishnendu","full_name":"Chatterjee, Krishnendu","last_name":"Chatterjee","orcid":"0000-0002-4561-241X","id":"2E5DCA20-F248-11E8-B48F-1D18A9856A87"},{"first_name":"Raimundo J","full_name":"Saona Urmeneta, Raimundo J","last_name":"Saona Urmeneta","orcid":"0000-0001-5103-038X","id":"BD1DF4C4-D767-11E9-B658-BC13E6697425"},{"first_name":"Bruno","full_name":"Ziliotto, Bruno","last_name":"Ziliotto"}],"doi":"10.1287/moor.2020.1116","day":"01","publication_status":"published","publication":"Mathematics of Operations Research","date_published":"2022-02-01T00:00:00Z","user_id":"c635000d-4b10-11ee-a964-aac5a93f6ac1","page":"100-119","main_file_link":[{"url":"https://arxiv.org/abs/1904.13360","open_access":"1"}],"oa":1,"language":[{"iso":"eng"}],"quality_controlled":"1","intvolume":" 47"}