{"intvolume":" 286","publisher":"ML Research Press","tmp":{"short":"CC BY (4.0)","image":"/images/cc_by.png","name":"Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)","legal_code_url":"https://creativecommons.org/licenses/by/4.0/legalcode"},"file":[{"creator":"dernst","file_name":"2025_UAI_Asadi.pdf","checksum":"4180c81bb6ed3b4f5c7a8e48d06520c6","file_size":317097,"file_id":"20313","relation":"main_file","date_updated":"2025-09-09T06:27:59Z","access_level":"open_access","success":1,"content_type":"application/pdf","date_created":"2025-09-09T06:27:59Z"}],"abstract":[{"text":"Deterministic Markov Decision Processes (DMDPs) are a mathematical framework for decision-making in which each action deterministically determines the outcome and the future available actions. A DMDP can be viewed as a finite directed weighted graph where, in each step, the controller chooses an outgoing edge. An objective is a measurable function on runs (or infinite trajectories) of the DMDP, and the value of an objective is the maximal cumulative reward (or weight) that the controller can guarantee. We consider the classical mean-payoff (aka limit-average) objective, which is a basic and fundamental objective.\r\n\r\nHoward's policy iteration algorithm is a popular method for solving DMDPs with mean-payoff objectives. Although Howard's algorithm performs well in practice, as experimental studies suggest, the best known upper bound on its number of iterations is exponential, and the best known lower bound is as follows: for input size I, the algorithm requires Ω̃(√I) iterations, where Ω̃ hides the poly-logarithmic factors; i.e., the current lower bound on the number of iterations is sub-linear in the input size.\r\n\r\nOur main result is an improved lower bound for this fundamental algorithm: we show that for input size I, the algorithm requires Ω̃(I) iterations.","lang":"eng"}],"has_accepted_license":"1","ec_funded":1,"volume":286,"publication":"The 41st Conference on Uncertainty in Artificial Intelligence","arxiv":1,"oa_version":"Published Version","OA_place":"publisher","acknowledgement":"This research was partially supported by the ERC CoG 863818 (ForM-SMArt) grant and Austrian Science Fund (FWF) 10.55776/COE12.\r\n","conference":{"location":"Rio de Janeiro, Brazil","start_date":"2025-07-21","name":"UAI: Conference on Uncertainty in Artificial Intelligence","end_date":"2025-07-25"},"corr_author":"1","month":"01","scopus_import":"1","OA_type":"diamond","publication_status":"published","oa":1,"alternative_title":["PMLR"],"_id":"20299","file_date_updated":"2025-09-09T06:27:59Z","user_id":"2DF688A6-F248-11E8-B48F-1D18A9856A87","external_id":{"arxiv":["2506.12254"]},"title":"Lower bound on Howard policy iteration for deterministic Markov Decision Processes","citation":{"mla":"Asadi, Ali, et al. “Lower Bound on Howard Policy Iteration for Deterministic Markov Decision Processes.” The 41st Conference on Uncertainty in Artificial Intelligence, vol. 286, ML Research Press, 2025, pp. 223–32.","chicago":"Asadi, Ali, Krishnendu Chatterjee, and Jakob De Raaij. “Lower Bound on Howard Policy Iteration for Deterministic Markov Decision Processes.” In The 41st Conference on Uncertainty in Artificial Intelligence, 286:223–32. ML Research Press, 2025.","ieee":"A. Asadi, K. Chatterjee, and J. De Raaij, “Lower bound on Howard policy iteration for deterministic Markov Decision Processes,” in The 41st Conference on Uncertainty in Artificial Intelligence, Rio de Janeiro, Brazil, 2025, vol. 286, pp. 223–232.","apa":"Asadi, A., Chatterjee, K., & De Raaij, J. (2025). Lower bound on Howard policy iteration for deterministic Markov Decision Processes. In The 41st Conference on Uncertainty in Artificial Intelligence (Vol. 286, pp. 223–232). Rio de Janeiro, Brazil: ML Research Press.","ama":"Asadi A, Chatterjee K, De Raaij J. Lower bound on Howard policy iteration for deterministic Markov Decision Processes. In: The 41st Conference on Uncertainty in Artificial Intelligence. Vol 286. ML Research Press; 2025:223-232.","ista":"Asadi A, Chatterjee K, De Raaij J. 2025. Lower bound on Howard policy iteration for deterministic Markov Decision Processes. The 41st Conference on Uncertainty in Artificial Intelligence. UAI: Conference on Uncertainty in Artificial Intelligence, PMLR, vol. 286, 223–232.","short":"A. Asadi, K. Chatterjee, J. De Raaij, in: The 41st Conference on Uncertainty in Artificial Intelligence, ML Research Press, 2025, pp. 223–232."},"date_published":"2025-01-01T00:00:00Z","publication_identifier":{"eissn":["2640-3498"]},"author":[{"first_name":"Ali","id":"02d96aae-000e-11ec-b801-cadd0a5eefbb","full_name":"Asadi, Ali","last_name":"Asadi"},{"last_name":"Chatterjee","full_name":"Chatterjee, Krishnendu","first_name":"Krishnendu","id":"2E5DCA20-F248-11E8-B48F-1D18A9856A87","orcid":"0000-0002-4561-241X"},{"first_name":"Jakob","full_name":"De Raaij, Jakob","last_name":"De Raaij"}],"type":"conference","article_processing_charge":"No","project":[{"grant_number":"863818","name":"Formal Methods for Stochastic Models: Algorithms and Applications","call_identifier":"H2020","_id":"0599E47C-7A3F-11EA-A408-12923DDC885E"}],"day":"01","language":[{"iso":"eng"}],"status":"public","page":"223-232","quality_controlled":"1","year":"2025","date_created":"2025-09-07T22:01:34Z","date_updated":"2025-09-09T06:31:20Z","ddc":["000"],"department":[{"_id":"KrCh"},{"_id":"GradSch"}],"license":"https://creativecommons.org/licenses/by/4.0/"}