---
OA_place: publisher
OA_type: hybrid
_id: '21661'
abstract:
- lang: eng
  text: Model checking undiscounted reachability and expected-reward properties on
    Markov decision processes (MDPs) is key for the verification of systems that
    act under uncertainty. Popular algorithms are policy iteration and variants of
    value iteration; in tool competitions, most participants rely on the latter. These
    algorithms generally need worst-case exponential time. However, the problem can
    equally be formulated as a linear programme, solvable in polynomial time. In this
    paper, we give a detailed overview of today’s state-of-the-art algorithms for
    MDP model checking with a focus on performance and correctness. We highlight their
    fundamental differences, and describe various optimizations and implementation
    variants. We experimentally compare floating-point and exact-arithmetic implementations
    of all algorithms on three benchmark sets using two probabilistic model checkers.
    Our results show that (optimistic) value iteration is a sensible default, but
    other algorithms are preferable in specific settings. This paper thereby provides
    a guide for MDP verification practitioners—tool builders and users alike.
acknowledgement: "This research was funded by the European Union’s Horizon 2020 research
  and innovation programme under Marie Skłodowska-Curie grant agreements 101008233
  (MISSION)\r\nand 101034413 (IST-BRIDGE), by the Interreg North Sea project STORM_SAFE,
  by a KI-Starter grant from the Ministerium für Kultur und Wissenschaft NRW, by NWO
  VENI grant no. 639.021.754, and by NWO VIDI grant VI.Vidi.223.110 (TruSTy). Experiments
  were performed with computing resources granted by RWTH Aachen University under
  project rwth1632."
article_processing_charge: Yes (in subscription journal)
article_type: original
author:
- first_name: Arnd
  full_name: Hartmanns, Arnd
  last_name: Hartmanns
- first_name: Sebastian
  full_name: Junges, Sebastian
  last_name: Junges
- first_name: Tim
  full_name: Quatmann, Tim
  last_name: Quatmann
- first_name: Maximilian
  full_name: Weininger, Maximilian
  id: 02ab0197-cc70-11ed-ab61-918e71f56881
  last_name: Weininger
  orcid: 0000-0002-0163-2152
citation:
  ama: Hartmanns A, Junges S, Quatmann T, Weininger M. The revised practitioner’s
    guide to MDP model checking algorithms. <i>International Journal on Software Tools
    for Technology Transfer</i>. 2026. doi:<a href="https://doi.org/10.1007/s10009-026-00848-y">10.1007/s10009-026-00848-y</a>
  apa: Hartmanns, A., Junges, S., Quatmann, T., &#38; Weininger, M. (2026). The revised
    practitioner’s guide to MDP model checking algorithms. <i>International Journal
    on Software Tools for Technology Transfer</i>. Springer Nature. <a href="https://doi.org/10.1007/s10009-026-00848-y">https://doi.org/10.1007/s10009-026-00848-y</a>
  chicago: Hartmanns, Arnd, Sebastian Junges, Tim Quatmann, and Maximilian Weininger.
    “The Revised Practitioner’s Guide to MDP Model Checking Algorithms.” <i>International
    Journal on Software Tools for Technology Transfer</i>. Springer Nature, 2026.
    <a href="https://doi.org/10.1007/s10009-026-00848-y">https://doi.org/10.1007/s10009-026-00848-y</a>.
  ieee: A. Hartmanns, S. Junges, T. Quatmann, and M. Weininger, “The revised practitioner’s
    guide to MDP model checking algorithms,” <i>International Journal on Software
    Tools for Technology Transfer</i>. Springer Nature, 2026.
  ista: Hartmanns A, Junges S, Quatmann T, Weininger M. 2026. The revised practitioner’s
    guide to MDP model checking algorithms. International Journal on Software Tools
    for Technology Transfer.
  mla: Hartmanns, Arnd, et al. “The Revised Practitioner’s Guide to MDP Model Checking
    Algorithms.” <i>International Journal on Software Tools for Technology Transfer</i>,
    Springer Nature, 2026, doi:<a href="https://doi.org/10.1007/s10009-026-00848-y">10.1007/s10009-026-00848-y</a>.
  short: A. Hartmanns, S. Junges, T. Quatmann, M. Weininger, International Journal
    on Software Tools for Technology Transfer (2026).
date_created: 2026-04-05T22:01:32Z
date_published: 2026-03-09T00:00:00Z
date_updated: 2026-04-07T09:52:54Z
day: '09'
ddc:
- '000'
department:
- _id: KrCh
doi: 10.1007/s10009-026-00848-y
ec_funded: 1
has_accepted_license: '1'
keyword:
- Quantitative model checking
- Markov decision process
- Linear programming
- Value iteration
- Policy iteration
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.1007/s10009-026-00848-y
month: '03'
oa: 1
oa_version: Published Version
project:
- _id: fc2ed2f7-9c52-11eb-aca3-c01059dda49c
  call_identifier: H2020
  grant_number: '101034413'
  name: 'IST-BRIDGE: International postdoctoral program'
publication: International Journal on Software Tools for Technology Transfer
publication_identifier:
  eissn:
  - 1433-2787
  issn:
  - 1433-2779
publication_status: epub_ahead
publisher: Springer Nature
quality_controlled: '1'
related_material:
  record:
  - id: '21668'
    relation: software
    status: public
scopus_import: '1'
status: public
title: The revised practitioner’s guide to MDP model checking algorithms
tmp:
  image: /images/cc_by.png
  legal_code_url: https://creativecommons.org/licenses/by/4.0/legalcode
  name: Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)
  short: CC BY (4.0)
type: journal_article
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2026'
...
---
_id: '5549'
abstract:
- lang: eng
  text: "This repository contains the experimental part of the CAV 2015 publication
    Counterexample Explanation by Learning Small Strategies in Markov Decision Processes.\r\nWe
    extended the probabilistic model checker PRISM to represent strategies of Markov
    Decision Processes as Decision Trees.\r\nThe archive contains a Java executable
    version of the extended tool (prism_dectree.jar) together with a few examples
    of the PRISM benchmark library.\r\nTo execute the program, please have a look
    at the README.txt, which provides instructions and further information on the
    archive.\r\nThe archive contains scripts that (if run often enough) reproduce
    the data presented in the publication."
article_processing_charge: No
author:
- first_name: Andreas
  full_name: Fellner, Andreas
  id: 42BABFB4-F248-11E8-B48F-1D18A9856A87
  last_name: Fellner
citation:
  ama: 'Fellner A. Experimental part of CAV 2015 publication: Counterexample Explanation
    by Learning Small Strategies in Markov Decision Processes. 2015. doi:<a href="https://doi.org/10.15479/AT:ISTA:28">10.15479/AT:ISTA:28</a>'
  apa: 'Fellner, A. (2015). Experimental part of CAV 2015 publication: Counterexample
    Explanation by Learning Small Strategies in Markov Decision Processes. Institute
    of Science and Technology Austria. <a href="https://doi.org/10.15479/AT:ISTA:28">https://doi.org/10.15479/AT:ISTA:28</a>'
  chicago: 'Fellner, Andreas. “Experimental Part of CAV 2015 Publication: Counterexample
    Explanation by Learning Small Strategies in Markov Decision Processes.” Institute
    of Science and Technology Austria, 2015. <a href="https://doi.org/10.15479/AT:ISTA:28">https://doi.org/10.15479/AT:ISTA:28</a>.'
  ieee: 'A. Fellner, “Experimental part of CAV 2015 publication: Counterexample Explanation
    by Learning Small Strategies in Markov Decision Processes.” Institute of Science
    and Technology Austria, 2015.'
  ista: 'Fellner A. 2015. Experimental part of CAV 2015 publication: Counterexample
    Explanation by Learning Small Strategies in Markov Decision Processes, Institute
    of Science and Technology Austria, <a href="https://doi.org/10.15479/AT:ISTA:28">10.15479/AT:ISTA:28</a>.'
  mla: 'Fellner, Andreas. <i>Experimental Part of CAV 2015 Publication: Counterexample
    Explanation by Learning Small Strategies in Markov Decision Processes</i>. Institute
    of Science and Technology Austria, 2015, doi:<a href="https://doi.org/10.15479/AT:ISTA:28">10.15479/AT:ISTA:28</a>.'
  short: A. Fellner, (2015).
contributor:
- first_name: Jan
  id: 44CEF464-F248-11E8-B48F-1D18A9856A87
  last_name: Kretinsky
datarep_id: '28'
date_created: 2018-12-12T12:31:29Z
date_published: 2015-08-13T00:00:00Z
date_updated: 2025-09-23T08:23:15Z
day: '13'
ddc:
- '004'
department:
- _id: KrCh
- _id: ToHe
doi: 10.15479/AT:ISTA:28
ec_funded: 1
file:
- access_level: open_access
  checksum: b8bcb43c0893023cda66c1b69c16ac62
  content_type: application/zip
  creator: system
  date_created: 2018-12-12T13:02:31Z
  date_updated: 2020-07-14T12:47:00Z
  file_id: '5597'
  file_name: IST-2015-28-v1+2_Fellner_DataRep.zip
  file_size: 49557109
  relation: main_file
file_date_updated: 2020-07-14T12:47:00Z
has_accepted_license: '1'
keyword:
- Markov Decision Process
- Decision Tree
- Probabilistic Verification
- Counterexample Explanation
license: https://creativecommons.org/publicdomain/zero/1.0/
month: '08'
oa: 1
oa_version: Published Version
project:
- _id: 2581B60A-B435-11E9-9278-68D0E5697425
  call_identifier: FP7
  grant_number: '279307'
  name: 'Quantitative Graph Games: Theory and Applications'
- _id: 25832EC2-B435-11E9-9278-68D0E5697425
  call_identifier: FWF
  grant_number: S 11407_N23
  name: Rigorous Systems Engineering
publisher: Institute of Science and Technology Austria
publist_id: '5564'
related_material:
  record:
  - id: '1603'
    relation: popular_science
    status: public
status: public
title: 'Experimental part of CAV 2015 publication: Counterexample Explanation by Learning
  Small Strategies in Markov Decision Processes'
tmp:
  image: /images/cc_0.png
  legal_code_url: https://creativecommons.org/publicdomain/zero/1.0/legalcode
  name: Creative Commons Public Domain Dedication (CC0 1.0)
  short: CC0 (1.0)
type: research_data
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2015'
...
