[HHH+19] Ernst Moritz Hahn, Arnd Hartmanns, Christian Hensel, Michaela Klauck, Joachim Klein, Jan Křetínský, David Parker, Tim Quatmann, Enno Ruijters and Marcel Steinmetz. The 2019 Comparison of Tools for the Analysis of Quantitative Formal Models (QComp 2019 Competition Report). In Proc. 25th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS'19), volume 11429 of LNCS, pages 69-92, Springer, April 2019. [Summarises the first Quantitative Formal Models competition (QComp), featuring PRISM and 8 other tools.]
Downloads: pdf (486 KB), bib
Notes: The original publication is available at link.springer.com.
Abstract. Quantitative formal models capture probabilistic behaviour, real-time aspects, or general continuous dynamics. A number of tools support their automatic analysis with respect to dependability or performance properties. QComp 2019 is the first, friendly competition among such tools. It focuses on stochastic formalisms from Markov chains to probabilistic timed automata specified in the Jani model exchange format, and on probabilistic reachability, expected-reward, and steady-state properties. QComp draws its benchmarks from the new Quantitative Verification Benchmark Set. Participating tools, which include probabilistic model checkers and planners as well as simulation-based tools, are evaluated in terms of performance, versatility, and usability. In this paper, we report on the challenges in setting up a quantitative verification competition, present the results of QComp 2019, summarise the lessons learned, and provide an outlook on the features of the next edition of QComp.
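To illustrate the three property classes the competition evaluates (probabilistic reachability, expected reward, and steady state), the sketch below shows a small discrete-time Markov chain written in the PRISM modelling language, one of the input formats that can be translated to the Jani model exchange format used by QComp. The model, identifiers, and property queries are hypothetical illustrations, not benchmarks from the Quantitative Verification Benchmark Set.

// A hypothetical three-state DTMC (illustrative only)
dtmc

module toy
  s : [0..2] init 0;                        // 0 = start, 1 = goal, 2 = fail
  [] s=0 -> 0.9 : (s'=1) + 0.1 : (s'=2);    // one probabilistic step
  [] s=1 -> (s'=1);                         // goal is absorbing
  [] s=2 -> (s'=2);                         // fail is absorbing
endmodule

label "goal" = s=1;

rewards "steps"                             // reward structure: one unit per transition
  true : 1;
endrewards

// Example queries, one per property class considered in QComp 2019:
// P=? [ F "goal" ]              probabilistic reachability
// R{"steps"}=? [ F "goal" ]     expected reward accumulated until reaching the goal
// S=? [ "goal" ]                long-run (steady-state) probability of the goal

Tools participating in QComp either accept such models natively or consume their Jani translations, which is what allows the same benchmarks to be run across model checkers, planners, and simulation-based tools.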
