PRISM Benchmark Suite

The PRISM benchmark suite is a collection of PRISM models and associated properties that can be used for benchmarking or testing new probabilistic model checking implementations and techniques. These benchmarks are already widely used in the verification community for this purpose.

Models and their associated properties are grouped by model type.

The benchmarks themselves are hosted on GitHub at https://github.com/prismmodelchecker/prism-benchmarks; you can check out the repository or download it as a zip file.

PRISM includes useful scripts for automating benchmarking, notably prism-auto and prism-log-extract. For example, these commands:

git clone https://github.com/prismmodelchecker/prism-benchmarks
prism-auto prism-benchmarks/models/dtmcs --filter-models "states<10000" --args-list "-pow,-jac,-gs" --log logs --log-subdir
prism-log-extract logs --groupby=log_dir --groupkey=benchmark --fields=time_check > times.csv

run PRISM on all DTMC benchmark instances with fewer than 10,000 states using three different numerical solution methods (-pow, -jac and -gs, i.e. Power, Jacobi and Gauss-Seidel iteration) and then collate the run times into a CSV file for easy analysis.
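As a sketch of the kind of analysis this CSV enables, the snippet below picks out the fastest method per benchmark. The column names (log_dir, benchmark, time_check) are assumptions inferred from the --groupby, --groupkey and --fields options above, and the sample data is made up for illustration; a real times.csv would have one row per run.

```python
import csv
import io

# Hypothetical sample of the CSV produced by prism-log-extract;
# values and column names are illustrative assumptions only.
sample_csv = """\
log_dir,benchmark,time_check
pow,brp,1.92
jac,brp,1.40
gs,brp,0.87
pow,crowds,3.10
jac,crowds,2.55
gs,crowds,2.60
"""

def fastest_methods(text):
    """Return, for each benchmark, the method (log_dir) with the lowest time_check."""
    best = {}
    for row in csv.DictReader(io.StringIO(text)):
        name, t = row["benchmark"], float(row["time_check"])
        if name not in best or t < best[name][1]:
            best[name] = (row["log_dir"], t)
    return {bench: method for bench, (method, _) in best.items()}

print(fastest_methods(sample_csv))  # prints {'brp': 'gs', 'crowds': 'jac'}
```

For a real run, replace sample_csv with the contents of times.csv (e.g. open("times.csv").read()).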

See the benchmarking section of the PRISM Developer Resources for more details.

If you need the benchmarks in another format, don't forget that PRISM can export models in various formats.

Comments or suggestions regarding this benchmark set, as well as contributed models, are welcome.

To cite the PRISM benchmark suite, please use:

The contents of the PRISM benchmark suite are distributed under the CC-BY 4.0 license: you are free to copy, edit and redistribute it, but please give appropriate credit.