[RPNdA08] Pritam Roy, David Parker, Gethin Norman and Luca de Alfaro. Symbolic Magnifying Lens Abstraction in Markov Decision Processes. In Proc. 5th International Conference on Quantitative Evaluation of Systems (QEST'08), pages 103-112, IEEE CS Press, September 2008. [Presents a symbolic version of magnifying lens abstraction, implemented as an extension of PRISM.]
Downloads: pdf (229 KB), bib
Abstract. In this paper, we combine abstraction-refinement and symbolic techniques to fight the state-space explosion problem when model checking Markov Decision Processes (MDPs). The abstraction-refinement technique, called magnifying-lens abstraction (MLA), partitions the state space into regions and computes upper and lower bounds for reachability and safety properties on the regions, rather than the states. To compute such bounds, MLA iterates over the regions, analysing the concrete states of each region in turn, as if one were sliding a magnifying lens across the system to view the states. The algorithm adaptively refines the regions, using smaller regions where more detail is required, until the difference between the bounds is below a specified accuracy. The symbolic technique is based on Multi-Terminal Binary Decision Diagrams (MTBDDs), which have been used extensively to provide compact encodings of probabilistic models. We introduce a symbolic version of the MLA algorithm, called symbolic MLA, which combines the strengths of both techniques when verifying MDPs. An implementation of symbolic MLA in the probabilistic model checker PRISM and experimental results illustrating the advantages of our approach are presented.
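To make the region-based bound computation described in the abstract concrete, the following is a minimal explicit-state Python sketch of the magnifying-lens idea for maximum reachability probabilities. All names, the toy MDP, and the crude "split into singletons" refinement are illustrative assumptions, not the paper's algorithm as published; the actual implementation discussed in the paper is symbolic (MTBDD-based) inside PRISM and refines regions far more carefully.

```python
# Illustrative sketch only: explicit-state magnifying-lens-style bounds for
# max reachability in a tiny MDP. Not the paper's (symbolic) algorithm.

# Hypothetical encoding: state -> action -> list of (probability, successor).
mdp = {
    0: {"a": [(0.5, 1), (0.5, 2)], "b": [(1.0, 0)]},
    1: {"a": [(1.0, 3)]},
    2: {"a": [(0.5, 3), (0.5, 4)]},
    3: {"a": [(1.0, 3)]},   # target (absorbing)
    4: {"a": [(1.0, 4)]},   # sink
}
target = {3}

def magnify(mdp, target, states, outside, iters=500):
    """Value iteration restricted to one region's concrete states.
    Successors in other regions contribute outside(t), the current
    bound of the region containing t (the 'magnifying lens' step)."""
    v = {s: 1.0 if s in target else 0.0 for s in states}
    for _ in range(iters):
        for s in states:
            if s in target:
                continue
            v[s] = max(
                sum(p * (v[t] if t in v else outside(t)) for p, t in dist)
                for dist in mdp[s].values()
            )
    return v

def mla(mdp, target, regions, eps=1e-3, rounds=50):
    """Compute per-region lower/upper bounds, refining coarse regions."""
    regions = [list(r) for r in regions]
    while True:
        region_of = {s: r for r, ss in enumerate(regions) for s in ss}
        lo = [0.0] * len(regions)
        hi = [1.0] * len(regions)
        for _ in range(rounds):          # sweep the lens over all regions
            for r, ss in enumerate(regions):
                vlo = magnify(mdp, target, ss, lambda t: lo[region_of[t]])
                vhi = magnify(mdp, target, ss, lambda t: hi[region_of[t]])
                lo[r], hi[r] = min(vlo.values()), max(vhi.values())
        wide = {r for r in range(len(regions)) if hi[r] - lo[r] > eps}
        if not wide:
            return regions, lo, hi
        # Crude refinement for illustration: split coarse regions fully.
        new = []
        for r, ss in enumerate(regions):
            new.extend(([s] for s in ss) if r in wide and len(ss) > 1 else [ss])
        regions = new

regions, lo, hi = mla(mdp, target, [[0, 1, 2], [3], [4]])
```

On this toy model the initial region {0, 1, 2} mixes states with reachability values 0.75, 1 and 0.5, so its bound gap stays above eps and the sketch refines it until the bounds meet, mirroring (very loosely) MLA's adaptive refinement.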