Notes:
This is an extended version of [KNP06b].
Abstract.
In this paper we present a novel abstraction technique for Markov decision processes (MDPs),
which are widely used for modelling systems that exhibit both probabilistic and nondeterministic behaviour.
In the field of model checking, abstraction has proved an extremely successful tool to combat the state-space explosion problem.
In the probabilistic setting, however, little practical progress has been made in this area.
We propose an abstraction method for MDPs based on stochastic two-player games.
The key idea behind this approach is to maintain a separation between nondeterminism
present in the original MDP and nondeterminism introduced through abstraction,
each type being represented by a different player in the game.
Crucially, this allows us to obtain distinct lower and upper bounds for both
the best- and worst-case performance (minimum and maximum probabilities) of the MDP.
We have implemented our techniques and illustrate their practical utility by applying them to a
quantitative analysis of the Zeroconf dynamic network configuration protocol.
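The abstraction idea described above can be illustrated with a small sketch. The code below is a hypothetical example, not the paper's implementation: it runs value iteration for reachability probabilities on a toy stochastic two-player game, where player 1 resolves the nondeterminism introduced by abstraction and player 2 resolves the nondeterminism of the original MDP. Fixing player 2's objective (minimise or maximise) and letting player 1 minimise or maximise yields the lower and upper bounds, respectively. The game structure, state names, and iteration count are all illustrative assumptions.

```python
def game_value(game, target, opt1, opt2, iters=100):
    # game[s]: list of player-1 moves; each move is a list of probability
    # distributions (dicts successor -> probability) among which player 2
    # chooses.  opt1/opt2 (min or max) resolve the two players' choices.
    vals = {s: 1.0 if s in target else 0.0 for s in game}
    for _ in range(iters):
        for s, moves in game.items():
            if s in target or not moves:
                continue  # target states stay 1, absorbing non-targets stay 0
            vals[s] = opt1(
                opt2(sum(p * vals[t] for t, p in dist.items()) for dist in move)
                for move in moves
            )
    return vals

# Hypothetical abstract game with a single abstract state 's0':
game = {
    's0': [
        [{'goal': 0.5, 'fail': 0.5}],    # player-1 move with one distribution
        [{'goal': 1.0}, {'fail': 1.0}],  # player-1 move where player 2 picks
    ],
    'goal': [], 'fail': [],
}
target = {'goal'}

# Bounds on the maximum reachability probability of the original MDP:
lb_max = game_value(game, target, min, max)['s0']  # player 1 minimises
ub_max = game_value(game, target, max, max)['s0']  # player 1 maximises
# Bounds on the minimum reachability probability:
lb_min = game_value(game, target, min, min)['s0']
ub_min = game_value(game, target, max, min)['s0']
```

In this toy game the true maximum probability of the concrete MDP is guaranteed to lie in [lb_max, ub_max] and the minimum in [lb_min, ub_min]; keeping the two kinds of nondeterminism separate is what makes these intervals meaningful, since collapsing both into one player would conflate the two bounds.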