[BP22] Edoardo Bacci and David Parker. Verified Probabilistic Policies for Deep Reinforcement Learning. In Proc. 14th International Symposium on NASA Formal Methods (NFM'22), volume 13260 of LNCS, pages 193-212, Springer, May 2022. [pdf] [bib] [Presents techniques to produce formal guarantees on the safe execution of probabilistic policies for deep reinforcement learning, building on PRISM's model checking engines.]
Downloads: pdf (1.12 MB), bib
Notes: The original publication is available at link.springer.com.
Abstract. Deep reinforcement learning is an increasingly popular technique for synthesising policies to control an agent's interaction with its environment. There is also growing interest in formally verifying that such policies are correct and execute safely. Progress has been made in this area by building on existing work for verification of deep neural networks and of continuous-state dynamical systems. In this paper, we tackle the problem of verifying probabilistic policies for deep reinforcement learning, which are used to, for example, tackle adversarial environments, break symmetries and manage trade-offs. We propose an abstraction approach, based on interval Markov decision processes, that yields probabilistic guarantees on a policy's execution, and present techniques to build and solve these models using abstract interpretation, mixed-integer linear programming, entropy-based refinement and probabilistic model checking. We implement our approach and illustrate its effectiveness on a selection of reinforcement learning benchmarks.
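To give a flavour of the interval Markov decision process (IMDP) models the abstraction produces, below is a minimal, self-contained sketch of robust value iteration on an IMDP: transition probabilities are intervals [lo, hi], and we compute a lower bound on the probability of reaching a target set under the worst-case resolution of the intervals. This is only one ingredient of the paper's pipeline (which builds the IMDP via abstract interpretation and MILP and solves it with PRISM); the toy model, state names and helper functions here are all hypothetical illustrations, not the paper's implementation.

# Robust value iteration on an interval MDP (illustrative sketch).
def worst_case_dist(succs, values):
    """Choose a distribution within the intervals that minimises the
    expected value: low-value successors get as much mass as allowed.
    succs: list of (state, lo, hi); values: dict state -> value."""
    probs = {s: lo for s, lo, _ in succs}       # start at lower bounds
    remaining = 1.0 - sum(probs.values())       # mass left to allocate
    for s, lo, hi in sorted(succs, key=lambda t: values[t[0]]):
        extra = min(hi - lo, remaining)         # fill cheapest states first
        probs[s] += extra
        remaining -= extra
    return probs

def robust_value_iteration(imdp, target, eps=1e-8):
    """imdp: dict state -> {action: [(succ, lo, hi), ...]}.
    Returns lower bounds on max (over actions) min (over intervals)
    reachability of `target` from each state."""
    values = {s: (1.0 if s in target else 0.0) for s in imdp}
    while True:
        delta = 0.0
        for s in imdp:
            if s in target:
                continue
            best = 0.0
            for succs in imdp[s].values():      # maximise over actions
                dist = worst_case_dist(succs, values)
                best = max(best, sum(p * values[t] for t, p in dist.items()))
            delta = max(delta, abs(best - values[s]))
            values[s] = best
        if delta < eps:
            return values

# Hypothetical 3-state IMDP: from s0, action 'a' reaches the goal s2
# with probability in [0.6, 0.8], otherwise moves to s1 or stays in s0.
imdp = {
    "s0": {"a": [("s2", 0.6, 0.8), ("s1", 0.1, 0.3), ("s0", 0.0, 0.3)]},
    "s1": {"a": [("s1", 0.5, 1.0), ("s2", 0.0, 0.5)]},
    "s2": {},
}
print(robust_value_iteration(imdp, target={"s2"})["s0"])

Running this prints roughly 0.667: the adversarial resolution of the intervals pushes as much probability as allowed into the zero-value sink s1, so the guaranteed reachability from s0 is 0.6/0.9 = 2/3 rather than the optimistic 0.8.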
