Specification of properties to be checked in PRISM-games is based on the temporal logic rPATL [CFK+12, CFK+13b], which combines elements of the probabilistic temporal logic PCTL for MDPs, the logic ATL for games and agent-based systems, and extensions of PRISM's reward operators. A typical rPATL property is:

```
<<1>> P>=0.99 [ F<=5 c=2 ]
```

which states that player 1 has a strategy to ensure that the probability of reaching a state satisfying `c=2` within 5 time-steps is at least 0.99, regardless of the strategies of any other player.
In general, `<<...>>` contains a *coalition* of players for which strategies are required.
Some further (generic) example properties of the P operator are listed below:

```
// Players 1, 2 and 3 can guarantee that the probability
// of reaching a "b"-state, whilst passing only through "a"-states, is > 0.5
<<1,2,3>> P>0.5 [ "a" U "b" ]

// The maximum probability with which players p1 and p2
// can guarantee that an "end"-state is reached
<<p1,p2>> Pmax=? [ F "end" ]

// Players 4 and 5 can ensure that the probability
// of reaching an "end"-state within 100 time-steps is < 0.95
<<4,5>> P<0.95 [ F<=100 "end" ]
```

Specifications of rPATL coalitions (within `<<...>>`) can use either
the player names or their indices, separated by commas.
Player numbering is determined by the order of their specification in the model file, starting from 1.
To specify the empty set of players, use `<<>>`.
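For example, a model file might declare players as follows (a hypothetical sketch; the player, module and action names are illustrative, not taken from any particular case study):

```
// Players declared in this order, so "p1" gets index 1 and "p2" gets index 2
player p1
  controller, [move]
endplayer
player p2
  environment
endplayer
```

With these declarations, `<<p1,p2>>` and `<<1,2>>` denote the same coalition, and `<<p2>>` is equivalent to `<<2>>`.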

PRISM-games also has a variety of *reward*-based properties
(see [CFK+12, CFK+13b] for details):

```
// The maximum expected value of reward "r" accumulated before
// reaching "success" that can be guaranteed by player p1
<<p1>> R{"r"}max=? [ F "success" ]

// Under any possible strategy (for all players),
// the expected reward accumulated before reaching "finished" is > 100
<<>> R>100 [ F0 "finished" ]

// The minimum expected reward accumulated before reaching "fail"
// that can be guaranteed by players p2 and p5
<<p2,p5>> Rmin=? [ Fc "fail" ]
```

Version 2.0 of the tool adds further reward-based properties:

- expected total rewards,
- expected mean-payoff,
- ratios of expected mean-payoffs, and
- almost-sure satisfaction of mean-payoff or ratio rewards.

To use expected total rewards, the game must be a *stopping game*
(i.e., terminal states with zero reward must be reached almost surely under all strategies).
For expected long-run objectives or ratio objectives, games must be *controllable multichain*
(i.e., the sets of states that can occur in any maximal end component are almost surely reachable).

Examples of these properties are:

```
// The expected total value for reward structure "r"
// that can be guaranteed by player 1 is at least 2.5
<<1>> R{"r"}>=2.5 [ C ]

// The expected long-run average (mean payoff) value for reward structure "r"
// that can be guaranteed by player 1 is at least 2.5
<<1>> R{"r"}>=2.5 [ S ]

// The ratio of expected long-run average (mean payoff) values for reward
// structures "r" and "c" that can be guaranteed by player 1 is at least 0.5
<<1>> R{"r"}/{"c"}>=0.5 [ S ]

// Player 1 can guarantee that, with probability 1, the expected long-run
// average (mean payoff) value for reward structure "r" is at least 1.5
<<1>> P>=1 [ R(path){"r"}>=1.5 [ S ] ]

// Player 1 can guarantee that, with probability 1,
// the ratio of expected long-run average (mean payoff) values
// for reward structures "r" and "c" is at least 0.5
<<1>> P>=1 [ R(path){"r"}/{"c"}>=0.5 [ S ] ]
```

PRISM-games 2.0 also adds *multi-objective* properties,
in which multiple objectives can be combined using a Boolean expression comprising the operators
`&` (conjunction), `|` (disjunction), `!` (negation) and `=>` (implication).
Currently, the objectives combined together must be of the same type (e.g. expected total reward).
For almost-sure satisfaction properties,
objectives must be combined into a conjunction, not an arbitrary Boolean expression.

```
// Player 1 can guarantee that the expected total reward values
// for reward structures "r1" and "r2" are at least v1 and v2, respectively
<<1>> ( R{"r1"}>=v1 [ C ] & R{"r2"}>=v2 [ C ] )

// Player 1 can guarantee that, whenever the ratio of
// expected long-run average values for "r1" and "c" is at most v1,
// then the ratio for "r2" and "c" is at least v2
<<1>> ( R{"r1"}/{"c"}<=v1 [ S ] => R{"r2"}/{"c"}>=v2 [ S ] )
```

Finally, there is support for *exact* value queries for probability and cumulative reward properties,
based on the techniques in [CFK+12, CFK+13b].

```
// Players agent1 and scheduler have a strategy to ensure
// that the probability of reaching "end" is exactly 0.35
<<agent1,scheduler>> P=0.35 [ F "end" ]

// Players p2 and p5 have a strategy to ensure that the
// expected reward accumulated before reaching "success" is exactly 100
<<p2,p5>> R=100 [ Fc "success" ]
```