Analyzing Decision Strategies


We can analyze fixed decision strategies $Z$ on an influence diagram $G$, such as those resulting from the optimization, by generating the active paths $\mathbf{S}^Z.$

Active Paths

We can generate the active paths $\mathbf{s}\in\mathbf{S}^Z$ as follows.

  1. Initialize a path $\mathbf{s}$ of length $n$ with undefined values.
  2. Fill in the chance states $\mathbf{s}_j\in S_j$ for all $j\in C,$ iterating over every combination of chance states.
  3. In increasing order of decision nodes $j\in D$, fill in the decision states by evaluating the decision strategy $\mathbf{s}_j=Z_j(\mathbf{s}_{I(j)}).$

The path probability of every active path equals the upper bound

\[\mathbb{P}(\mathbf{s}\mid Z)=p(\mathbf{s}), \quad \forall \mathbf{s}\in\mathbf{S}^Z.\]

We exclude inactive paths from the analysis because their path probabilities are zero.
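The three generation steps can be sketched in Python. This is a minimal sketch under an assumed representation: chance nodes map to lists of their possible states, and each decision node carries its information set $I(j)$ and a callable strategy $Z_j$; none of these names come from a concrete library.

```python
from itertools import product

def active_paths(n, chance_states, decision_nodes):
    """Generate all active paths s in S^Z.

    n              -- number of nodes (path length)
    chance_states  -- dict {j: [possible states of chance node j]}
    decision_nodes -- list of (j, I_j, Z_j) in increasing order of j, where
                      I_j is the information set of node j and Z_j maps the
                      tuple s_{I(j)} to the chosen decision state s_j
    """
    chance_nodes = sorted(chance_states)
    # Step 2: iterate over every combination of chance states.
    for combo in product(*(chance_states[j] for j in chance_nodes)):
        s = [None] * n                      # Step 1: undefined values
        for j, s_j in zip(chance_nodes, combo):
            s[j] = s_j
        # Step 3: fill decision states in increasing order of j.
        for j, I_j, Z_j in decision_nodes:
            s[j] = Z_j(tuple(s[i] for i in I_j))
        yield tuple(s)
```

For example, with one chance node $0$ with states $\{0,1\}$ and one decision node $1$ with $I(1)=\{0\}$ and a strategy that copies the observed state, the active paths are $(0,0)$ and $(1,1)$.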

Utility Distribution

We define the set of unique path utility values as

\[\mathcal{U}^* = \{\mathcal{U}(\mathbf{s}) \mid \mathbf{s}\in\mathbf{S}^Z\}.\]
The probability mass function of the utility distribution associates each unique path utility with a probability as follows

\[\mathbb{P}(X=u)=\sum_{\mathbf{s}\in\mathbf{S}^Z \mid \mathcal{U}(\mathbf{s})=u} p(\mathbf{s}),\quad \forall u\in\mathcal{U}^*.\]

From the utility distribution, we can calculate the cumulative distribution, statistics, and risk measures. The relevant statistics are the expected value, standard deviation, skewness, and kurtosis. The risk measures focus on the conditional value-at-risk (CVaR), also known as expected shortfall.
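A sketch of building the probability mass function and computing CVaR from it, assuming the active paths are supplied as hypothetical `(probability, utility)` pairs and that `alpha` is the CVaR probability level (both assumptions for illustration, not part of the definitions above):

```python
from collections import defaultdict

def utility_distribution(path_probs_utils):
    """Aggregate path probabilities into a PMF over unique utilities.

    path_probs_utils -- iterable of (p(s), U(s)) pairs for active paths
    """
    pmf = defaultdict(float)
    for p, u in path_probs_utils:
        pmf[u] += p          # sum probabilities of paths sharing a utility
    return dict(pmf)

def cvar(pmf, alpha):
    """CVaR at level alpha: expected utility within the worst alpha-tail."""
    total, acc = 0.0, 0.0
    for u, p in sorted(pmf.items()):        # ascending utilities
        take = min(p, alpha - total)        # probability mass still needed
        if take <= 0:
            break
        acc += take * u
        total += take
    return acc / alpha
```

Note that `cvar(pmf, 1.0)` recovers the expected value, since the "tail" then covers the whole distribution.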

State Probabilities

We denote paths with fixed states using a recursive definition, where $\epsilon$ denotes an empty state.

\[\begin{aligned} \mathbf{S}_{\epsilon} &= \mathbf{S}^Z \\ \mathbf{S}_{\epsilon,s_i} &= \{\mathbf{s}\in\mathbf{S}_{\epsilon} \mid \mathbf{s}_i=s_i\} \\ \mathbf{S}_{\epsilon,s_i,s_j} &= \{\mathbf{s}\in\mathbf{S}_{\epsilon,s_i} \mid \mathbf{s}_j=s_j\},\quad j\ne i \end{aligned}\]

The probabilities of all paths sum to one

\[\mathbb{P}(\epsilon) = \sum_{\mathbf{s}\in\mathbf{S}_\epsilon} p(\mathbf{s}) = 1.\]

The state probability for each node $i\in C\cup D$ and state $s_i\in S_i$ denotes how likely the state is to occur given all path probabilities

\[\mathbb{P}(s_i\mid \epsilon) = \sum_{\mathbf{s}\in\mathbf{S}_{\epsilon,s_i}} \frac{p(\mathbf{s})}{\mathbb{P}(\epsilon)} = \sum_{\mathbf{s}\in\mathbf{S}_{\epsilon,s_i}} p(\mathbf{s}).\]

An active state is a state with a positive state probability $\mathbb{P}(s_i\mid c)>0$ given conditions $c.$

We can generalize the state probabilities to conditional probabilities using a recursive definition. Generalized state probabilities allow us to explore how fixing active states affects the probabilities of other states. First, we choose an active state $s_i$ and fix its value. Fixing an inactive state would make all state probabilities zero. Then, we can compute the conditional state probabilities as follows.

\[\mathbb{P}(s_j\mid \epsilon,s_i) = \sum_{\mathbf{s}\in\mathbf{S}_{\epsilon,s_i,s_j}} \frac{p(\mathbf{s})}{\mathbb{P}(s_i\mid \epsilon)}.\]

We can then repeat this process by choosing, from the new conditional state probabilities, an active state $s_k$ whose node differs from the previously chosen ones, $k\ne j.$
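The state probabilities and their conditional generalization can be sketched as a single function, since conditioning only filters the paths and changes the normalizing constant. The `(path, probability)` input representation is an assumption for illustration:

```python
def state_probabilities(paths, i, conditions=()):
    """P(s_i | conditions): distribution of node i's state over the paths
    matching the fixed (node, state) pairs in `conditions`.

    paths -- iterable of (s, p(s)) pairs, each s a tuple indexed by node
    """
    dist = {}
    norm = 0.0
    for s, p in paths:
        if all(s[j] == s_j for j, s_j in conditions):
            dist[s[i]] = dist.get(s[i], 0.0) + p
            norm += p
    # Dividing by norm implements the P(s_i | eps) denominator; with no
    # conditions, norm equals P(eps) = 1 for a full set of active paths.
    return {s_i: p / norm for s_i, p in dist.items()}
```

For example, with active paths $(0,0)$, $(0,1)$, $(1,1)$ of probabilities $0.4$, $0.1$, $0.5$, node $1$ has unconditional probabilities $\{0: 0.4,\; 1: 0.6\}$, and conditioning on $s_0=0$ gives $\{0: 0.8,\; 1: 0.2\}$.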

A robust recommendation is a decision state $s_i$ with $i\in D$ and a subpath $c$ such that the state probability is one, $\mathbb{P}(s_i\mid c)=1.$
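Checking for a robust recommendation amounts to verifying that only one state of the decision node occurs among the positive-probability paths matching the conditions. A sketch under the same assumed `(path, probability)` representation:

```python
def robust_recommendation(paths, i, conditions=()):
    """Return the state of decision node i if P(s_i | conditions) = 1 for a
    single state (a robust recommendation), otherwise None.

    paths -- iterable of (s, p(s)) pairs, each s a tuple indexed by node
    """
    states = {s[i] for s, p in paths
              if p > 0 and all(s[j] == s_j for j, s_j in conditions)}
    return states.pop() if len(states) == 1 else None
```

For example, if decision node $1$ takes state $0$ on path $(0,0)$ and state $1$ on path $(1,1)$, no unconditional robust recommendation exists, but conditioning on $s_0=1$ yields the robust recommendation $s_1=1.$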