Steven R. Dunbar
Department of Mathematics
203 Avery Hall
Lincoln, NE 68588-0130
http://www.math.unl.edu
Voice: 402-472-3731
Fax: 402-472-8466

Topics in
Probability Theory and Stochastic Processes
Steven R. Dunbar

__________________________________________________________________________

Classes of States and Stationary Distributions

_______________________________________________________________________

Note: These pages are prepared with MathJax. MathJax is an open source JavaScript display engine for mathematics that works in all browsers. See http://mathjax.org for details on supported browsers, accessibility, copy-and-paste, and other features.

_______________________________________________________________________________________________

### Rating

Mathematically Mature: may contain mathematics beyond calculus with proofs.

_______________________________________________________________________________________________

### Section Starter Question

Randomly distribute three balls between two urns, labeled $A$ and $B$. Each period, select an urn at random, and if it is not empty, remove a ball from that urn and put it in the other urn. How could you determine what fraction of time is urn $A$ empty? What is that fraction of time? Does this depend on how the balls are initially distributed between the two urns?

_______________________________________________________________________________________________

### Key Concepts

1. The states of a Markov chain partition into disjoint communicating classes; each class is further classified as periodic or aperiodic, recurrent or transient, and possibly ergodic.
2. Every irreducible ergodic Markov chain has a unique non-negative solution of $\sum_{i} \pi_i P_{ij} = \pi_j, \qquad \sum_{i} \pi_i = 1,$

called a stationary distribution.

__________________________________________________________________________

### Vocabulary

1. The states of a Markov chain partition into disjoint communicating classes; each class is further classified as periodic or aperiodic, recurrent or transient, and possibly ergodic.
2. Any vector $\pi = (\pi_j)$ satisfying the system of equations $\sum_{i} \pi_i P_{ij} = \pi_j, \qquad \sum_{i} \pi_i = 1$

is a stationary probability distribution of the Markov chain.

__________________________________________________________________________

### Mathematical Ideas

#### Classiﬁcation of States and Stationary Distributions

Definition. State $j$ is accessible from state $i$ if ${P}_{ij}^{n}>0$ for some $n\ge 1$. That is, state $j$ is accessible from state $i$ if and only if, starting in state $i$, it is possible that the process will ever enter state $j$.

Deﬁnition. Two states communicate if state $j$ is accessible from state $i$ and state $i$ is accessible from $j$.

Remark. Note that it is possible that $j$ is accessible from $i$ but that $i$ and $j$ do not communicate. This would happen if $j$ is an absorbing state accessible from $i$.

Proposition 1. Being able to communicate is an equivalence relation, so that the states of a Markov chain partition into disjoint classes.

Proof. Left as an exercise. □

Deﬁnition. The period of a state $i$, written as $d\left(i\right)$, is the greatest common divisor of all integers $n\ge 1$ for which ${P}_{ii}^{n}>0$. If ${P}_{ii}^{n}=0$ for all $n$, deﬁne $d\left(i\right)=0$.
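For a small finite chain, the period can be computed directly from this definition by scanning powers of the transition matrix. The following sketch (plain Python, not part of the original notes) uses a cutoff `n_max` standing in for "all integers $n \ge 1$", which suffices for small examples:

```python
from math import gcd

def period(P, i, n_max=50):
    """Sketch: gcd of all return times n <= n_max with P^n[i][i] > 0.

    Returns 0 if no return to state i occurs by n_max, mirroring the
    convention d(i) = 0 when P^n_ii = 0 for all n.
    """
    size = len(P)
    power = [row[:] for row in P]   # power holds P^n, starting at n = 1
    returns = []
    for n in range(1, n_max + 1):
        if power[i][i] > 0:
            returns.append(n)
        # power <- power * P (plain-Python matrix multiplication)
        power = [[sum(power[r][k] * P[k][c] for k in range(size))
                  for c in range(size)] for r in range(size)]
    d = 0
    for n in returns:
        d = gcd(d, n)
    return d

# The two-state chain that swaps states each step returns to a state
# only at even times, so both states have period 2.
print(period([[0.0, 1.0], [1.0, 0.0]], 0))  # 2
```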

Proposition 2.

1. If $i$ communicates with $j$, then $d\left(i\right)=d\left(j\right)$.
2. If state $i$ has period $d\left(i\right)$, then there exists an integer $N$ depending on $i$ such that for all integers $n\ge N$, ${P}_{ii}^{nd\left(i\right)}>0$. That is, return to state $i$ can occur at all suﬃciently large multiples of the period $d\left(i\right)$.
3. If ${P}_{ji}^{m}>0$, then ${P}_{ji}^{m+nd\left(i\right)}>0$ for all suﬃciently large $n$.

Proof. Left as an exercise. □

Deﬁnition. A Markov chain in which each state has period $1$ is aperiodic.

Deﬁnition. The Markov chain is irreducible if it has only one class.

Deﬁnition. For any state $i$, let ${f}_{i}$ denote the probability that starting in $i$ the process will ever reenter state $i$. State $i$ is recurrent if ${f}_{i}=1$ and transient if ${f}_{i}<1$.

Proposition 3. If state $i$ is recurrent, then the process will reenter $i$ inﬁnitely often.

Proof. Left as an exercise. □

Proposition 4.

1. State $i$ is recurrent if
$\sum _{n=1}^{\infty }{P}_{ii}^{n}=\infty .$

2. State $i$ is transient if
$\sum _{n=1}^{\infty }{P}_{ii}^{n}<\infty .$

Proof. Left as an exercise. □
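As a classical illustration of Proposition 4 (on an infinite state space), consider the simple symmetric random walk on the integers. Returns to the origin are possible only at even times, and by Stirling's formula

```latex
P_{00}^{2n} \;=\; \binom{2n}{n}\, 2^{-2n} \;\sim\; \frac{1}{\sqrt{\pi n}},
\qquad\text{so}\qquad
\sum_{n=1}^{\infty} P_{00}^{n} \;=\; \sum_{n=1}^{\infty} P_{00}^{2n} \;=\; \infty,
```

and state $0$ is recurrent.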

Deﬁnition. If a state $i$ is recurrent, then it is positive recurrent if, starting in $i$, the expected time until the process returns to state $i$ is ﬁnite.

Proposition 5. In a ﬁnite-state Markov chain all recurrent states are positive recurrent.

Proof. Left as an exercise. □

Deﬁnition. Positive recurrent and aperiodic states are ergodic.

Theorem 6. For an irreducible ergodic Markov chain, $\lim_{n\to\infty} {P}_{ij}^{n}$ exists and is independent of $i$. Furthermore, letting

$\pi_j = \lim_{n\to\infty} {P}_{ij}^{n},$

then $\pi_j$ is the unique non-negative solution of

$\sum_{i} \pi_i P_{ij} = \pi_j, \qquad \sum_{i} \pi_i = 1.$

Definition. Any vector $\pi = (\pi_j)$ satisfying this system of equations is a stationary probability distribution of the Markov chain. A Markov chain started according to a stationary distribution will have this distribution for all future times.
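The defining property can be checked numerically. The sketch below uses a hypothetical two-state transition matrix (not one from these notes), solves the system for $\pi$, and verifies that a chain started in $\pi$ keeps that distribution at every later step:

```python
import numpy as np

# A hypothetical two-state transition matrix (rows sum to 1).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Solve pi P = pi together with sum(pi) = 1 as one linear system:
# stack (P^T - I) with a row of ones and solve in the least-squares sense.
n = P.shape[0]
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.concatenate([np.zeros(n), [1.0]])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(pi)  # approximately [2/3, 1/3]

# Started in the stationary distribution, the chain keeps it forever:
# pi P^k = pi for every k.
assert np.allclose(pi @ np.linalg.matrix_power(P, 5), pi)
```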

Remark. A limiting distribution is always a stationary distribution, but the converse is not true. A Markov chain may have a stationary distribution but no limiting distribution. For example, the periodic Markov chain whose transition probability matrix is

$P = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$

has no limiting distribution, since $P^{2n} = I$ and $P^{2n+1} = P$ for all $n$, so ${P}_{ij}^{n}$ oscillates and never converges; but $\pi = (1/2, 1/2)$ is a stationary distribution. Notice that this Markov chain is not ergodic.

Remark. Note that this theorem says that a probability transition matrix for an irreducible ergodic Markov chain has a left eigenvector with corresponding eigenvalue $1$. This is a special case of the more general Perron-Frobenius Theorem.
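This eigenvector reading suggests computing $\pi$ with a general-purpose eigenvalue routine. A sketch, again on a small hypothetical matrix: the left eigenvectors of $P$ are the right eigenvectors of $P^{T}$, so we locate the eigenvalue $1$ of $P^{T}$ and normalize its eigenvector into a probability vector.

```python
import numpy as np

# A hypothetical irreducible, aperiodic three-state chain.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4]])

# Left eigenvectors of P are right eigenvectors of P^T.
eigvals, eigvecs = np.linalg.eig(P.T)
k = np.argmin(np.abs(eigvals - 1.0))    # locate the eigenvalue 1
pi = np.real(eigvecs[:, k])
pi = pi / pi.sum()                      # normalize to a probability vector

assert np.allclose(pi @ P, pi)          # pi is stationary
print(pi)
```

By the Perron-Frobenius Theorem the eigenvector for eigenvalue $1$ has entries of one sign, so dividing by the sum always yields a non-negative probability vector.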

#### Amusing Example of a Stationary Distribution

The following example is adapted from the solution by Lessard, based on a problem originally posed in The Riddler. The interest is partly in the humorous context and partly in the fact that the context obscures that the Markov chain has more states than is at first obvious.

There is a bathroom in your oﬃce building that has only one toilet. There is a small sign stuck to the outside of the door that you can slide from “Vacant” to “Occupied” so that no one else will try the door handle (theoretically) when you are inside. Unfortunately, people often forget to slide the sign to “Occupied” when entering, and they often forget to slide it to “Vacant” when exiting.

Assume that $1∕3$ of bathroom users don’t notice the sign upon entering or exiting. Therefore, whatever the sign reads before their visit, it still reads the same thing during and after their visit. Another $1∕3$ of the users notice the sign upon entering and make sure that it says “Occupied” as they enter. However, half the time they forget to slide it to “Vacant” when they exit. The remaining $1∕3$ of the users are very conscientious: They make sure the sign reads “Occupied” when they enter, and then they slide it to “Vacant” when they exit. Finally, assume that the bathroom is occupied exactly half of the time, all day, every day.

1. If you go to the bathroom and see that the sign on the door reads “Occupied,” what is the probability that the bathroom is actually occupied?
2. If the sign reads “Vacant,” what is the probability that the bathroom actually is vacant?
3. Extra credit: What happens as the percentage of conscientious bathroom users changes?

The first step is to define the states and transition probabilities. We might think that because the bathroom can be either occupied or vacant, and the sign in front can read either “Vacant” or “Occupied”, there should be four states, one for each possible pair of occupation state and sign. However, this is not the case. Consider the state “bathroom is occupied and the sign says it is occupied”. We must distinguish between the cases where the person occupying the bathroom is conscientious (they will definitely slide the sign to “Vacant” when they leave) or not (they will leave the sign reading “Occupied” after they leave).

We must therefore add more states to our Markov chain that correspond to the diﬀerent ways in which the bathroom can be occupied. Consider that there are three types of users: oblivious, forgetful, and conscientious, each of proportion $1∕3$.

Imagine a sequence of short time periods, say every minute. Using the assumption that the bathroom is occupied exactly half of the time, all day, every day, we can generalize slightly to assume that the occupied-versus-vacant transition each minute is itself a Markov chain with transition probability matrix

$P = \begin{pmatrix} 1/2 & 1/2 \\ 1/2 & 1/2 \end{pmatrix}$

over the states (Occupied, Vacant). Note that the stationary distribution is $\left(1/2, 1/2\right)$, which is consistent with the assumption that the bathroom is occupied exactly half of the time, all day, every day.

Now the possible transitions at each minute are:

1. A Conscientious person is in the bathroom, and it is Occupied and the sign says Occupied. We name this state $OCO$. Either the person stays, and the state remains $OCO$ with probability $1/2$; or the conscientious person leaves, sliding the sign to Vacant, and the state is $VV$ with probability $1/2$.
2. A Forgetful person is in the bathroom, and it is Occupied and the sign says Occupied. We name this state $OFO$. Either the person stays, and the state remains $OFO$ with probability $1/2$; or the forgetful person leaves without changing the sign, and the state is $VO$ with probability $1/2\cdot 1/2=1/4$; or the forgetful person leaves, sliding the sign to Vacant, and the state is $VV$ with probability $1/2\cdot 1/2=1/4$.
3. An Oblivious person is in the bathroom, and it is Occupied and the sign says Occupied. We name this state $OOO$. Either the person stays, and the state remains $OOO$ with probability $1/2$; or the oblivious person leaves, ignoring the sign, and the state is $VO$ with probability $1/2$.
4. An Oblivious person is in the bathroom, and it is Occupied but the sign says Vacant. We name this state $OOV$. Either the person stays, and the state remains $OOV$ with probability $1/2$; or the oblivious person leaves, again ignoring the sign so that it still reads Vacant, and the state is $VV$ with probability $1/2$.
5. The bathroom is Vacant, but the sign says Occupied. With probability $1/2$ some person approaches, tries the door in spite of the sign, discovers the bathroom is vacant, and enters, dealing with the sign according to their type. Then the transition is to $OCO$ with probability $1/6$, to $OFO$ with probability $1/6$, and to $OOO$ with probability $1/6$. With probability $1/2$ the state remains $VO$.
6. The bathroom is Vacant, and the sign says Vacant. With probability $1/2$ some person approaches, tries the door, and enters, dealing with the sign according to their type. Then the transition is to $OCO$ with probability $1/6$, to $OFO$ with probability $1/6$, and to $OOV$ with probability $1/6$. With probability $1/2$ the state remains $VV$.

A state transition diagram is in Figure 1.

Then, ordering the states $OCO$, $OFO$, $OOO$, $OOV$, $VO$, $VV$, the transition probability matrix is

$P = \begin{pmatrix} 1/2 & 0 & 0 & 0 & 0 & 1/2 \\ 0 & 1/2 & 0 & 0 & 1/4 & 1/4 \\ 0 & 0 & 1/2 & 0 & 1/2 & 0 \\ 0 & 0 & 0 & 1/2 & 0 & 1/2 \\ 1/6 & 1/6 & 1/6 & 0 & 1/2 & 0 \\ 1/6 & 1/6 & 0 & 1/6 & 0 & 1/2 \end{pmatrix}$

The stationary distribution is then the solution of

$\pi P=\pi ,\phantom{\rule{2em}{0ex}}\sum _{i}{\pi }_{i}=1$

with solution

$\pi_{OCO} = \frac{1}{6}, \quad \pi_{OFO} = \frac{1}{6}, \quad \pi_{OOO} = \frac{1}{24}, \quad \pi_{OOV} = \frac{1}{8}, \quad \pi_{VO} = \frac{1}{8}, \quad \pi_{VV} = \frac{3}{8}.$

Note that the bathroom is still Vacant half the time, since $\pi_{VO} + \pi_{VV} = 1/2$, although that time is not split equally between the two sign states. The answers to the questions now follow by conditioning on the sign. The sign reads “Occupied” in states $OCO$, $OFO$, $OOO$, and $VO$, so the probability that the bathroom is actually occupied given that the sign reads “Occupied” is $(\pi_{OCO} + \pi_{OFO} + \pi_{OOO})/(\pi_{OCO} + \pi_{OFO} + \pi_{OOO} + \pi_{VO}) = (9/24)/(12/24) = 3/4$. Similarly, the probability that the bathroom is actually vacant given that the sign reads “Vacant” is $\pi_{VV}/(\pi_{OOV} + \pi_{VV}) = (3/8)/(1/2) = 3/4$.
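As a numerical check, the stationary distribution and the conditional probabilities can be computed by power iteration. The script below is a sketch, not part of the original notes: the state names and the `P` dictionary are transcriptions of the rules above (each user type with proportion $1/3$, probability $1/2$ per minute of an entry or exit, and an oblivious occupant in state $OOV$ leaving the sign reading Vacant on exit).

```python
states = ["OCO", "OFO", "OOO", "OOV", "VO", "VV"]

# Transition rules transcribed from the discussion above; an oblivious
# occupant never touches the sign, so OOV -> VV on exit.
P = {
    "OCO": {"OCO": 1/2, "VV": 1/2},
    "OFO": {"OFO": 1/2, "VO": 1/4, "VV": 1/4},
    "OOO": {"OOO": 1/2, "VO": 1/2},
    "OOV": {"OOV": 1/2, "VV": 1/2},
    "VO":  {"VO": 1/2, "OCO": 1/6, "OFO": 1/6, "OOO": 1/6},
    "VV":  {"VV": 1/2, "OCO": 1/6, "OFO": 1/6, "OOV": 1/6},
}

# Power iteration: repeatedly apply pi <- pi P.  For this irreducible,
# aperiodic chain the iterates converge to the stationary distribution
# from any starting distribution.
pi = {s: 1/6 for s in states}
for _ in range(1000):
    pi = {j: sum(pi[i] * P[i].get(j, 0.0) for i in states) for j in states}

for s in states:
    print(s, round(pi[s], 6))

# Answers to the two questions, by conditioning on what the sign reads.
sign_occ = pi["OCO"] + pi["OFO"] + pi["OOO"] + pi["VO"]
p_occupied_given_occ = (pi["OCO"] + pi["OFO"] + pi["OOO"]) / sign_occ
p_vacant_given_vac = pi["VV"] / (pi["OOV"] + pi["VV"])
print(p_occupied_given_occ, p_vacant_given_vac)  # both approximately 3/4
```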

#### Sources

This section is adapted from: Ross, Introduction to Probability Models; Taylor and Karlin, An Introduction to Stochastic Modeling; Karlin and Taylor, A Second Course in Stochastic Processes; and the solution by Lessard.

_______________________________________________________________________________________________

### Algorithms, Scripts, Simulations

#### Scripts

__________________________________________________________________________

### Problems to Work for Understanding

1. Provide examples of the classiﬁcations of states:
1. A trivial two-state Markov chain in which neither state is accessible from the other.
2. A trivial two-state Markov chain in which both states communicate.
3. A Markov chain in which all states have period $2$.
4. A three-state Markov chain with two states absorbing and one transient state. What states are communicating in this example?
5. An ergodic Markov chain.
2. Randomly distribute three balls between two urns, labeled $A$ and $B$. Each period, select an urn at random, and if it is not empty, remove a ball from that urn and put it in the other urn. Make a Markov chain model of this situation and classify all states. What fraction of time is urn $A$ empty? Does this depend on how the balls are initially distributed between the two urns?
3. Assume the fraction of oblivious, forgetful, and conscientious users are $p$, $q$, and $r$ respectively, with $0\le p,q,r\le 1$ and $p+q+r=1$. Solve the bathroom problem under this general assumption.
4. Assume the fraction of oblivious, forgetful, and conscientious users are $p$, $q$, and $r$ respectively, with $0\le p,q,r\le 1$ and $p+q+r=1$. Further assume the bathroom is occupied $2∕3$ of the time, and vacant $1∕3$ of the time. Solve the bathroom problem under this general assumption.
5. Assume the fraction of oblivious, forgetful, and conscientious users are $p$, $q$, and $r$ respectively, with $0\le p,q,r\le 1$ and $p+q+r=1$. Further assume the bathroom is occupied $1-v$ of the time, and vacant $v$ of the time, where $0 < v < 1$. Solve the bathroom problem under this general assumption. What happens as $v\to 0$ or $v\to 1$?
6. Assume the fraction of oblivious, forgetful, and conscientious users are $p$, $q$, and $r$ respectively, with $0\le p,q,r\le 1$ and $p+q+r=1$. Also assume that the oblivious people spend twice as long in the bathroom as the conscientious or forgetful people. Assume the bathroom is vacant $v$ of the time, where $0 < v < 1$. Are all these assumptions consistent? Solve the bathroom problem under this general assumption.

__________________________________________________________________________

### References

[1]   S. Karlin and H. Taylor. A Second Course in Stochastic Processes. Academic Press, 1981.

[2]   Sheldon M. Ross. Introduction to Probability Models. Elsevier, 6th edition, 1997.

[3]   H. M. Taylor and Samuel Karlin. An Introduction to Stochastic Modeling. Academic Press, third edition, 1998.

__________________________________________________________________________
