


Question of the Day



Key Concepts

  1. A stochastic process is, in simplest terms, a random function. That is to say, a specific outcome of the process can only be described in terms of the probability of its occurring.
  2. A Markov Process is one in which the only input that affects the process at time n is the immediately preceding state. If the state transition probabilities of our Markov Process do not depend on the time, then we have a Markov Chain.
  3. We can find the transition probabilities over n time steps from the matrix power P^n, and this is known as (the finite Markov Chain version of) the Chapman-Kolmogorov Theorem.
  4. Absorbing states are states which once entered are never left.
  5. The limiting matrix can be found by putting the matrix in canonical form, then finding the fundamental matrix, and then multiplying blocks from the canonical matrix.



Vocabulary

  1. Stochastic Process - A stochastic process is, in simplest terms, a random function. That is to say, a specific outcome of the process can only be described in terms of the probability of its occurring.
  2. Finite Stochastic Process - The outcome of a finite stochastic process is one of a predetermined finite number of states.
  3. Finite Markov Process - A Markov Process is one in which the only input that affects the process at time n is the immediately preceding state, On-1.
  4. Markov Chain - A Markov Process whose transition probabilities from Si to Sj are time invariant.
  5. Ergodic Chain - We say a chain is ergodic if it is possible to move from any one state to any other, not necessarily in one move.
  6. Regular Chain - We say a chain is regular if some power of the transition matrix has all positive entries.
  7. Absorbing State - We call states which once entered are never left absorbing states.



Mathematical Ideas

These notes were prepared by LT Grant and are used with permission. Pieces of this section are adapted from: Finite Markov Chains, Kemeny and Snell, 1960, and from Random Walks and Electric Networks, Doyle and Snell, 1984.

Introduction

Before we delve too deeply into Markov Chains and their properties, it might be beneficial to understand where they fit in the larger landscape of mathematics.

Further Intro to Markov Chains

Before we proceed further, we give an example which should help put some of these concepts in perspective, as well as introduce some other ideas.

We consider the magical Land of Oz, where the weather follows a very predictable pattern. If it is raining today, then there is a 50% chance of rain again tomorrow, and a 25% chance each of a nice day or a snowy day tomorrow. Similarly, if it is snowing, there is a 50% chance of snow again, and a 25% chance each of a nice day or a rainy day. Also, in the Land of Oz they never have two nice days in a row, and rain and snow are equally likely the day after a nice day.

With this information, one can ask many questions. The first thing we do is label our states R, N, S, although one could just as easily label them S1, S2, S3. With the information above, we can define pRR = 1/2, pRN = 1/4, and so on. We note that writing out every possible transition would give us 3^2 = 9 distinct probabilities. While this isn't difficult for our case with three states, we note that if we had n states, then we would have n^2 probabilities.

Thus we use matrices to represent the transition probabilities for our Markov Chain. In our example, our transition matrix is (the row and column labels of R, N and S are for convenience only and are not usually included in the display of the matrix):

\[ P = \bordermatrix{ & R & N & S \cr R & 1/2 & 1/4 & 1/4 \cr N & 1/2 & 0 & 1/2 \cr S & 1/4 & 1/4 & 1/2 } \]

We find the probability of going from Si to Sj by looking at the entry Pi,j. Suppose it is raining today in Oz and we want to know the probability that tomorrow will be nice. We look at the R-to-N entry, or equivalently the (1, 2) entry, and find that there is a 1/4 chance of a nice day. Also note that all the rows sum to 1; thus P is a stochastic matrix.
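
As a quick check of the arithmetic, here is a minimal sketch (assuming NumPy is available) that enters this transition matrix and verifies that it is stochastic:

```python
import numpy as np

# Transition matrix for the Land of Oz, rows and columns ordered R, N, S.
P = np.array([[1/2, 1/4, 1/4],   # from Rain
              [1/2, 0.0, 1/2],   # from Nice
              [1/4, 1/4, 1/2]])  # from Snow

# A stochastic matrix has nonnegative entries and rows that sum to 1.
assert np.all(P >= 0) and np.allclose(P.sum(axis=1), 1.0)

# Probability of a nice day tomorrow given rain today: the (R, N) entry.
print(P[0, 1])   # 0.25
```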

Now how do we calculate the chance it is nice in two days, given it is rainy today? We could use the Law of Total Probability to calculate this probability. This requires us to consider every possible weather state for tomorrow and its respective probability. Mixing the R = S1, N = S2, S = S3 notation, we find that for this example

\[ \Pr[O_{n+2} = N \mid O_n = R] = \sum_{j=1}^{3} p_{1,j}\, p_{j,2} \]
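
Written out for our matrix, this sum is just three products:

\[ \Pr[O_{n+2} = N \mid O_n = R] = p_{1,1}p_{1,2} + p_{1,2}p_{2,2} + p_{1,3}p_{3,2} = \tfrac{1}{2}\cdot\tfrac{1}{4} + \tfrac{1}{4}\cdot 0 + \tfrac{1}{4}\cdot\tfrac{1}{4} = \tfrac{3}{16}. \]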

For two days, this is not too tedious, but what about the probability it will be nice a week from today given it is rainy today?

This is where the use of matrix notation also becomes very convenient. Notice that the probability above of going from R to N two days hence is the (1, 2) entry of the matrix product of P with itself. More generally, we can find the transition probabilities through n steps from the matrix P^n, and this is known as (the finite Markov Chain version of) the Chapman-Kolmogorov Theorem. Thus consider the following matrices from our example:

\[ P^2 = \bordermatrix{ & R & N & S \cr R & 7/16 & 3/16 & 6/16 \cr N & 6/16 & 4/16 & 6/16 \cr S & 6/16 & 3/16 & 7/16 } \]

and

\[ P^7 = \bordermatrix{ & R & N & S \cr R & 0.400024 & 0.200012 & 0.399963 \cr N & 0.400024 & 0.199951 & 0.400024 \cr S & 0.399963 & 0.200012 & 0.400024 } \]

Hence if it is raining today, then the chance it is nice in 2 days is 3/16, and the chance it is nice in 7 days is about 0.200012.
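
These powers are easy to reproduce numerically; a minimal sketch (assuming NumPy, and reusing the matrix P entered above):

```python
import numpy as np
from numpy.linalg import matrix_power

P = np.array([[1/2, 1/4, 1/4],
              [1/2, 0.0, 1/2],
              [1/4, 1/4, 1/2]])

P2 = matrix_power(P, 2)
P7 = matrix_power(P, 7)

# The (1, 2) entry in math notation is index [0, 1] here.
print(P2[0, 1])   # 0.1875, i.e. 3/16
print(P7[0, 1])   # approximately 0.2
```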

Limiting Matrices

We now look a bit further at P^7:

\[ P^7 = \bordermatrix{ & R & N & S \cr R & 0.400024 & 0.200012 & 0.399963 \cr N & 0.400024 & 0.199951 & 0.400024 \cr S & 0.399963 & 0.200012 & 0.400024 } \]

Note that the chance of being in any of the three states after seven days is almost independent of our initial state. This means the weather in seven days is almost completely independent of what it is today. If we look at P^n for larger values of n, we see the rows approach the vector w = (2/5, 1/5, 2/5).

We see that this works for our example, but we note that it will not work for every matrix. Consider:

\[ Q = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \]

This matrix describes a two-state system in which we switch states at each time step. Thus Q^2 = I, Q^3 = Q, Q^4 = I, and so on.

We say a chain is ergodic if it is possible to move from any one state to any other, not necessarily in one move. Thus both P and Q represent ergodic chains. We say a chain is regular if some power of the transition matrix has all positive entries. While P has p22 = 0, we see that P^2 has all positive entries, thus P is a regular chain. We also see that Q^n will always have zeros, so Q is not a regular chain. In general, every regular chain is ergodic, but not every ergodic chain is regular.
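
One can test regularity numerically by raising the transition matrix to successive powers and looking for one with all positive entries. A sketch (assuming NumPy; it only examines the first few powers, which is enough for these two examples):

```python
import numpy as np

P = np.array([[1/2, 1/4, 1/4],
              [1/2, 0.0, 1/2],
              [1/4, 1/4, 1/2]])
Q = np.array([[0.0, 1.0],
              [1.0, 0.0]])

def first_positive_power(M, max_power=10):
    """Return the first power (up to max_power) with all positive entries, else None."""
    A = M.copy()
    for k in range(1, max_power + 1):
        if np.all(A > 0):
            return k
        A = A @ M
    return None

print(first_positive_power(P))   # 2: P^2 already has all positive entries
print(first_positive_power(Q))   # None: the powers of Q alternate between Q and I
```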

If our transition matrix A is regular, then we have the following property: as n → ∞, the matrix A^n approaches a limiting matrix A^∞, all of whose rows equal the same row vector w. This matrix has different names in different texts; two common names are the “limit matrix” and the matrix of “limiting probabilities.”
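
For a regular chain, the common row w can be found either by raising P to a high power or as the left eigenvector of P for eigenvalue 1, rescaled to sum to 1. A sketch, assuming NumPy:

```python
import numpy as np

P = np.array([[1/2, 1/4, 1/4],
              [1/2, 0.0, 1/2],
              [1/4, 1/4, 1/2]])

# Brute force: any row of a high power of P is close to w.
print(np.linalg.matrix_power(P, 50)[0])          # [0.4 0.2 0.4]

# Equivalently, w is the left eigenvector of P for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
w = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
print(w / w.sum())                               # [0.4 0.2 0.4]
```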

Classification of States

Up to this point our only examples have been ergodic chains. Now we consider the following transition matrix:

\[ P = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 1/2 & 0 & 1/2 & 0 & 0 \\ 0 & 1/2 & 0 & 1/2 & 0 \\ 0 & 0 & 1/2 & 0 & 1/2 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix} \]

Note that if we start in state S1 or S5, we can’t leave these states. Similarly, if we start in any other state and we “arrive” in states S1 or S5 at time N, then we remain in those states for all steps n > N. We call these absorbing states, that is, states which once entered are never left.

The transition matrix P represents the random walk on a 5-block street. We assume our random walker is out late at night and has lost all sense of direction on a straight street: he walks one block, stops, looks around, and can't remember whether he arrived from the left or the right. Thus he chooses a direction at random, walks one more block, and repeats the process. He stops when he reaches S0, his house, or S4, a friend's house. Here we number the states S0, S1, ..., S4 in order to make some things a little nicer later, but in general, how we choose to label the states has no effect on the result. In this sense, we see that if the walker starts at any non-absorbing state, he will eventually arrive at either his house or his friend's, and will thus end his wandering.
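
Before doing any linear algebra, one can get a feel for this walk by simulating it; a minimal sketch using only Python's random module (the estimates should roughly match the exact absorption probabilities computed below):

```python
import random

def prob_end_at_friends(start, trials=100_000):
    """Estimate the chance of being absorbed at S4 (the friend's house), starting from `start`."""
    hits = 0
    for _ in range(trials):
        pos = start
        while pos not in (0, 4):            # S0 and S4 are the absorbing states
            pos += random.choice((-1, 1))   # step one block left or right at random
        hits += (pos == 4)
    return hits / trials

print(prob_end_at_friends(1))   # roughly 0.25
print(prob_end_at_friends(2))   # roughly 0.50
```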

We note that a matrix with absorbing states is not ergodic, thus this P is not ergodic. Therefore we cannot use the result above to get a “limit” matrix, so we must introduce the canonical form. Given a transition matrix P with u absorbing states and v non-absorbing states, the matrix can be written in the canonical form:

\[ P = \begin{pmatrix} I & 0 \\ R & Q \end{pmatrix} \]

where I is the u × u identity matrix, 0 is the u × v matrix of zeros, R is the v × u matrix of transition probabilities from non-absorbing states to absorbing states, and Q is the v × v matrix of transition probabilities among the non-absorbing states. Our random walk matrix from above has the canonical form (with the states reordered S0, S4, S1, S2, S3):

\[ P = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 1/2 & 0 & 0 & 1/2 & 0 \\ 0 & 0 & 1/2 & 0 & 1/2 \\ 0 & 1/2 & 0 & 1/2 & 0 \end{pmatrix} \]
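
The reordering itself is mechanical: list the absorbing states first and permute rows and columns accordingly. A sketch with NumPy:

```python
import numpy as np

# Random walk matrix in the original ordering S0, S1, S2, S3, S4.
P = np.array([[1.0, 0.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0, 0.0],
              [0.0, 0.5, 0.0, 0.5, 0.0],
              [0.0, 0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 0.0, 0.0, 1.0]])

# Absorbing states first (S0, S4), then the non-absorbing states (S1, S2, S3).
order = [0, 4, 1, 2, 3]
P_canonical = P[np.ix_(order, order)]
print(P_canonical)
```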

The matrix N = (I - Q)^{-1} is called the fundamental matrix. Its entries have the following interpretation: Nij is the expected number of times the walker visits the j-th non-absorbing state before absorption, given that he started in the i-th non-absorbing state. So in our random walk example:

\[ N = (I - Q)^{-1} = \begin{pmatrix} 3/2 & 1 & 1/2 \\ 1 & 2 & 1 \\ 1/2 & 1 & 3/2 \end{pmatrix} \]

So letting 1 be the column vector of all ones, t = N1 is the vector whose i-th entry is the expected number of steps before absorption, starting from the i-th non-absorbing state. We also define B = NR; this gives us the absorption probabilities. That is, Bij is the probability that, starting in the i-th non-absorbing state, we end up in the j-th absorbing state. In our example:

\[ B = (I - Q)^{-1} R = \begin{pmatrix} 3/4 & 1/4 \\ 1/2 & 1/2 \\ 1/4 & 3/4 \end{pmatrix} \]
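
All of these quantities are a few lines of linear algebra; a sketch with NumPy, using the blocks Q and R read off from the canonical form above:

```python
import numpy as np

# Blocks of the canonical form: non-absorbing states ordered S1, S2, S3,
# absorbing states ordered S0, S4.
Q = np.array([[0.0, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.0]])
R = np.array([[0.5, 0.0],
              [0.0, 0.0],
              [0.0, 0.5]])

N = np.linalg.inv(np.eye(3) - Q)   # fundamental matrix
t = N @ np.ones(3)                 # expected steps to absorption from S1, S2, S3
B = N @ R                          # absorption probabilities into S0 and S4

print(N)   # [[1.5 1.  0.5] [1.  2.  1. ] [0.5 1.  1.5]]
print(t)   # [3. 4. 3.]
print(B)   # [[0.75 0.25] [0.5  0.5 ] [0.25 0.75]]
```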

This allows us to comment on P^n for large values of n. As n → ∞, we have that P^n approaches the matrix P^∞ defined as:

\[ P^{\infty} = \begin{pmatrix} I & 0 \\ B & 0 \end{pmatrix} \]
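
One can check this limit numerically by raising the canonical-form matrix to a large power; a sketch with NumPy:

```python
import numpy as np

# Canonical-form matrix, states ordered S0, S4, S1, S2, S3.
P = np.array([[1.0, 0.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.0, 0.5, 0.0],
              [0.0, 0.0, 0.5, 0.0, 0.5],
              [0.0, 0.5, 0.0, 0.5, 0.0]])

print(np.round(np.linalg.matrix_power(P, 100), 3))
# The upper-left block stays the identity, the lower-left block is B,
# and the right-hand columns (the Q block) decay to 0.
```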


Problems to Work for Understanding

  1. Students’ progress through a (two-year) Associate Degree program is not always a two-year process. A student taking 1st-year courses has a 15% chance of dropping out, a 25% chance of repeating the first year, and a 60% chance of moving on to the second year. A student taking 2nd-year material has a 10% chance of dropping out, a 20% chance of repeating the second year, and a 70% chance of graduating.
    1. How many states are in this model?
    2. Which states are absorbing? Which are transient?
    3. What is the chance a student entering the program this year will graduate “on time”? What is the chance they will graduate in 3 years?
    4. What is the chance that a student will graduate eventually? What is the chance the student will drop out?
    5. What is the expected time until a student graduates?

    (Problem created by LT Grant, 2006.)

  2. Consider the following grid upon which our random walker is stuck. If she reaches state E or F she will stop; otherwise she will keep walking. At any of the other states, she is equally likely to choose any of the possible neighboring states. (Figure: a symmetric L-shaped graph on 6 points.)
    1. What is the transition matrix?
    2. What are the absorbing states? The transient states?
    3. What is the canonical form of this matrix? The fundamental form?
    4. Starting at state A, on average, how many steps will our walker take before reaching state E or F?
    5. Starting at state A, what is the probability we will end at state F? Starting at D, what is the probability we will end at state E? Discuss these two answers.

    (Problem created by LT Grant, 2006.)

  3. Our walker returns a year later to the same grid, only to find that there now exists a road between B and E.
    1. Without any calculations, how would you expect this to change the absorption probabilities of state B?
    2. Without any calculations, would you expect the same results as you found in part (e) above?
    3. Find the new transition matrix.
    4. Calculate the actual probabilities for part (a) and (b) and comment on your results.

    (Problem created by LT Grant, 2006.)

  4. Determine the transition probability matrix for the following Markov Chain: N black balls and N white balls are placed in two urns so that each contains N balls. At each step one ball is selected at random from each urn and the two balls interchange urns. The state of the system is the number of white balls in the first urn. What are the absorbing states, transient states, recurrent states, if any? Is this Markov chain ergodic? (Adapted from Karlin and Taylor, A First Course in Stochastic Processes, second edition, page 73, problem 1(b).)
  5. Determine the transition probability matrix for the following Markov Chain: Consider two urns A and B with a total of N balls. An experiment is performed in which a ball is selected at random (all selections equally likely) at time n, n = 1, 2, ..., from among the totality of N balls. Then an urn is selected at random, urn A with probability p and urn B with probability q = 1 - p, and the ball previously drawn is placed in this urn. The state of the system is the number of balls in urn A. What are the absorbing states, transient states, recurrent states, if any? Is this Markov chain ergodic? (Adapted from Karlin and Taylor, A First Course in Stochastic Processes, second edition, page 74, problem 2(a).)
  6. Determine the transition probability matrix for the following Markov Chain: Consider two urns A and B with a total of N balls. An experiment is performed in which an urn is selected at random in proportion to its contents (i.e., if urn A has exactly k balls, then it is selected with probability k/N). Then a ball is selected from urn A with probability p and from urn B with probability q = 1 - p and placed in the previously chosen urn. The state of the system is the number of balls in urn A. What are the absorbing states, transient states, recurrent states, if any? Is this Markov chain ergodic? (Adapted from Karlin and Taylor, A First Course in Stochastic Processes, second edition, page 74, problem 2(b).)


Reading Suggestion:

  1. A First Course in Stochastic Processes, second edition, S. Karlin and H. Taylor, Academic Press, 1975. Chapter 2.


Outside Readings and Links:

  1. Introduction to Probability, Grinstead and Snell. Chapter 11 - Markov Chains -
    http://tinyurl.com/qw6sa

