Steven R. Dunbar
Department of Mathematics
203 Avery Hall
Lincoln, NE 68588-0130
http://www.math.unl.edu
Voice: 402-472-3731
Fax: 402-472-8466

Topics in
Probability Theory and Stochastic Processes
Steven R. Dunbar

__________________________________________________________________________

Ruin Probabilities

_______________________________________________________________________

Note: These pages are prepared with MathJax. MathJax is an open source JavaScript display engine for mathematics that works in all browsers. See http://mathjax.org for details on supported browsers, accessibility, copy-and-paste, and other features.

_______________________________________________________________________________________________

### Rating

Mathematically Mature: may contain mathematics beyond calculus with proofs.

_______________________________________________________________________________________________

### Section Starter Question

What is the solution of the recurrence equation ${x}_{n}=a{x}_{n-1}$ where $a$ is a constant? What kind of a function is the solution? What more, if anything, is necessary to obtain a complete solution?

_______________________________________________________________________________________________

### Key Concepts

1. The probabilities, interpretation, and consequences of the “gambler’s ruin”.
2. The gambler’s ruin is a ﬁrst introduction to further important ideas in probability such as ﬁrst passage times, Markov processes, and martingales.
3. The gambler’s ruin probabilities are useful for analyzing random walk on a circle.

__________________________________________________________________________

### Vocabulary

1. Classical Ruin Problem “Consider the familiar gambler who wins or loses a dollar with probabilities $p$ and $q=1-p$, respectively, playing against an infinitely rich adversary who is always willing to play, although the gambler has the privilege of stopping at his pleasure. The gambler adopts the strategy of playing until he either loses his capital (“is ruined”) or increases it to $a$ (with a net gain of $a-{T}_{0}$). We are interested in the probability of the gambler’s ruin and the probability distribution of the duration of the game. This is the classical ruin problem.” (From W. Feller, An Introduction to Probability Theory and Its Applications, Volume I, Chapter XIV, page 342.)
2. Another common interpretation of this probability game is to imagine it as a random walk.
3. We can interpret the fortune in the gambler’s coin-tossing game as a Markov process. That is, at successive times the process is in various states. The probability of passing from one state at the current time $t$ to another state at time $t+1$ is completely determined by the present state.
4. We can also note that the fair coin-tossing game with $p=1∕2=q$ is a martingale: the expected value of the process at the next step is the current value.

__________________________________________________________________________

### Mathematical Ideas

#### Understanding a Stochastic Process

Consider a sequence of Bernoulli random variables, ${Y}_{1},{Y}_{2},{Y}_{3},\dots$. Start with a given initial value ${Y}_{0}>0$. For $i\ge 1$, deﬁne the random variables ${Y}_{i}=+1$ with probability $p$ and ${Y}_{i}=-1$ with probability $q=1-p$. Deﬁne the sequence of sums ${T}_{n}={\sum }_{i=0}^{n}{Y}_{i}$. Consider the probability that the process ${T}_{1},{T}_{2},{T}_{3},\dots$ will achieve the value $0$ before it achieves the value $a$. This is a special case of a larger class of probability problems called ﬁrst-passage probabilities.
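The process ${T}_{n}$ is easy to simulate. The sketch below is our own illustration (the function name `random_walk` does not come from the text): it generates one sample path of the sums.

```python
import random

def random_walk(T0, p, n, rng=random):
    """Simulate T_0, T_1, ..., T_n: start at T0, step +1 w.p. p, -1 w.p. 1-p."""
    path = [T0]
    for _ in range(n):
        step = 1 if rng.random() < p else -1
        path.append(path[-1] + step)
    return path

# one sample path of 100 steps starting from fortune 5, fair coin
path = random_walk(T0=5, p=0.5, n=100)
```

First-passage questions then amount to asking which of the two values, $0$ or $a$, such a path touches first.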

Consider a gambler who wins or loses a dollar on each turn of a game with probabilities $p$ and $q=1-p$ respectively. Let his initial capital be ${T}_{0}>0$. The game continues until the gambler’s capital either reduces to $0$ or increases to $a$. Let ${q}_{{T}_{0}}$ be the probability of the gambler’s ultimate ruin and ${p}_{{T}_{0}}$ the probability of his winning. The next section shows that

${p}_{{T}_{0}}+{q}_{{T}_{0}}=1$

so that an unending game has probability $0$.

Theorem 1. The probability of the gambler’s ruin is

${q}_{{T}_{0}}=\frac{{\left(q∕p\right)}^{a}-{\left(q∕p\right)}^{{T}_{0}}}{{\left(q∕p\right)}^{a}-1}$

if $p\ne q$ and

${q}_{{T}_{0}}=1-{T}_{0}∕a$

if $p=q=1∕2$.
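Theorem 1 translates directly into code. This sketch is our own (the helper name `ruin_prob` is not from the text); it evaluates both cases and reproduces the values tabulated later in the section.

```python
def ruin_prob(p, T0, a):
    """Probability of ruin for initial fortune T0, goal a, win probability p."""
    q = 1 - p
    if p == q:
        return 1 - T0 / a                  # fair-game case
    r = q / p
    return (r**a - r**T0) / (r**a - 1)     # general case, p != q
```

For example, `ruin_prob(0.45, 9, 10)` is about $0.2101$, matching the illustration table below.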

Proof. The proof uses a ﬁrst step analysis considering how the probabilities change after one step or trial. After the ﬁrst trial the gambler’s fortune is either ${T}_{0}-1$ or ${T}_{0}+1$ and therefore

 ${q}_{{T}_{0}}=p{q}_{{T}_{0}+1}+q{q}_{{T}_{0}-1}$ (1)

provided $1<{T}_{0}<a-1$. For ${T}_{0}=1$, the first trial may lead to ruin, and

${q}_{1}=p{q}_{2}+q$

replaces (1). Similarly, for ${T}_{0}=a-1$ the ﬁrst trial may result in victory, and therefore

${q}_{a-1}=q{q}_{a-2}.$

To unify the equations, adopt the natural conventions ${q}_{0}=1$ and ${q}_{a}=0$. Then the probability of ruin satisfies (1) for ${T}_{0}=1,2,\dots ,a-1$. This gives a system of $a-1$ difference equations with boundary conditions at $0$ and $a$. Solve the system of difference equations to obtain the desired probability ${q}_{{T}_{0}}$ for any value of ${T}_{0}$.
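Before solving the system in closed form, note that it can also be solved numerically as a linear system, which gives a useful check on the formulas derived below. The construction here is our own sketch, not code from the text: it assembles the tridiagonal system for the unknowns ${q}_{1},\dots ,{q}_{a-1}$ with the boundary values moved to the right-hand side.

```python
import numpy as np

def ruin_probs_linear_system(p, a):
    """Solve q_{T0} = p q_{T0+1} + q q_{T0-1} for T0 = 1..a-1,
    with boundary conditions q_0 = 1 and q_a = 0."""
    q = 1 - p
    A = np.zeros((a - 1, a - 1))
    b = np.zeros(a - 1)
    for i, T0 in enumerate(range(1, a)):
        A[i, i] = 1.0                # coefficient of q_{T0}
        if T0 + 1 < a:
            A[i, i + 1] = -p         # -p q_{T0+1}; the term q_a = 0 drops out
        if T0 - 1 > 0:
            A[i, i - 1] = -q         # -q q_{T0-1}
        else:
            b[i] = q                 # q * q_0 = q moves to the right-hand side
    return np.linalg.solve(A, b)     # the vector (q_1, ..., q_{a-1})
```

For $p=1∕2$ and $a=10$ the solution is the linear function $1-{T}_{0}∕10$, in agreement with the theorem.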

Rewrite the diﬀerence equations as

$p{q}_{{T}_{0}}+q{q}_{{T}_{0}}=p{q}_{{T}_{0}+1}+q{q}_{{T}_{0}-1}.$

Rearrange and factor to obtain

$\frac{{q}_{{T}_{0}+1}-{q}_{{T}_{0}}}{{q}_{{T}_{0}}-{q}_{{T}_{0}-1}}=\frac{q}{p}.$

This says the ratio of successive diﬀerences of ${q}_{{T}_{0}}$ is constant. This suggests that ${q}_{{T}_{0}}$ is a power function,

${q}_{{T}_{0}}={\lambda }^{{T}_{0}}$

since power functions have this property.

First take the case when $p\ne q$. Then based on the guess above (or also on standard theory for linear diﬀerence equations), try a solution of the form ${q}_{{T}_{0}}={\lambda }^{{T}_{0}}$. That is

${\lambda }^{{T}_{0}}=p{\lambda }^{{T}_{0}+1}+q{\lambda }^{{T}_{0}-1}.$

This reduces to

$p{\lambda }^{2}-\lambda +q=0.$

Since $p+q=1$, this factors as

$\left(p\lambda -q\right)\left(\lambda -1\right)=0,$

so the solutions are $\lambda =q∕p$, and $\lambda =1$. (One could also use the quadratic formula to obtain the same values.) Again by the standard theory of linear diﬀerence equations, the general solution is

 ${q}_{{T}_{0}}=A\cdot 1+B\cdot {\left(q∕p\right)}^{{T}_{0}}$ (2)

for some constants $A$, and $B$.

Determine the constants by using the boundary conditions:

$\begin{aligned}{q}_{0}&=A+B=1\\ {q}_{a}&=A+B{\left(q∕p\right)}^{a}=0.\end{aligned}$

Solving, substituting, and simplifying:

${q}_{{T}_{0}}=\frac{{\left(q∕p\right)}^{a}-{\left(q∕p\right)}^{{T}_{0}}}{{\left(q∕p\right)}^{a}-1}.$

(Check for yourself that with this expression $0\le {q}_{{T}_{0}}\le 1$, as it should be for a probability.)

To show that the solution is unique, suppose ${r}_{{T}_{0}}$ is another solution of the difference equations. Determine the two constants $A$ and $B$ so that (2) agrees with ${r}_{{T}_{0}}$ at ${T}_{0}=0$ and ${T}_{0}=1$. From these two values, the recurrence (1) successively determines all other values ${T}_{0}=2,3,\dots$ Therefore, two solutions which agree for ${T}_{0}=0$ and ${T}_{0}=1$ are identical, hence every solution is of the form (2).

The solution breaks down if $p=q=1∕2$, since then the characteristic equation has the repeated root $\lambda =1$ and the difference equation does not have two linearly independent power-function solutions. Instead, borrow a result from differential equations (from the variation-of-parameters/reduction-of-order set of ideas used to derive a complete linearly independent set of solutions). Certainly, $1$ is still a solution of the difference equation (1). A second linearly independent solution is ${T}_{0}$, and the general solution is ${q}_{{T}_{0}}=A+B{T}_{0}$. To satisfy the boundary conditions, we must put $A=1$ and $A+Ba=0$, hence ${q}_{{T}_{0}}=1-{T}_{0}∕a$. □

Consider a symmetric interpretation of this gambling game. Instead of a single gambler playing at a casino, trying to make a goal $a$ before losing everything, consider two gamblers Alice and Bill playing against each other. Let Alice’s initial capital be ${T}_{0}$ and let her play against adversary Bill with initial capital $a-{T}_{0}$ so that their combined capital is $a$. The game continues until one gambler’s capital either reduces to zero or increases to $a$, that is, until the ruin of one of the two players.

Corollary 1. ${p}_{{T}_{0}}+{q}_{{T}_{0}}=1$

Proof. The probability ${p}_{{T}_{0}}$ of Alice’s winning the game equals the probability of Bill’s ruin. Bill’s ruin (and Alice’s victory) is therefore obtained from our ruin formulas on replacing $p$, $q$, and ${T}_{0}$ by $q$, $p$, and $a-{T}_{0}$ respectively. That is, from our formula (for $p\ne q$) the probability of Alice’s ruin is

${q}_{{T}_{0}}=\frac{{\left(q∕p\right)}^{a}-{\left(q∕p\right)}^{{T}_{0}}}{{\left(q∕p\right)}^{a}-1}$

and the probability of Bill’s ruin is

${p}_{{T}_{0}}=\frac{{\left(p∕q\right)}^{a}-{\left(p∕q\right)}^{a-{T}_{0}}}{{\left(p∕q\right)}^{a}-1}.$

Then add these together, and after some algebra, the total is $1$.

For $p=1∕2=q$, the proof is simpler, since then ${p}_{{T}_{0}}=1-\left(a-{T}_{0}\right)∕a$, and ${q}_{{T}_{0}}=1-{T}_{0}∕a$, and ${p}_{{T}_{0}}+{q}_{{T}_{0}}=1$ easily. □
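Corollary 1 is easy to check numerically. The helper `ruin_prob` below is our own restatement of the formula in Theorem 1, not code from the text; swapping $p$ with $q$ and ${T}_{0}$ with $a-{T}_{0}$ gives Bill's ruin probability.

```python
def ruin_prob(p, T0, a):
    """Ruin probability from Theorem 1 (the p != q case suffices here)."""
    q = 1 - p
    r = q / p
    return (r**a - r**T0) / (r**a - 1)

# Alice's ruin plus Bill's ruin (p and q swapped, capital a - T0) totals 1.
p, T0, a = 0.45, 9, 10
alice_ruin = ruin_prob(p, T0, a)
bill_ruin = ruin_prob(1 - p, a - T0, a)
```

The sum `alice_ruin + bill_ruin` is $1$ up to floating-point rounding, confirming that an unending game has probability $0$.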

Corollary 2. The expected gain is $𝔼\left[G\right]=\left(1-{q}_{{T}_{0}}\right)a-{T}_{0}$.

Proof. In the game, the gambler’s last gain (or loss) is a Bernoulli (two-valued) random variable, $G$, where $G$ assumes the value $-{T}_{0}$ with probability ${q}_{{T}_{0}}$, and assumes the value $a-{T}_{0}$ with probability ${p}_{{T}_{0}}$. Thus the expected value is

$\begin{aligned}𝔼\left[G\right]&=\left(a-{T}_{0}\right){p}_{{T}_{0}}+\left(-{T}_{0}\right){q}_{{T}_{0}}\\ &={p}_{{T}_{0}}a-{T}_{0}\\ &=\left(1-{q}_{{T}_{0}}\right)a-{T}_{0}.\end{aligned}$

Now notice that if $q=1∕2=p$ in a fair game, then $𝔼\left[G\right]=\left(1-\left(1-{T}_{0}∕a\right)\right)\cdot a-{T}_{0}=0$. That is, a fair game in the short run (one trial) is a fair game in the long run (expected value). However, if $p<1∕2<q$, so the game is not fair, then the expectation formula says

$\begin{aligned}𝔼\left[G\right]&=\left(1-\frac{{\left(q∕p\right)}^{a}-{\left(q∕p\right)}^{{T}_{0}}}{{\left(q∕p\right)}^{a}-1}\right)a-{T}_{0}\\ &=\frac{{\left(q∕p\right)}^{{T}_{0}}-1}{{\left(q∕p\right)}^{a}-1}a-{T}_{0}\\ &=\left(\frac{\left[{\left(q∕p\right)}^{{T}_{0}}-1\right]a}{\left[{\left(q∕p\right)}^{a}-1\right]{T}_{0}}-1\right){T}_{0}.\end{aligned}$

The sequence $\left[{\left(q∕p\right)}^{n}-1\right]∕n$ is an increasing sequence (see the problems), so since ${T}_{0}<a$ the quotient inside the parentheses is less than $1$ and

$\left(\frac{\left[{\left(q∕p\right)}^{{T}_{0}}-1\right]a}{\left[{\left(q∕p\right)}^{a}-1\right]{T}_{0}}-1\right)<0.$

This shows that an unfair game in the short run (one trial) is an unfair game in the long run.
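A quick computation confirms the sign of the expected gain. The helper `expected_gain` is our own, built directly from the ruin probability of Theorem 1.

```python
def expected_gain(p, T0, a):
    """E[G] = (1 - q_{T0}) * a - T0, using the ruin probability of Theorem 1."""
    q = 1 - p
    if p == q:
        ruin = 1 - T0 / a
    else:
        r = q / p
        ruin = (r**a - r**T0) / (r**a - 1)
    return (1 - ruin) * a - T0
```

For a fair game the expected gain is $0$; for $p=0.45$, ${T}_{0}=9$, $a=10$ it is about $-1.10$, which rounds to the $-1$ shown in the illustration table below.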

Corollary 3. The probability of ultimate ruin of a gambler with initial capital ${T}_{0}$ playing against an inﬁnitely rich adversary is

${q}_{{T}_{0}}=1,\phantom{\rule{2em}{0ex}}p\le q$

and

${q}_{{T}_{0}}={\left(q∕p\right)}^{{T}_{0}},\phantom{\rule{2em}{0ex}}p>q.$

Proof. Let $a\to \infty$ in the formulas. (Check it out!) □

Remark. This corollary says that the probability of “breaking the bank at Monte Carlo” as in the movies is zero, at least for simple games.
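A Monte Carlo check of Corollary 3 for $p>q$: simulate many walks against a very large bankroll and compare the ruin frequency with ${\left(q∕p\right)}^{{T}_{0}}$. The cap of $50$ (standing in for the infinitely rich adversary) and the trial count are our own arbitrary choices; the error from the finite cap is on the order of ${\left(q∕p\right)}^{50}$, which is negligible here.

```python
import random

def simulate_ruin(p, T0, cap, trials, rng):
    """Fraction of walks hitting 0 before reaching cap (cap approximates infinity)."""
    ruined = 0
    for _ in range(trials):
        x = T0
        while 0 < x < cap:
            x += 1 if rng.random() < p else -1
        ruined += (x == 0)
    return ruined / trials

rng = random.Random(42)
freq = simulate_ruin(p=0.6, T0=3, cap=50, trials=20000, rng=rng)
# Corollary 3 predicts (q/p)^{T0} = (2/3)^3 = 8/27, about 0.296
```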

#### Some Calculations for Illustration

| $p$ | $q$ | ${T}_{0}$ | $a$ | Probability of Ruin | Probability of Victory | Expected Gain |
|------|------|--------|--------|--------|--------|------|
| 0.5 | 0.5 | 9 | 10 | 0.1000 | 0.9000 | 0 |
| 0.5 | 0.5 | 90 | 100 | 0.1000 | 0.9000 | 0 |
| 0.5 | 0.5 | 900 | 1,000 | 0.1000 | 0.9000 | 0 |
| 0.5 | 0.5 | 950 | 1,000 | 0.0500 | 0.9500 | 0 |
| 0.5 | 0.5 | 8,000 | 10,000 | 0.2000 | 0.8000 | 0 |
| 0.45 | 0.55 | 9 | 10 | 0.2101 | 0.7899 | -1 |
| 0.45 | 0.55 | 90 | 100 | 0.8656 | 0.1344 | -77 |
| 0.45 | 0.55 | 99 | 100 | 0.1818 | 0.8182 | -17 |
| 0.4 | 0.6 | 90 | 100 | 0.9827 | 0.0173 | -88 |
| 0.4 | 0.6 | 99 | 100 | 0.3333 | 0.6667 | -32 |

#### Why do we hear about people who actually win?

We often hear from people who consistently win at the casino. How can this be in the face of the theorems above?

A simple illustration makes clear how this is possible. Assume for convenience a gambler who repeatedly visits the casino, each time with a certain amount of capital. His goal is to win 1/9 of his capital. That is, in units of his initial capital ${T}_{0}=9$, and $a=10$. Assume too that the casino is fair so that $p=1∕2=q$, then the probability of ruin in any one visit is:

${q}_{{T}_{0}}=1-9∕10=1∕10.$

That is, if the working capital is much greater than the amount required for victory, then the probability of ruin is reasonably small.

Then the probability of an unbroken string of ten successes (say, one casino visit per year for ten years) is:

${\left(1-1∕10\right)}^{10}\approx 0.35$, close to the limiting value $\mathrm{exp}\left(-1\right)\approx 0.37$.

This much success is reasonable, but simple psychology would suggest the gambler would boast about his skill instead of crediting it to luck. Moreover, simple psychology suggests the gambler would also blame one failure on oversight, momentary distraction, or even cheating by the casino!
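The arithmetic behind this story, with the values above (${T}_{0}=9$, $a=10$, ten visits):

```python
# probability of ruin on any one visit, fair game with T0 = 9, a = 10
p_ruin_per_visit = 1 - 9 / 10                    # = 1/10
# probability of an unbroken string of ten successful visits
p_ten_straight = (1 - p_ruin_per_visit) ** 10    # about 0.349
# the limit (1 - 1/m)^m -> exp(-1) ~ 0.368 explains the rough figure of 0.37
```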

#### Another Interpretation as a Random Walk

Another common interpretation of this probability game is to imagine it as a random walk. That is, imagine an individual on a number line, starting at some position ${T}_{0}$. The person takes a step to the right to ${T}_{0}+1$ with probability $p$ and takes a step to the left to ${T}_{0}-1$ with probability $q$ and continues this random process. Then instead of the total fortune at any time, consider the geometric position on the line at any time. Instead of reaching ﬁnancial ruin or attaining a ﬁnancial goal, instead consider reaching or passing a certain position. For example, Corollary 3 says that if $p\le q$, then the probability of visiting the origin before going to inﬁnity is $1$. The two interpretations are equivalent and both are useful depending on context. The problems below use the random walk interpretation, because they are more naturally posed in terms of reaching or passing certain points on the number line.

#### Using ruin probabilities for random walk on a circle

The following unusual example appeared as a weekly puzzle at the website FiveThirtyEight, The Riddler, August 4, 2017.

A class of $30$ children is playing a game where they all stand in a circle along with their teacher. The teacher is holding two things: a coin and a potato. The game progresses like this: The teacher tosses the coin. Whoever holds the potato passes it to the left if the coin comes up heads and to the right if the coin comes up tails. The game ends when every child except one has held the potato, and the one who hasn’t is the winner. How do a child’s chances of winning change depending on where they are in the circle? In other words, what is each child’s win probability?

Figure 1: A simulation of the potato game with the teacher at position $0$, $30$ students around the circle and $50$ passes of the potato.

Before solving the original problem, ﬁrst consider a smaller example. Consider a random walk on a circle, labeled at $4$ equally spaced points $0$ through $3$, the walk starting at $0$. Equivalently, consider the random walk on the group ${ℤ}_{4}$ and use all possible representations for the values of ${ℤ}_{4}$, sometimes from $0$ to $3$ as usual, sometimes from $-2$ to $1$ and sometimes from $-1$ to $2$ as convenient.

Consider the potato game on this $4$-vertex circle, and speciﬁcally consider the probability that all points on the circle are covered except for the vertex $2$, that is, in the potato game with teacher $0$ and $3$ students, the student $2$ (equivalently $-2$) is the winner.

Unfold the circle and use the theorem on ruin probabilities for a fair walk starting at $0$ and terminating with ruin at one boundary or victory at the other:

• Consider the event ${E}_{+}$ that the walk reaches the value $+2$ before the walk reaches $-1$. This means the walk does get as far as $2$ clockwise on the clock in some time, but not as far as $-1$ counterclockwise. The probability of this event is $1∕3$.
• Consider the event ${E}_{-}$ that the walk reaches the value $-2$ counterclockwise before the walk reaches $+1$ clockwise. The probability of this event is $1∕3$.
• These events are disjoint: If a sample point is in ${E}_{+}$ it must touch the vertex $1$ and hence cannot be in ${E}_{-}$. If the walk is in ${E}_{-}$ it must touch $-1$ so it cannot be in ${E}_{+}$.
• The complement ${E}^{\prime }$ of ${E}_{+}\cup {E}_{-}$ is the event that the walk visits both $1$ and $-1$ before visiting $\pm 2$; that is, vertex $2$ is the last vertex visited and student $2$ wins. Then the probability of ${E}^{\prime }$ is $1-\left(1∕3+1∕3\right)=1∕3$.

Now consider the original potato game with teacher $0$ and $30$ children and speciﬁcally consider the probability that all children are covered except for child $17$, for example. That is, in the potato game with teacher $0$ and $30$ students, student $17$ (equivalently $-14$) is the winner. The example uses student $17$ because this position is not symmetric with respect to the teacher.

• Consider the event ${E}_{+}$ that the walk reaches the value $+17$ before the walk reaches $-13$. This means the walk does get as far as $17$ clockwise on the clock in some time, but not as far as $-13\left(=18\right)$ counterclockwise. The probability of this event is $13∕30$.
• Consider the event ${E}_{-}$ that the walk reaches the value $-14$ counterclockwise before the walk reaches $+16$ clockwise. The probability of this event is $16∕30$.
• These events are disjoint: If a sample path is in ${E}_{+}$ it must touch the vertex $16$ in order to reach $17$ and hence cannot be in ${E}_{-}$. If the walk is in ${E}_{-}$ it must touch $-13$ to reach $-14$ so it cannot be in ${E}_{+}$.
• The complement ${E}^{\prime }$ of ${E}_{+}\cup {E}_{-}$ is the event that the walk covers every other vertex before reaching $17\left(\equiv -14\right)$; that is, student $17$ wins. Then the probability of ${E}^{\prime }$ is $1-\left(13∕30+16∕30\right)=1∕30$.

Finally generalize the potato game to have $N$ children. Consider the probability that all children have held the potato except for child $b$, where $b\le N$. Let $a=N-b\ge 0$. When necessary, equivalently label students around the circle with alternative values from ${ℤ}_{N+1}$. Then in the potato game with teacher $0$ and $N$ students, student $b$, or equivalently $-\left(a+1\right)$, is the winner.

• Consider the event ${E}_{+}$ that the walk reaches the value $b$ before the walk reaches $-a$. This means the walk does get as far as $b$ clockwise on the clock in some time, but not as far as $-a$ counterclockwise. The probability of this event is $a∕\left(a+b\right)$.
• Consider the event ${E}_{-}$ that the walk reaches the value $-\left(a+1\right)$ counterclockwise before the walk reaches $+\left(b-1\right)$ clockwise. The probability of this event is $\left(b-1\right)∕\left(a+b\right)$.
• These events are disjoint: If a sample path is in ${E}_{+}$ it must touch the vertex $b-1$ in order to reach $b$ and hence cannot be in ${E}_{-}$. If the walk is in ${E}_{-}$ it must touch $-a$ to reach $-\left(a+1\right)$ so it cannot be in ${E}_{+}$.
• The complement ${E}^{\prime }$ of ${E}_{+}\cup {E}_{-}$ is the event that the walk covers every other vertex before reaching $b\left(\equiv -\left(a+1\right)\right)$; that is, student $b$ wins. Then the probability of ${E}^{\prime }$ is $1-\left(\left(b-1\right)∕\left(a+b\right)+a∕\left(a+b\right)\right)=1∕\left(a+b\right)=1∕N$.

This result of uniform probability is somewhat surprising. For fair random walk on the line, the position after $n$ steps has a binomial distribution, with about 68% of the probability in the interval $\left[-\sqrt{n},\sqrt{n}\right]$ and about 95% in $\left[-2\sqrt{n},2\sqrt{n}\right]$, and the probability drops off rapidly beyond that. Wrapping the probabilities onto the circle, it would seem that the probabilities of the potato (or the random walk) passing through the symmetric adjacent positions $1$ and $30$ should be relatively large and the probability of being at positions $15$ and $16$ opposite the teacher should be relatively small. At first glance, then, it would seem that students $1$ and $30$ adjacent to the teacher will seldom win and that positions $15$ or $16$ should have the highest probability of winning. However, the game stops at a random time, after visiting all but one position, not at a fixed time. The next section will show that the expected time is $435$ steps. After $400$ steps, the standard deviation of the fair random walk is $20$, and wrapping the random walk onto the circle will cover the circle. Some sample paths will stay on one side of the teacher, so positions $1$ or $30$ can still win. Some sample paths will oscillate back and forth across the teacher’s position, as in the figure, so positions $15$ or $16$ can win.
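The uniform result invites simulation. The sketch below is our own code, not from the text; for speed it uses a small circle of $3$ students, where each child should win with probability $1∕3$.

```python
import random

def potato_winner(n_children, rng):
    """Play one potato game on a circle; return the winning child's position."""
    size = n_children + 1            # children 1..n_children plus teacher at 0
    covered = {0}                    # the teacher starts holding the potato
    pos = 0
    while len(covered) < size - 1:   # stop when exactly one position is unvisited
        pos = (pos + rng.choice((1, -1))) % size
        covered.add(pos)
    return (set(range(size)) - covered).pop()

rng = random.Random(7)
counts = {child: 0 for child in (1, 2, 3)}
for _ in range(9000):
    counts[potato_winner(3, rng)] += 1
# each of the 3 children should win roughly 9000/3 = 3000 games
```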

#### The interpretation as Markov Processes and Martingales

The fortune in the coin-tossing game is the ﬁrst and simplest encounter with two of the most important ideas in modern probability theory.

We can interpret the fortune in our gambler’s coin-tossing game as a Markov process. That is, at successive times the process is in various states. The states are the values of the fortune. The probability of passing from one state at the current time $t$ to another state at time $t+1$ is completely determined by the present state. That is, for our coin-tossing game

$ℙ\left[{T}_{t+1}=x+1|{T}_{t}=x\right]=p,\phantom{\rule{2em}{0ex}}ℙ\left[{T}_{t+1}=x-1|{T}_{t}=x\right]=q.$

The most important property of a Markov process is that the probability of being in the next state is completely determined by the current state and not the history of how the process arrived at the current state. In that sense, we often say that a Markov process is memory-less.

We can also note the fair coin-tossing game with $p=1∕2=q$ is a martingale. That is, the expected value of the process at the next step is the current value. Using expectation for estimation, the best estimate we have of the gambler’s fortune at the next step is the current fortune:

$𝔼\left[{T}_{n+1}|{T}_{n}=x\right]=\left(x+1\right)\left(1∕2\right)+\left(x-1\right)\left(1∕2\right)=x.$

This characterizes a fair game: after the next step, one can expect to be neither richer nor poorer. Note that the coin-tossing games with $p\ne q$ do not have this property.
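The one-step computation is easy to verify for any $p$; the helper name below is ours.

```python
def one_step_expectation(x, p):
    """E[T_{n+1} | T_n = x]: p (x+1) + q (x-1), which simplifies to x + (p - q)."""
    q = 1 - p
    return p * (x + 1) + q * (x - 1)

# p = 1/2 gives a martingale: the expectation equals the current fortune x.
# p != 1/2 shifts the expectation by p - q on every step.
```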

#### Sources

This section is adapted from W. Feller, An Introduction to Probability Theory and Its Applications, Volume I, Chapter XIV, page 342. Some material is adapted from Steele, and from Karlin and Taylor. Steele has an excellent discussion at about the same level as here, but with a slightly more rigorous approach to solving the difference equations. He also gives more information about the fact that the duration is almost surely finite, showing that all moments of the duration are finite. Karlin and Taylor give a treatment of the ruin problem by direct application of Markov chain analysis, which is not essentially different, but points to greater generality.

_______________________________________________________________________________________________

### Algorithms, Scripts, Simulations

#### Algorithm

The goal is to simulate the probability function for ruin with a given starting value. First set the probability $p$, the number of Bernoulli trials $n$, and the number of experiments $k$. Set the ruin and victory values $r$ and $v$, the boundaries for the random walk. For each starting value from ruin to victory, fill an $n×k$ matrix with Bernoulli random variables. Languages with multi-dimensional arrays keep the data in a three-dimensional array of size $n×k×\left(v-r+1\right)$. Cumulatively sum the Bernoulli random variables to create the fortune or random walk. For each starting value and each random walk or fortune path, find the step where ruin or victory is encountered. For each starting value, find the proportion of fortunes encountering ruin. Finally, find a least squares linear fit of the ruin probabilities as a function of the starting value.

#### Scripts

Geogebra
Geogebra script for ruin probabilities.
R

```r
p <- 0.5
n <- 150
k <- 60

victory <- 10   # top boundary for random walk
ruin <- -10     # bottom boundary for random walk
interval <- victory - ruin + 1

winLose <- 2 * (array( 0+(runif(n*k*interval) <= p), dim=c(n,k,interval))) - 1
# 0+ coerces Boolean to numeric
totals <- apply( winLose, 2:3, cumsum)
# the second argument "2:3" means column-wise in each panel
start <- outer( array(1, dim=c(n+1,k)), ruin:victory, "*")

paths <- array( 0, dim=c(n+1, k, interval) )
paths[2:(n+1), 1:k, 1:interval] <- totals
paths <- paths + start

hitVictory <- apply(paths, 2:3, function(x) match(victory, x, nomatch=n+2))
hitRuin    <- apply(paths, 2:3, function(x) match(ruin,    x, nomatch=n+2))
# If no ruin or victory on a walk, nomatch=n+2 sets the hitting
# time to be two more than the number of steps, one more than
# the column length.  Without the nomatch option, get NA which
# works poorly with the comparison hitRuin < hitVictory next.

probRuinBeforeVictory <-
    apply( (hitRuin < hitVictory), 2,
           function(x) length(which(x, arr.ind=FALSE)) ) / k

startValues <- ruin:victory
ruinFunction <- lm(probRuinBeforeVictory ~ startValues)
# lm is the R function for linear models, a more general view of
# least squares linear fitting for response ~ terms
cat(sprintf("Ruin function Intercept: %f \n", coefficients(ruinFunction)[1]))
cat(sprintf("Ruin function Slope: %f \n", coefficients(ruinFunction)[2]))

plot(startValues, probRuinBeforeVictory)
abline(ruinFunction)
```
Octave

```octave
p = 0.5;
n = 150;
k = 60;

victory =  10;  # top boundary for random walk
ruin    = -10;  # bottom boundary for random walk

probRuinBeforeVictory = zeros(1, victory-ruin+1);
for start = ruin:victory

    winLose = 2 * (rand(n,k) <= p) - 1;
    # -1 for Tails, 1 for Heads
    totals = cumsum(winLose);
    # -n..n (every other integer) binomial rv sample

    paths = [zeros(1,k); totals] + start;
    victoryOrRuin = zeros(1,k);
    for j = 1:k
        hitVictory = find(paths(:,j) >= victory);
        hitRuin    = find(paths(:,j) <= ruin);
        if ( !rows(hitVictory) && !rows(hitRuin) )
            # no victory, no ruin: do nothing
        elseif ( rows(hitVictory) && !rows(hitRuin) )
            # victory, no ruin
            victoryOrRuin(j) = hitVictory(1);
        elseif ( !rows(hitVictory) && rows(hitRuin) )
            # no victory, but hit ruin
            victoryOrRuin(j) = -hitRuin(1);
        else
            # both victory and ruin occur
            if ( hitVictory(1) < hitRuin(1) )
                victoryOrRuin(j) = hitVictory(1);
                # code hitting victory
            else
                victoryOrRuin(j) = -hitRuin(1);
                # code hitting ruin as negative
            endif
        endif
    endfor

    probRuinBeforeVictory(start + (-ruin+1)) = sum( victoryOrRuin < 0 )/k;
endfor

function coeff = least_square (x,y)
    n = length(x);
    A = [x ones(n,1)];
    coeff = A\y;
    plot(x, y, "x");
    hold on
    interv = [min(x) max(x)];
    plot(interv, coeff(1)*interv + coeff(2));
end

rf = least_square(transpose( ruin:victory ), transpose(probRuinBeforeVictory));
disp("Ruin function Intercept:"), disp(rf(2))
disp("Ruin function Slope:"),     disp(rf(1))
hold off
```
Perl

```perl
use PDL;
use PDL::NiceSlice;

$p        = 0.5;
$n        = 150;
$k        = 60;
$victory  = 10;
$ruin     = -10;
$interval = $victory - $ruin + 1;

$winLose = 2 * ( random( $k, $n, $interval ) <= $p ) - 1;
$totals  = ( cumusumover $winLose->xchg( 0, 1 ) )->transpose;
$start   = zeroes( $k, $n + 1, $interval )->zlinvals( $ruin, $victory );

$paths = zeroes( $k, $n + 1, $interval );

# uses PDL::NiceSlice on the next line
$paths ( 0 : ( $k - 1 ), 1 : $n, 0 : ( $interval - 1 ) ) .= $totals;

$paths      = $paths + $start;
$hitVictory = $paths->setbadif( $paths < $victory );
$hitRuin    = $paths->setbadif( $paths > $ruin );

$victoryIndex =
    ( $hitVictory ( ,, : )->xchg( 0, 1 )->minimum_ind )
    ->inplace->setbadtoval( $n + 1 );
$ruinIndex =
    ( $hitRuin ( ,, : )->xchg( 0, 1 )->maximum_ind )
    ->inplace->setbadtoval( $n + 1 );

$probRuinBeforeVictory = sumover( float( $ruinIndex < $victoryIndex ) ) / $k;

use PDL::Fit::Linfit;
$x = zeroes($interval)->xlinvals( $ruin, $victory );
# note the use of the concat operator here
$fitFuncs = cat ones($interval), $x;
( $ruinFunction, $coeffs ) = linfit1d $probRuinBeforeVictory, $fitFuncs;
print "Ruin function Intercept:", $coeffs (0), "\n";
print "Ruin function Slope:",     $coeffs (1), "\n";
```
SciPy

```python
import numpy as np

p = 0.5
n = 150
k = 60
victory = 10
ruin = -10
interval = victory - ruin + 1

winLose = 2 * (np.random.random((n, k, interval)) <= p) - 1
totals = np.cumsum(winLose, axis=0)

start = np.multiply.outer(np.ones((n+1, k), dtype=int),
                          np.arange(ruin, victory+1, dtype=int))
paths = np.zeros((n+1, k, interval), dtype=int)
paths[1:n+1, :, :] = totals
paths = paths + start

def match(a, b, nomatch=None):
    # position of the first match of scalar a in python list b,
    # or the value nomatch if a is not there; modeled on the R
    # function "match", but with less generality
    return b.index(a) if a in b else nomatch

hitVictory = np.apply_along_axis(
    lambda x: match(victory, x.tolist(), nomatch=n+2), 0, paths)
hitRuin = np.apply_along_axis(
    lambda x: match(ruin, x.tolist(), nomatch=n+2), 0, paths)
# If no ruin or victory on a walk, nomatch=n+2 sets the hitting
# time to be two more than the number of steps, one more than
# the column length.

probRuinBeforeVictory = np.mean(hitRuin < hitVictory, axis=0)
# note that the booleans act as binary data

ruinFunction = np.polyfit(np.arange(ruin, victory+1, dtype=int),
                          probRuinBeforeVictory, 1)
print("Ruin function Intercept:", ruinFunction[1])
print("Ruin function Slope:", ruinFunction[0])
# should return a slope near -1/(victory-ruin) and an intercept near 0.5
```

__________________________________________________________________________

### Problems to Work for Understanding

1. Consider the ruin probabilities ${q}_{{T}_{0}}$ as a function of ${T}_{0}$. What is the domain of ${q}_{{T}_{0}}$ ? What is the range of ${q}_{{T}_{0}}$ ? Explain heuristically why ${q}_{{T}_{0}}$ is decreasing as a function of ${T}_{0}$.
2. Show that power functions have the property that the ratio of successive diﬀerences is constant.
3. Show the sequence $\left[{\left(q∕p\right)}^{n}-1\right]∕n$ is an increasing sequence for $0<p<1∕2$.
4. In a random walk starting at the origin ﬁnd the probability that the point $a>0$ will be reached before the point $-b<0$.
5. James Bond wants to ruin the casino at Monte Carlo by consistently betting 1 Euro on Red at the roulette wheel. The probability of Bond winning at one turn in this game is $18∕38\approx 0.474$. James Bond, being Agent 007, is backed by the full ﬁnancial might of the British Empire, and so can be considered to have unlimited funds. Approximately how much money should the casino have to start with so that Bond has only a “one-in-a-million” chance of ruining the casino?
6. A gambler starts with $2 and wants to win $2 more to get to a total of $4 before being ruined by losing all his money. He plays a coin-flipping game, with a coin that changes with his fortune.
    1. If the gambler has $2 he plays with a coin that gives probability $p=1∕2$ of winning a dollar and probability $q=1∕2$ of losing a dollar.
    2. If the gambler has $3 he plays with a coin that gives probability $p=1∕4$ of winning a dollar and probability $q=3∕4$ of losing a dollar.
    3. If the gambler has $1 he plays with a coin that gives probability $p=3∕4$ of winning a dollar and probability $q=1∕4$ of losing a dollar.

Use “ﬁrst step analysis” to write three equations in three unknowns (with two additional boundary conditions) that give the probability that the gambler will be ruined. Solve the equations to ﬁnd the ruin probability.

7. A gambler plays a coin flipping game in which the probability of winning on a flip is $p=0.4$ and the probability of losing on a flip is $q=1-p=0.6$. The gambler wants to reach the victory level of $16 before being ruined with a fortune of $0. The gambler starts with $8, bets $2 on each flip when the fortune is $6, $8, or $10, and bets $4 when the fortune is $4 or $12. Compute the probability of ruin in this game.
8. Prove: In a random walk starting at the origin the probability to reach the point $a>0$ before returning to the origin equals $p\left(1-{q}_{1}\right)$.
9. Prove: In a random walk starting at $a>0$ the probability to reach the origin before returning to the starting point equals $q{q}_{a-1}$.
10. In the simple case $p=1∕2=q$, conclude from the preceding problem: In a random walk starting at the origin, the number of visits to the point $a>0$ that take place before the ﬁrst return to the origin has a geometric distribution with ratio $1-q{q}_{a-1}$. (Why is the condition $q\ge p$ necessary?)
1. Draw a sample path of a random walk (with $p=1∕2=q$) starting from the origin where the walk visits the position $5$ twice before returning to the origin.
2. Using the results from the previous problems, it can be shown with careful but elementary reasoning that the number of visits $N$ that a random walk (with $p=1∕2=q$) makes to the value $a$ before returning to the origin is a geometric random variable with probability
$ℙ\left[N=n\right]={\left(\frac{1}{2a}\right)}^{n}\left(1-\frac{1}{2a}\right).$

Compute the expected number of visits $𝔼\left[N\right]$ to level $a$.

3. Compare the expected number of visits of a random walk (with $p=1∕2=q$) to the value $1,000,000$ before returning to the origin and to the level $10$ before returning to the origin.
11. This problem is adapted from Stochastic Calculus and Financial Applications by J. Michael Steele, Springer, New York, 2001, Chapter 1, Section 1.6, page 9. Information on buy-backs is adapted from investorwords.com. This problem suggests how results on biased random walks can be worked into more realistic models.

Consider a naive model for a stock that has a support level of $20 per share because of a corporate buy-back program. (This means the company will buy back stock if shares dip below $20 per share. In the case of stocks, this reduces the number of shares outstanding, giving each remaining shareholder a larger percentage ownership of the company. This is usually considered a sign that the company's management is optimistic about the future and believes that the current share price is undervalued. Reasons for buy-backs include putting unused cash to use, raising earnings per share, increasing internal control of the company, and obtaining stock for employee stock option plans or pension plans.) Suppose also that the stock price moves randomly with a downward bias when the price is above $20, and randomly with an upward bias when the price is below $20. To make the problem concrete, we let ${S}_{n}$ denote the stock price at time $n$, and we express our stock support hypothesis by the assumptions that

$\begin{array}{rcll}ℙ\left[{S}_{n+1}=21|{S}_{n}=20\right]& =& 9∕10& \text{}\\ ℙ\left[{S}_{n+1}=19|{S}_{n}=20\right]& =& 1∕10& \text{}\end{array}$

We then reflect the downward bias at price levels above $20 by requiring that for $k>20$: $\begin{array}{rcll}ℙ\left[{S}_{n+1}=k+1|{S}_{n}=k\right]& =& 1∕3& \text{}\\ ℙ\left[{S}_{n+1}=k-1|{S}_{n}=k\right]& =& 2∕3.& \text{}\end{array}$ We then reflect the upward bias at price levels below $20 by requiring that for $k<20$:

$\begin{array}{rcll}ℙ\left[{S}_{n+1}=k+1|{S}_{n}=k\right]& =& 2∕3& \text{}\\ ℙ\left[{S}_{n+1}=k-1|{S}_{n}=k\right]& =& 1∕3& \text{}\end{array}$

Using the methods of "single-step analysis" calculate the expected time for the stock to fall from $25 through the support level all the way down to $18. (Because of the varying parameters there is no way to solve this problem using formulas. Instead you will have to go back to basic principles of single-step or first-step analysis to solve the problem.)

12. Modify the ruin probability scripts to perform simulations of the ruin calculations in the table in the section Some Calculations for Illustration and compare the results.
13. Do some simulations of the coin-ﬂipping game, varying $p$ and the start value. How does the value of $p$ aﬀect the experimental probability of victory and ruin?
14. Modify the simulations by changing the value of $p$ and comparing the experimental results for each starting value to the theoretical ruin function.
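Problems 6, 7, and 11 all use the same "first-step analysis" pattern: one linear equation per interior fortune, plus boundary conditions, solved as a small linear system. As a sketch of that bookkeeping (not a substitute for setting up the equations yourself), here is the system for the state-dependent coin of Problem 6, written in Python with NumPy; the win probabilities `p` are the ones stated in the problem:

```python
import numpy as np

# Ruin probabilities r(x) for Problem 6 by first-step analysis:
#   r(0) = 1, r(4) = 0, and for interior fortunes x,
#   r(x) = q(x)*r(x-1) + p(x)*r(x+1),
# where p(x) is the win probability stated in the problem.
p = {1: 3/4, 2: 1/2, 3: 1/4}

# Unknowns r(1), r(2), r(3); move everything to the left-hand side:
#   -q(x)*r(x-1) + r(x) - p(x)*r(x+1) = boundary contribution
A = np.zeros((3, 3))
b = np.zeros(3)
for i, x in enumerate([1, 2, 3]):
    A[i, i] = 1.0
    q = 1 - p[x]
    if x - 1 >= 1:
        A[i, i - 1] = -q
    else:
        b[i] += q * 1.0   # r(0) = 1 at the ruin boundary
    if x + 1 <= 3:
        A[i, i + 1] = -p[x]
    # r(4) = 0, so the victory boundary contributes nothing

r1, r2, r3 = np.linalg.solve(A, b)
print("ruin probability starting from $2:", r2)
```

The same scaffold handles Problem 7 (different states, bet sizes, and win probabilities) and, with both boundary values set to 0 and a constant 1 added to each right-hand side, the expected-duration question in Problem 11.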

__________________________________________________________________________ ### References

   William Feller. An Introduction to Probability Theory and Its Applications, Volume I. John Wiley and Sons, third edition, 1973. QA 273 F3712.

   S. Karlin and H. Taylor. A Second Course in Stochastic Processes. Academic Press, 1981.

   J. Michael Steele. Stochastic Calculus and Financial Applications. Springer-Verlag, 2001. QA 274.2 S 74.

__________________________________________________________________________ 1. Virtual Labs in Probability. Games of Chance. Scroll down and select the Red and Black Experiment (marked in red in the Applets section). Read the description, since the scenario is slightly different from, but equivalent to, the description above.
2. University of California, San Diego, Department of Mathematics, A.M. Garsia. A java applet that simulates how long it takes for a gambler to go broke. You can control how much money you and the casino start with, the house odds, and the maximum number of games. Results are a graph and a summary table. Submitted by Matt Odell, September 8, 2003.
3. Eric Weisstein, World of Mathematics. A good description of gambler's ruin, martingales, and many other coin tossing, dice, and probability problems. Submitted by Yogesh Makkar, September 16th, 2003.

__________________________________________________________________________

I check all the information on each page for correctness and typographical errors. Nevertheless, some errors may occur and I would be grateful if you would alert me to such errors. I make every reasonable eﬀort to present current and accurate information for public use, however I do not guarantee the accuracy or timeliness of information on this website. Your use of the information from this website is strictly voluntary and at your risk.

I have checked the links to external sites for usefulness. Links to external websites are provided as a convenience. I do not endorse, control, monitor, or guarantee the information contained in any external website. I don't guarantee that the links are active at all times. Use the links here with the same caution as you would all information on the Internet. This website reflects the thoughts, interests and opinions of its author. It does not explicitly represent official positions or policies of my employer.

Information on this website is subject to change without notice.