Steven R. Dunbar
Department of Mathematics
203 Avery Hall
Lincoln, NE 68588-0130
http://www.math.unl.edu
Voice: 402-472-3731
Fax: 402-472-8466

Topics in
Probability Theory and Stochastic Processes
Steven R. Dunbar

__________________________________________________________________________

Duration of the Gambler’s Ruin

_______________________________________________________________________

Note: These pages are prepared with MathJax. MathJax is an open source JavaScript display engine for mathematics that works in all browsers. See http://mathjax.org for details on supported browsers, accessibility, copy-and-paste, and other features.

_______________________________________________________________________________________________

### Rating

Mathematically Mature: may contain mathematics beyond calculus with proofs.

_______________________________________________________________________________________________

### Section Starter Question

Consider a gambler who wins or loses a dollar on each turn of a fair game with probabilities $p=1∕2$ and $q=1∕2$ respectively. Let his initial capital be $10. The game continues until the gambler's capital either reduces to $0 or increases to $20. What is the length of the shortest possible game the gambler could play? What are the chances of this shortest possible game? What is the length of the second shortest possible game? How would you find the probability of this second shortest possible game occurring?

_______________________________________________________________________________________________

### Key Concepts

1. The principle of ﬁrst-step analysis, also known as conditional expectations, provides equations for important properties of coin-ﬂipping games and random walks. The important properties include ruin probabilities and the duration of the game until ruin.
2. Diﬀerence equations derived from ﬁrst-step analysis or conditional expectations provide the way to deduce the expected length of the game in the gambler’s ruin, just as for the probability of ruin or victory.

__________________________________________________________________________

### Vocabulary

1. Expectation by conditioning is the process of deriving a diﬀerence equation for the expectation by conditioning the outcome over an exhaustive, mutually exclusive set of events, each of which leads to a simpler probability calculation, then weighting by the probability of each outcome of the conditioning events.
2. First Step Analysis is how J. Michael Steele refers to the simple expectation by conditioning process that we use to analyze the ruin probabilities and expected duration. It is a more speciﬁc description for coin-tossing games of the more general technique of expectation by conditioning.

__________________________________________________________________________

### Mathematical Ideas

#### Understanding a Stochastic Process

Start with a sequence of Bernoulli random variables, ${Y}_{1},{Y}_{2},{Y}_{3},\dots$ where ${Y}_{i}=+1$ with probability $p$ and ${Y}_{i}=-1$ with probability $q$. Start with an initial value ${T}_{0}$ and set ${Y}_{0}={T}_{0}$ for convenience. Define the sequence of sums ${T}_{n}={\sum }_{i=0}^{n}{Y}_{i}$. The goal is to understand the stochastic process ${T}_{1},{T}_{2},{T}_{3},\dots$. Look at how many trials the process will experience until it achieves the value $0$ or $a$. In symbols, consider $N=\min \left\{n\ge 1:{T}_{n}=0\text{ or }{T}_{n}=a\right\}$. Look at the expected value of the number of trials, $D=𝔼\left[N\right]$. This is a special case of a larger class of probability problems called first-passage distributions for first-passage times.
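The process and its first-passage time are easy to simulate. The short sketch below (the function name is my own, not from the text) estimates $D=𝔼\left[N\right]$ by averaging many simulated games:

```python
import random

def first_passage_time(T0, a, p, rng=random.Random(7)):
    """Simulate T_n from T_0 until it first hits 0 or a; return the number of trials N."""
    T, N = T0, 0
    while 0 < T < a:
        T += 1 if rng.random() < p else -1
        N += 1
    return N

# Estimate D = E[N] for the fair game with T_0 = 10, a = 20;
# empirically this is close to T_0*(a - T_0) = 100, anticipating the theorem below.
trials = [first_passage_time(10, 20, 0.5) for _ in range(20000)]
print(sum(trials) / len(trials))
```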


#### Expected length of the game

Note that the following argument implicitly assumes that the expected duration of the game is finite. This fact is true; see below for a proof.

Theorem 1. The expected duration of the game in the classical ruin problem is

${D}_{{T}_{0}}=\frac{{T}_{0}}{q-p}-\frac{a}{q-p}\frac{1-{\left(q∕p\right)}^{{T}_{0}}}{1-{\left(q∕p\right)}^{a}},\qquad p\ne q,$

and

${D}_{{T}_{0}}={T}_{0}\left(a-{T}_{0}\right),\qquad p=q=1∕2.$

Proof. If the ﬁrst trial results in success, the game continues as if the initial position had been ${T}_{0}+1$. The conditional expectation of the duration conditioned on success at the ﬁrst trial is therefore ${D}_{{T}_{0}+1}+1$. Likewise if the ﬁrst trial results in a loss, the duration conditioned on the loss at the ﬁrst trial is ${D}_{{T}_{0}-1}+1$.

This argument shows that the expected duration satisﬁes the diﬀerence equation, obtained by expectation by conditioning

${D}_{{T}_{0}}=p{D}_{{T}_{0}+1}+q{D}_{{T}_{0}-1}+1$

with the boundary conditions

${D}_{0}=0,{D}_{a}=0.$
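Before solving in closed form, note that the difference equation together with the boundary conditions is just a finite linear system, one equation per interior fortune. The sketch below solves it directly (the function name and the choice of the Thomas algorithm for the tridiagonal system are mine, not the text's):

```python
def duration_by_first_step(a, p):
    """Solve D_i = p*D_{i+1} + q*D_{i-1} + 1 with D_0 = D_a = 0.

    Rewritten as -q*D_{i-1} + D_i - p*D_{i+1} = 1 for i = 1..a-1,
    a tridiagonal system solved here by the Thomas algorithm.
    """
    q = 1 - p
    n = a - 1                      # interior unknowns D_1 .. D_{a-1}
    sub, diag, sup = [-q] * n, [1.0] * n, [-p] * n
    rhs = [1.0] * n
    for i in range(1, n):          # forward elimination
        w = sub[i] / diag[i - 1]
        diag[i] -= w * sup[i - 1]
        rhs[i] -= w * rhs[i - 1]
    D = [0.0] * (a + 1)            # boundary values D_0 = D_a = 0
    D[n] = rhs[n - 1] / diag[n - 1]
    for i in range(n - 1, 0, -1):  # back substitution
        D[i] = (rhs[i - 1] - sup[i - 1] * D[i + 1]) / diag[i - 1]
    return D

D = duration_by_first_step(10, 0.5)
print(D[5])  # fair game: matches T0*(a - T0) = 5*5 = 25
```

This numerical route needs no guess about the form of the solution, which makes it a useful check on the closed forms derived next.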

The appearance of the term 1 makes the diﬀerence equation non-homogeneous. Taking a cue from linear algebra, or more speciﬁcally the theory of linear non-homogeneous diﬀerential equations, we need to ﬁnd the general solution to the homogeneous equation

${D}_{{T}_{0}}^{h}=p{D}_{{T}_{0}+1}^{h}+q{D}_{{T}_{0}-1}^{h}$

and a particular solution to the non-homogeneous equation. We already know the general solution to the homogeneous equation is ${D}_{{T}_{0}}^{h}=A+B{\left(q∕p\right)}^{{T}_{0}}$. The best way to ﬁnd the particular solution is inspired guessing, based on good experience. Re-write the non-homogeneous equation for the particular solution as

$-1=p{D}_{{T}_{0}+1}-{D}_{{T}_{0}}+q{D}_{{T}_{0}-1}.$

The right side is a weighted second difference, a difference equation analog of the second derivative. Functions whose second derivative is a constant are quadratic functions. Therefore, it makes sense to try a function of the form ${D}_{{T}_{0}}^{p}=k+l{T}_{0}+m{T}_{0}^{2}$. The exercises show that the particular solution is actually ${D}_{{T}_{0}}^{p}={T}_{0}∕\left(q-p\right)$ if $p\ne q$.
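A quick numerical check of this claim (a sketch; the exercise asks for the algebra) plugs ${T}_{0}∕\left(q-p\right)$ back into the non-homogeneous equation:

```python
p, q = 0.45, 0.55  # any p != q works here

def D_particular(T0):
    # trial particular solution T_0 / (q - p)
    return T0 / (q - p)

# verify D(T0) = p*D(T0 + 1) + q*D(T0 - 1) + 1 at several fortunes
for T0 in range(1, 20):
    residual = p * D_particular(T0 + 1) + q * D_particular(T0 - 1) + 1 - D_particular(T0)
    assert abs(residual) < 1e-9
print("T0/(q - p) satisfies the non-homogeneous equation")
```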

It follows that the general solution of the duration equation is:

${D}_{{T}_{0}}={T}_{0}∕\left(q-p\right)+A+B{\left(q∕p\right)}^{{T}_{0}}.$

The boundary conditions require that

$A+B=0$

and

$A+B{\left(q∕p\right)}^{a}+a∕\left(q-p\right)=0.$

Solving for $A$ and $B$,

${D}_{{T}_{0}}=\frac{{T}_{0}}{q-p}-\frac{a}{q-p}\frac{1-{\left(q∕p\right)}^{{T}_{0}}}{1-{\left(q∕p\right)}^{a}}.$

The calculations are not valid if $p=1∕2=q$. In this case, the particular solution ${T}_{0}∕\left(q-p\right)$ no longer makes sense for the equation

${D}_{{T}_{0}}=\frac{1}{2}{D}_{{T}_{0}+1}+\frac{1}{2}{D}_{{T}_{0}-1}+1$

The reasoning about the particular solution remains the same, however, and the particular solution is $-{T}_{0}^{2}$. It follows that the general solution is of the form ${D}_{{T}_{0}}=-{T}_{0}^{2}+A+B{T}_{0}$. The required solution satisfying the boundary conditions is

${D}_{{T}_{0}}={T}_{0}\left(a-{T}_{0}\right).$
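Both closed forms are easy to evaluate directly (a short sketch; the function name is mine). The values agree with the table in Some Calculations for Illustration later in this section:

```python
def expected_duration(T0, a, p):
    """Expected duration of the classical ruin problem, both cases of Theorem 1."""
    q = 1 - p
    if p == q:
        return T0 * (a - T0)
    r = q / p
    return T0 / (q - p) - (a / (q - p)) * (1 - r**T0) / (1 - r**a)

print(expected_duration(9, 10, 0.5))      # fair game: 9*(10 - 9) = 9
print(expected_duration(950, 1000, 0.5))  # 950*50 = 47500
print(expected_duration(90, 100, 0.45))   # about 765.6, tabulated as 766
```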

Corollary 1. Playing until ruin with no upper goal for victory against an infinitely rich adversary, the expected duration of the game until ruin is

${D}_{{T}_{0}}={T}_{0}∕\left(q-p\right),\qquad p<q,$

and

${D}_{{T}_{0}}=\infty ,\qquad p\ge q.$

Proof. Pass to the limit $a\to \infty$ in the preceding formulas. □
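The limit is visible numerically (a sketch restating the closed form so the snippet stands alone): with $p<q$ the expected duration until ruin approaches ${T}_{0}∕\left(q-p\right)$ as the victory goal $a$ grows, while for $p\ge q$ it grows without bound.

```python
def expected_duration(T0, a, p):
    q = 1 - p
    if p == q:
        return T0 * (a - T0)
    r = q / p
    return T0 / (q - p) - (a / (q - p)) * (1 - r**T0) / (1 - r**a)

# p < q: durations approach T0/(q - p) = 10/0.1 = 100 as a -> infinity
for a in (20, 50, 100, 200):
    print(a, expected_duration(10, a, 0.45))

# p = q: duration T0*(a - T0) diverges with a
print(expected_duration(10, 10**6, 0.5))
```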

#### Proof that the duration is ﬁnite

The following discussion of ﬁniteness of the duration of the game is adapted from [2] by J. Michael Steele.

The arguments for the probability of ruin and for the duration of the game have a logical gap: they assume that the duration ${D}_{{T}_{0}}$ of the game is finite. Is it certain that the gambler's net winnings will eventually reach $a$ or $0$? This important fact requires proof.

The proof uses a common argument in probability, an “extreme case argument”. Identify an “extreme” event with a small but positive probability of occurring. The complementary “good” event, which avoids the extreme event, then has probability slightly less than $1$. For the game to continue forever, the avoidance must happen infinitely many independent times, but the probability of such a run of “good” events goes to zero.

For the gambler’s ruin, the event of interest is the game continuing forever. Consider the extreme event that the gambler wins $a$ times in a row. If the gambler is not already ruined (at $0$), then such a streak of $a$ wins in a row is guaranteed to boost his fortune above $a$ and end the game in victory for the gambler. Such a run of luck is unlikely, but it has positive probability, in fact, probability $P={p}^{a}$. Let ${E}_{k}$ denote the event that the gambler wins on each turn in the time interval $\left[ka,\left(k+1\right)a-1\right]$, so the ${E}_{k}$ are independent events. Hence the complementary events ${E}_{k}^{C}=\Omega -{E}_{k}$ are also independent. Now ${D}_{{T}_{0}}>na$ implies that all of the ${E}_{k}$ with $0\le k\le n$ fail to occur. Thus, we find

$ℙ\left[{D}_{{T}_{0}}>na\right]\le ℙ\left[\bigcap _{k=0}^{n}{E}_{k}^{C}\right]={\left(1-P\right)}^{n}.$

Note that

$ℙ\left[{D}_{{T}_{0}}=\infty \right]\le ℙ\left[{D}_{{T}_{0}}>na\right]$

for all $n$. Hence, $ℙ\left[{D}_{{T}_{0}}=\infty \right]=0$, justifying the earlier assumption that the duration must be ﬁnite.
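Concretely, the bound decays geometrically (a small sketch using the fair-game numbers $p=1∕2$, $a=10$; the variable names are mine):

```python
p, a = 0.5, 10
P = p**a  # probability of a winning streak of a in one block of a flips
# P[D > n*a] <= (1 - P)^n: the chance the game outlasts n blocks of a flips
for n in (100, 1000, 10000):
    print(n * a, (1 - P)**n)
# the bound tends to 0 as n grows, so the game ends with probability 1
```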

The duration can be considerably longer than naively expected. For instance in a fair game, with two players with $500 each flipping a coin until one is ruined, the average duration of the game is 250,000 trials. If a gambler has only $1 and his adversary $1000, with a fair coin toss, the average duration of the game is 999 trials, although some games will be quite short! Very long games can occur with sufficient probability to give a long average.

#### Some Calculations for Illustration

| $p$ | $q$ | ${T}_{0}$ | $a$ | Probability of Ruin | Expected Duration |
|------|------|-------|--------|--------|------------|
| 0.5 | 0.5 | 9 | 10 | 0.1000 | 9 |
| 0.5 | 0.5 | 90 | 100 | 0.1000 | 900 |
| 0.5 | 0.5 | 900 | 1,000 | 0.1000 | 90,000 |
| 0.5 | 0.5 | 950 | 1,000 | 0.0500 | 47,500 |
| 0.5 | 0.5 | 8,000 | 10,000 | 0.2000 | 16,000,000 |
| 0.45 | 0.55 | 9 | 10 | 0.2101 | 11 |
| 0.45 | 0.55 | 90 | 100 | 0.8656 | 766 |
| 0.45 | 0.55 | 99 | 100 | 0.1818 | 172 |
| 0.4 | 0.6 | 90 | 100 | 0.9827 | 441 |
| 0.4 | 0.6 | 99 | 100 | 0.3333 | 162 |

#### Application to Walks on a Circle

Stand on a large clock, say on the $1$. Now flip a coin and move ahead one hour if the coin turns up heads, and back one hour otherwise. Keep repeating the process until the walk visits all $12$ numbers. How long, on average, will this random walk take? If you generalize to clocks with $H$ positions (for the number of Hours), how does the expected time depend on $H$?

Let ${V}_{k}$ be the set of hour marks visited at time $k$, that is, the collection of all vertices already visited up to this time. Let the random variable $t\left(n\right)=\min \left\{k:|{V}_{k}|=n\right\}$ for $n\le H$ be the first time to visit $n$ distinct vertices. Then for example it is always the case that $t\left(1\right)=0$, and $t\left(2\right)=1$. The goal is to calculate $𝔼\left[t\left(H\right)\right]$.

The key realization is that the relation between $t\left(n\right)$ and $t\left(n+1\right)$ is simple. Consider the situation at time $t\left(n\right)$ for $n<H$. The walk has visited $n$ vertices and is at one extreme of the contiguous set of vertices in ${V}_{t\left(n\right)}$.
Adding labels, without loss of generality, say the walk is at $0$, and the walk has visited $1,2,\dots ,n-1$. The next increment $t\left(n+1\right)-t\left(n\right)$ is the first time the walk hits either $-1$ or $n$. (Note that in the case $n=H-1$ these are the same vertex, but that is irrelevant.) Therefore, from the basic fact stated above, $𝔼\left[t\left(n+1\right)-t\left(n\right)\right]=1\cdot n=n$.

Adding all the increments, the sum telescopes on the left to give the sum on the right:

$𝔼\left[t\left(n\right)\right]=1+\cdots +\left(n-1\right)=n\left(n-1\right)∕2$

for $n\le H$. In particular, $𝔼\left[t\left(H\right)\right]=H\left(H-1\right)∕2$.

Remark. This conclusion is intuitively plausible for the following reason. At a given time, the “width” or span of the visited sites should be proportional to the square root of the time. In order to visit the full span of $H∕2$ sites on either side of the starting point, it should take time about ${H}^{2}$. The example makes this explicit and precise.

The following example appeared as a weekly puzzle at the website FiveThirtyEight, The Riddler, August 4, 2017. A class of $30$ children is playing a game where they all stand in a circle along with their teacher. The teacher is holding two things: a coin and a potato. The game progresses like this: The teacher tosses the coin. Whoever holds the potato passes it to the left if the coin comes up heads and to the right if the coin comes up tails. The game ends when every child except one has held the potato, and the one who hasn’t is declared the winner. How do a child’s chances of winning change depending on where they are in the circle? In other words, what is each child’s win probability?

The analysis in the previous section on the probability of ruin applies to show that the probability of any particular child winning the game is $1∕30$.
Now the previous analysis shows that the expected duration of the game, when the potato has been in the hands of the teacher and exactly $29$ children, leaving only the winner, is $30\cdot 29∕2=435$ steps or passes of the potato.

#### Sources

This section is adapted from [2] with additional background information from [1]. The example of the walks on a circle is adapted from John Cook, Walks on a Clock, with additional comments from Jeff Garrett. The example of the coin and potato game appeared as a weekly puzzle at the website FiveThirtyEight, The Riddler, August 4, 2017.

### Algorithms, Scripts, Simulations

#### Algorithm

The goal is to simulate the duration until ruin or victory as a function of starting value. First set the probability $p$, number of Bernoulli trials $n$, and number of experiments $k$. Set the ruin and victory values $r$ and $v$, also interpreted as the boundaries for the random walk. For each starting value from ruin to victory, fill an $n\times k$ matrix with the Bernoulli random variables. Languages with multi-dimensional arrays keep the data in a three-dimensional array of size $n\times k\times \left(v-r+1\right)$. Cumulatively sum the Bernoulli random variables to create the fortune or random walk. For each starting value, for each random walk or fortune path, find the duration until ruin or victory. For each starting value, find the mean of the duration until ruin or victory. Finally, find a least squares polynomial fit for the duration as a function of the starting value.

GeoGebra: GeoGebra script for duration. GeoGebra script for ruin probabilities.
R:

```r
p <- 0.5
n <- 300
k <- 200

victory <- 10
# top boundary for random walk
ruin <- 0
# bottom boundary for random walk
interval <- victory - ruin + 1

winLose <- 2 * (array( 0+(runif(n*k*interval) <= p), dim=c(n,k,interval))) - 1
# 0+ coerces Boolean to numeric
totals <- apply( winLose, 2:3, cumsum)
# the second argument "2:3" means column-wise in each panel
start <- outer( array(1, dim=c(n+1,k)), ruin:victory, "*")

paths <- array( 0 , dim=c(n+1, k, interval) )
paths[2:(n+1), 1:k, 1:interval] <- totals
paths <- paths + start

hitVictory <- apply(paths, 2:3, (function(x)match(victory,x, nomatch=n+2)));
hitRuin    <- apply(paths, 2:3, (function(x)match(ruin,   x, nomatch=n+2)));
# the second argument "2:3" means column-wise in each panel
# If no ruin or victory on a walk, nomatch=n+2 sets the hitting
# time to be two more than the number of steps, one more than
# the column length. Without the nomatch option, get NA which
# works poorly with the comparison hitRuin < hitVictory next.

duration <- pmin(hitVictory, hitRuin) - 1
# Subtract 1 since R arrays are 1-based, so duration is 1 less than index
is.na(duration) = duration > n
# Remove durations greater than length of trials
meanDuration = colMeans( duration, na.rm=TRUE)

startValues <- (ruin:victory);
durationFunction <- lm( meanDuration ~ poly(startValues,2,raw=TRUE) )
# lm is the R function for linear models, a more general view of
# least squares linear fitting for response ~ terms

plot(startValues, meanDuration, col = "blue");
lines(startValues, predict(durationFunction, data=startValues), col = "red")

cat(sprintf("Duration function is: %f + %f x + %f x^2 \n",
    coefficients(durationFunction)[1], coefficients(durationFunction)[2],
    coefficients(durationFunction)[3] ))
```

Octave:

```octave
p = 0.5;
n = 300;
k = 200;

victory = 10;
# top boundary for random walk
ruin = 0;
# bottom boundary for random walk

probRuinBeforeVictory = zeros(1, victory-ruin+1);
for start = ruin:victory

    winLose = 2 * (rand(n,k) <= p) - 1;
    # -1 for Tails, 1 for Heads
    totals = cumsum(winLose);
    # -n..n (every other integer) binomial rv sample

    paths = [zeros(1,k); totals] + start;
    victoryOrRuin = zeros(1,k);
    for j = 1:k
        hitVictory = find(paths(:,j) >= victory);
        hitRuin = find(paths(:,j) <= ruin);
        if ( !rows(hitVictory) && !rows(hitRuin) )
            # no victory, no ruin
            # do nothing
        elseif ( rows(hitVictory) && !rows(hitRuin) )
            # victory, no ruin
            victoryOrRuin(j) = hitVictory(1)-1;
            ## subtract 1 since vectors are 1-based
        elseif ( !rows(hitVictory) && rows(hitRuin) )
            # no victory, but hit ruin
            victoryOrRuin(j) = -(hitRuin(1)-1);
            ## subtract 1 since vectors are 1-based
        else # ( rows(hitVictory) && rows(hitRuin) )
            # victory and ruin
            if ( hitVictory(1) < hitRuin(1) )
                victoryOrRuin(j) = hitVictory(1)-1;
                # code hitting victory
                ## subtract 1 since vectors are 1-based
            else
                victoryOrRuin(j) = -(hitRuin(1)-1);
                # code hitting ruin as negative
                ## subtract 1 since vectors are 1-based
            endif
        endif
    endfor

    durationUntilVictoryOrRuin(start + (-ruin+1)) = mean(abs( victoryOrRuin ));

endfor

durationFunction = polyfit([ruin:victory], durationUntilVictoryOrRuin, 2);

plot([ruin:victory], durationUntilVictoryOrRuin, "+r");
hold on;

x = linspace(ruin, victory, 101);
fittedDuration = polyval(durationFunction, x);
plot(x, fittedDuration, "-");
hold off;

disp("Duration function is a_2 + a_1 x + a_0 x^2 where:")
disp("a_2"), disp(durationFunction(3)),
disp("a_1"), disp(durationFunction(2)),
disp("a_0"), disp(durationFunction(1))
```

Perl:

```perl
use PDL::NiceSlice;

$p        = 0.5;
$n        = 300;
$k        = 200;
$victory  = 10;
$ruin     = 0;
$interval = $victory - $ruin + 1;
$winLose  = 2 * ( random( $k, $n, $interval ) <= $p ) - 1;
$totals   = ( cumusumover $winLose->xchg( 0, 1 ) )->transpose;
$start    = zeroes( $k, $n + 1, $interval )->zlinvals( $ruin, $victory );

$paths = zeroes( $k, $n + 1, $interval );

# use PDL:NiceSlice on next line
# Note the use of the concat operator here.
$paths ( 0 : ( $k - 1 ), 1 : $n, 0 : ( $interval - 1 ) ) .= $totals;

$paths      = $paths + $start;
$hitVictory = $paths->setbadif( $paths < $victory );
$hitRuin    = $paths->setbadif( $paths > $ruin );

$victoryIndex =
    ( $hitVictory ( ,, : )->xchg( 0, 1 )->minimum_ind )
    ->inplace->setbadtoval( $n + 1 );
$ruinIndex =
    ( $hitRuin ( ,, : )->xchg( 0, 1 )->maximum_ind )
    ->inplace->setbadtoval( $n + 1 );

$durationUntilRuinOrVictory =
    ( $victoryIndex->glue( 2, $ruinIndex )->xchg( 2, 1 ) )->xchg( 0, 1 )
    ->setvaltobad( $n + 1 )->minimum;
( $mean, $prms, $median, $min, $max, $adev, $rms ) =
    statsover($durationUntilRuinOrVictory);

use PDL::Fit::Polynomial;
$x = zeroes($interval)->xlinvals( $ruin, $victory );
( $ruinFunction, $coeffs ) = fitpoly1d $x, $mean, 3;
print "Duration function is: ", $coeffs (0), "+", $coeffs (1), "x+",
    $coeffs (2), "x^2", "\n";
```

SciPy:

```python
import scipy

p = 0.5
n = 300
k = 200
victory = 10;
ruin = 0;
interval = victory - ruin + 1

winLose = 2*( scipy.random.random((n,k,interval)) <= p ) - 1
totals = scipy.cumsum(winLose, axis = 0)

start = scipy.multiply.outer( scipy.ones((n+1,k), dtype=int), scipy.arange(ruin, victory+1, dtype=int))
paths = scipy.zeros((n+1,k,interval), dtype=int)
paths[ 1:n+1, :,:] = totals
paths = paths + start

def match(a,b,nomatch=None):
    return b.index(a) if a in b else nomatch
# arguments: a is a scalar, b is a python list, value of nomatch is scalar
# returns the position of first match of its first argument in its second argument
# but if a is not there, returns the value nomatch
# modeled on the R function "match", but with less generality

hitVictory = scipy.apply_along_axis(lambda x:( match(victory,x.tolist(),nomatch=n+2)), 0, paths)
hitRuin = scipy.apply_along_axis(lambda x:match(ruin,x.tolist(),nomatch=n+2), 0, paths)
# If no ruin or victory on a walk, nomatch=n+2 sets the hitting
# time to be two more than the number of steps, one more than
# the column length.

durationUntilRuinOrVictory = scipy.minimum(hitVictory, hitRuin)

import numpy.ma
durationMasked = scipy.ma.masked_greater(durationUntilRuinOrVictory, n)

meanDuration = scipy.mean(durationUntilRuinOrVictory, axis = 0)
durationFunction = scipy.polyfit( scipy.arange(ruin, victory+1, dtype=int), meanDuration, 2)
print "Duration function is: ", durationFunction[2], "+", durationFunction[1], "x+", durationFunction[0], "x^2"
# should return coefficients to (x-ruin)*(victory - x), descending degree order
```

Python (simulation of the walk on the clock):

```python
from random import random
from statistics import mean, stdev

p = 0.5
MAXH = 12+1
N = 100000


def time_to_cover_circle(H):
    not_yet_reached = set(range(H))
    count, pos = 0, 0
    while not_yet_reached:
        pos += 1 if random() > p else -1
        pos %= H
        not_yet_reached.discard(pos)
        if not_yet_reached:
            count += 1
        else:
            return count


for H in range(3, MAXH):
    lengths_walks = []
    for i in range(N):
        lengths_walks.append(time_to_cover_circle(H))
    print("Clock: ", H, "Mean: ", mean(lengths_walks),
          "StdDev:", stdev(lengths_walks))
```

_______________________________________________________________________________________________

### Problems to Work for Understanding

1. Using a trial function of the form ${D}_{{T}_{0}}^{p}=k+l{T}_{0}+m{T}_{0}^{2}$, show that a particular solution of the non-homogeneous equation ${D}_{{T}_{0}}=p{D}_{{T}_{0}+1}+q{D}_{{T}_{0}-1}+1$ is ${T}_{0}∕\left(q-p\right)$.
2. Using a trial function of the form ${D}_{{T}_{0}}^{p}=k+l{T}_{0}+m{T}_{0}^{2}$, show that a particular solution of the non-homogeneous equation ${D}_{{T}_{0}}=\frac{1}{2}{D}_{{T}_{0}+1}+\frac{1}{2}{D}_{{T}_{0}-1}+1$ is $-{T}_{0}^{2}$.

1. A gambler starts with $2 and wants to win $2 more to get to a total of $4 before being ruined by losing all his money. He plays a coin-flipping game, with a coin that changes with his fortune.
    1. If the gambler has $2 he plays with a coin that gives probability $p=1∕2$ of winning a dollar and probability $q=1∕2$ of losing a dollar.
    2. If the gambler has $3 he plays with a coin that gives probability $p=1∕4$ of winning a dollar and probability $q=3∕4$ of losing a dollar.
    3. If the gambler has $1 he plays with a coin that gives probability $p=3∕4$ of winning a dollar and probability $q=1∕4$ of losing a dollar.

    Use “first step analysis” to write three equations in three unknowns (with two additional boundary conditions) that give the expected duration of the game that the gambler plays. Solve the equations to find the expected duration.
2. A gambler plays a coin-flipping game in which the probability of winning on a flip is $p=0.4$ and the probability of losing on a flip is $q=1-p=0.6$. The gambler wants to reach the victory level of $16 before being ruined with a fortune of $0. The gambler starts with $8, bets $2 on each flip when the fortune is $6, $8, $10 and bets $4 when the fortune is $4 or $12. Compute the probability of ruin in this game.
3. This problem is adapted from Stochastic Calculus and Financial Applications by J. Michael Steele, Springer, New York, 2001, Chapter 1, Section 1.6, page 9. Information on buy-backs is adapted from investorwords.com. This problem suggests how results on biased random walks can be worked into more realistic models.

    Consider a naive model for a stock that has a support level of $20/share because of a corporate buy-back program. (This means the company will buy back stock if shares dip below $20 per share. In the case of stocks, this reduces the number of shares outstanding, giving each remaining shareholder a larger percentage ownership of the company. This is usually considered a sign that the company’s management is optimistic about the future and believes that the current share price is undervalued. Reasons for buy-backs include putting unused cash to use, raising earnings per share, increasing internal control of the company, and obtaining stock for employee stock option plans or pension plans.) Suppose also that the stock price moves randomly with a downward bias when the price is above $20, and randomly with an upward bias when the price is below $20.
    To make the problem concrete, we let ${Y}_{n}$ denote the stock price at time $n$, and we express our stock support hypothesis by the assumptions that

    $\begin{array}{rcl}ℙ\left[{Y}_{n+1}=21|{Y}_{n}=20\right]& =& 9∕10\\ ℙ\left[{Y}_{n+1}=19|{Y}_{n}=20\right]& =& 1∕10.\end{array}$

    We then reflect the downward bias at price levels above $20 by requiring that for $k>20$:

    $\begin{array}{rcl}ℙ\left[{Y}_{n+1}=k+1|{Y}_{n}=k\right]& =& 1∕3\\ ℙ\left[{Y}_{n+1}=k-1|{Y}_{n}=k\right]& =& 2∕3.\end{array}$

    We then reflect the upward bias at price levels below $20 by requiring that for $k<20$:

    $\begin{array}{rcl}ℙ\left[{Y}_{n+1}=k+1|{Y}_{n}=k\right]& =& 2∕3\\ ℙ\left[{Y}_{n+1}=k-1|{Y}_{n}=k\right]& =& 1∕3.\end{array}$

    Using the methods of “single-step analysis” calculate the expected time for the stock to fall from $25 through the support level all the way down to $18. (There is no reasonable way to solve this problem using formulas. Instead you will have to go back to basic principles of single-step or first-step analysis to solve the problem.)

4. Several North American professional sports leagues have a “best-of-seven” format for their seasonal championship (the World Series for Major League Baseball, the NBA Finals for the National Basketball Association, and the Stanley Cup Finals for the National Hockey League). A best-of-seven playoff is a sequence of games between two teams in which one team must win four games to win the series. If one team has won four games before all seven games have been played, the remaining games are omitted.
1. Explain why or why not the ﬁrst-step analysis for the expected duration model for victory-or-ruin is sensible for estimating the expected length of a best-of-seven championship series.
2. How many games would we expect to be needed to complete a best-of-seven series if each team has a $50$ percent chance of winning each individual game? What modeling assumptions are you making?
3. Using the same assumptions how many games are expected to complete a best-of-seven series if one team has a $60$ percent chance of winning each individual game? A $70$ per cent chance?
4. Using the same assumptions, make a graph of the expected number of games as a function of $p$, the probability of one team winning an individual game.
5. Perform some simulations of the coin-ﬂipping game, varying $p$ and the start value. How does the value of $p$ aﬀect the experimental duration of victory and ruin?
6. Modify the simulations by changing the value of $p$ and comparing the experimental results for each starting value to the theoretical duration function.
7. Modify the duration scripts to perform simulations of the duration calculations in the table in the section Some Calculations for Illustration and compare the results.

__________________________________________________________________________

### References

[1]   William Feller. An Introduction to Probability Theory and Its Applications, Volume I. John Wiley and Sons, third edition, 1973. QA 273 F3712.

[2]   J. Michael Steele. Stochastic Calculus and Financial Applications. Springer-Verlag, 2001. QA 274.2 S 74.

__________________________________________________________________________

1. Virtual Labs in Probability. Games of Chance. Scroll down and select the Red and Black Experiment (marked in red in the Applets Section. Read the description since the scenario is slightly diﬀerent but equivalent to the description above.)
2. University of California, San Diego, Department of Mathematics, A.M. Garsia. A java applet that simulates how long it takes for a gambler to go broke. You can control how much money you and the casino start with, the house odds, and the maximum number of games. Results are a graph and a summary table. Submitted by Matt Odell, September 8, 2003.

__________________________________________________________________________

I check all the information on each page for correctness and typographical errors. Nevertheless, some errors may occur and I would be grateful if you would alert me to such errors. I make every reasonable eﬀort to present current and accurate information for public use, however I do not guarantee the accuracy or timeliness of information on this website. Your use of the information from this website is strictly voluntary and at your risk.

I have checked the links to external sites for usefulness. Links to external websites are provided as a convenience. I do not endorse, control, monitor, or guarantee the information contained in any external website. I don’t guarantee that the links are active at all times. Use the links here with the same caution as you would all information on the Internet. This website reﬂects the thoughts, interests and opinions of its author. They do not explicitly represent oﬃcial positions or policies of my employer.

Information on this website is subject to change without notice.