Steven R. Dunbar
Department of Mathematics
203 Avery Hall
University of Nebraska-Lincoln
Lincoln, NE 68588-0130
http://www.math.unl.edu
Voice: 402-472-3731
Fax: 402-472-8466

Stochastic Processes and
Advanced Mathematical Finance

__________________________________________________________________________

Ruin Probabilities

_______________________________________________________________________

Note: These pages are prepared with MathJax. MathJax is an open source JavaScript display engine for mathematics that works in all browsers. See http://mathjax.org for details on supported browsers, accessibility, copy-and-paste, and other features.

_______________________________________________________________________________________________

Rating

Mathematically Mature: may contain mathematics beyond calculus with proofs.

_______________________________________________________________________________________________

Section Starter Question

What is the solution of the recurrence equation $x_n = a x_{n-1}$ where $a$ is a constant? What kind of a function is the solution? What more, if anything, needs to be known to obtain a complete solution?

_______________________________________________________________________________________________

Key Concepts

  1. The probabilities, interpretation, and consequences of the “gambler’s ruin”.

__________________________________________________________________________

Vocabulary

  1. Classical Ruin Problem “Consider the familiar gambler who wins or loses a dollar with probabilities $p$ and $q = 1 - p$, respectively, playing against an infinitely rich adversary who is always willing to play although the gambler has the privilege of stopping at his pleasure. The gambler adopts the strategy of playing until he either loses his capital (“is ruined”) or increases it to $a$ (with a net gain of $a - T_0$). We are interested in the probability of the gambler’s ruin and the probability distribution of the duration of the game. This is the classical ruin problem.” (From W. Feller, An Introduction to Probability Theory and Its Applications, Volume I, Chapter XIV, page 342. [1])

__________________________________________________________________________

Mathematical Ideas

Understanding a Stochastic Process

We consider a sequence of Bernoulli random variables $Y_1, Y_2, Y_3, \dots$ where $Y_i = +1$ with probability $p$ and $Y_i = -1$ with probability $q$. We start with an initial value $T_0 > 0$ and define the sequence of sums $T_n = T_0 + \sum_{i=1}^{n} Y_i$. We are interested in the stochastic process $T_1, T_2, T_3, \dots$. It turns out this is a complicated sequence to understand in full, so we single out particular simpler features to understand first. For example, we can look at the probability that the process will achieve the value $0$ before it achieves the value $a$. This is a special case of a larger class of probability problems called first-passage probabilities.
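
As a concrete illustration, here is a minimal R sketch that simulates a single fortune path and reports which boundary it hits first. The values p = 0.55, T0 = 5, and a = 10 are illustrative choices, not taken from the text.

# One realization of the fortune process T_n, stopped at the first passage
# to 0 (ruin) or a (victory); the parameter values are assumed for illustration.
p  <- 0.55   # probability of winning one dollar
T0 <- 5      # initial fortune
a  <- 10     # victory level

set.seed(1)
fortune <- T0
while (fortune > 0 && fortune < a) {
    step <- if (runif(1) <= p) +1 else -1   # Bernoulli step Y_i
    fortune <- fortune + step
}
if (fortune == 0) cat("ruin reached first\n") else cat("victory reached first\n")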

Theorems about Ruin Probabilities

Consider a gambler who wins or loses a dollar on each turn of a game with probabilities $p$ and $q = 1 - p$ respectively. Let his initial capital be $T_0 > 0$. The game continues until the gambler’s capital either is reduced to $0$ or has increased to $a$. Let $q_{T_0}$ be the probability of the gambler’s ultimate ruin and $p_{T_0}$ the probability of his winning. We shall show later (see also Duration of the Game Until Ruin) that

\[ p_{T_0} + q_{T_0} = 1 \]

so that we need not consider the possibility of an unending game.

Theorem 1. The probability of the gambler’s ruin is

\[ q_{T_0} = \frac{(q/p)^a - (q/p)^{T_0}}{(q/p)^a - 1} \]

if $p \ne q$, and

\[ q_{T_0} = 1 - \frac{T_0}{a} \]

if $p = q = 1/2$.
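
As a numerical companion to the theorem, the small R sketch below evaluates both cases of the formula; the helper name ruinProbability is introduced here only for illustration and is not part of the scripts later in this section.

# Ruin probability from Theorem 1; ruinProbability is a hypothetical helper name.
ruinProbability <- function(p, T0, a) {
    q <- 1 - p
    if (p == q) {
        1 - T0 / a                        # fair case p = q = 1/2
    } else {
        r <- q / p
        (r^a - r^T0) / (r^a - 1)          # biased case p != q
    }
}

ruinProbability(0.5, 9, 10)      # 0.1
ruinProbability(0.45, 90, 100)   # about 0.8656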

Proof. The proof uses a first-step analysis, considering how the probabilities change after one step or trial. After the first trial the gambler’s fortune is either $T_0 - 1$ or $T_0 + 1$, and therefore we must have

\[ q_{T_0} = p\, q_{T_0+1} + q\, q_{T_0-1} \qquad (1) \]

provided $1 < T_0 < a - 1$. For $T_0 = 1$, the first trial may lead to ruin, and (1) is replaced by

\[ q_1 = p\, q_2 + q. \]

Similarly, for $T_0 = a - 1$ the first trial may result in victory, and therefore

\[ q_{a-1} = q\, q_{a-2}. \]

To unify our equations, we adopt the natural convention that $q_0 = 1$ and $q_a = 0$. Then the probability of ruin satisfies (1) for $T_0 = 1, 2, \dots, a-1$. This defines a set of $a - 1$ difference equations, with boundary conditions at $0$ and $a$. If we solve the system of difference equations, then we will have the desired probability $q_{T_0}$ for any value of $T_0$.
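
Before deriving the closed form, the boundary value problem can also be solved numerically. The R sketch below (with illustrative values $p = 0.45$ and $a = 10$, and a helper name solveRuinSystem of our own invention) writes equation (1) for $T_0 = 1, \dots, a-1$ together with $q_0 = 1$ and $q_a = 0$ as a linear system and solves it directly; the closed-form solution derived next agrees with this numerical answer.

# Solve the first-step difference equations (1) with the boundary conditions
# as a linear system; solveRuinSystem is a hypothetical helper name and the
# parameter values are assumed for illustration.
solveRuinSystem <- function(p, a) {
    q <- 1 - p
    M <- matrix(0, nrow = a + 1, ncol = a + 1)   # unknowns q_0, q_1, ..., q_a
    b <- numeric(a + 1)
    M[1, 1] <- 1;         b[1] <- 1              # boundary condition q_0 = 1
    M[a + 1, a + 1] <- 1; b[a + 1] <- 0          # boundary condition q_a = 0
    for (i in 2:a) {                             # row i encodes (1) at T_0 = i - 1
        M[i, i - 1] <- -q                        # coefficient of q_{T_0 - 1}
        M[i, i]     <-  1                        # coefficient of q_{T_0}
        M[i, i + 1] <- -p                        # coefficient of q_{T_0 + 1}
    }
    solve(M, b)
}

round(solveRuinSystem(0.45, 10), 4)   # agrees with the closed-form solution below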

Note that we can rewrite the difference equations as

\[ p\, q_{T_0} + q\, q_{T_0} = p\, q_{T_0+1} + q\, q_{T_0-1}. \]

Then we can rearrange and factor to obtain

\[ \frac{q_{T_0+1} - q_{T_0}}{q_{T_0} - q_{T_0-1}} = \frac{q}{p}. \]

This says the ratio of successive differences of $q_{T_0}$ is constant. This suggests that $q_{T_0}$ is a power function,

\[ q_{T_0} = \lambda^{T_0}, \]

since power functions have this property.

We first take the case when $p \ne q$. Then, based on the guess above (or also on standard theory for linear difference equations), we try a solution of the form $q_{T_0} = \lambda^{T_0}$. That is

\[ \lambda^{T_0} = p \lambda^{T_0+1} + q \lambda^{T_0-1}. \]

This reduces to

\[ p\lambda^2 - \lambda + q = 0. \]

Since $p + q = 1$, this factors as

\[ (p\lambda - q)(\lambda - 1) = 0, \]

so the solutions are $\lambda = q/p$ and $\lambda = 1$. (One could also use the quadratic formula to obtain the same values.) Again by the standard theory of linear difference equations, the general solution is

\[ q_{T_0} = A \cdot 1 + B \cdot (q/p)^{T_0} \qquad (2) \]

for some constants $A$ and $B$.

Now we determine the constants by using the boundary conditions:

\[ q_0 = A + B = 1, \qquad q_a = A + B\,(q/p)^a = 0. \]

Solving, substituting, and simplifying:

\[ q_{T_0} = \frac{(q/p)^a - (q/p)^{T_0}}{(q/p)^a - 1}. \]

(Check for yourself that with this expression $0 \le q_{T_0} \le 1$, as it should be for a probability.)

We should show that the solution is unique. So suppose $r_{T_0}$ is another solution of the difference equations. Given an arbitrary solution of (1), the two constants $A$ and $B$ can be determined so that (2) agrees with $r_{T_0}$ at $T_0 = 0$ and $T_0 = 1$. (The reader should be able to explain why by reference to a theorem in Linear Algebra!) From these two values, all other values can be found by substituting $T_0 = 1, 2, 3, \dots$ successively in (1). Therefore, two solutions which agree for $T_0 = 0$ and $T_0 = 1$ are identical, hence every solution is of the form (2).

The solution breaks down if $p = q = 1/2$, since then we do not get two linearly independent solutions of the difference equation (we get the solution $1$ repeated twice). Instead, we need to borrow a result from differential equations (from the variation-of-parameters/reduction-of-order circle of ideas used to derive a complete linearly independent set of solutions). Certainly $1$ is still a solution of the difference equation (1). A second linearly independent solution is $T_0$ (check it out!), and the general solution is $q_{T_0} = A + B T_0$. To satisfy the boundary conditions, we must put $A = 1$ and $A + Ba = 0$, hence $q_{T_0} = 1 - T_0/a$. □
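
Filling in the “check it out” for the symmetric case: with $p = q = 1/2$ the proposed solution $q_{T_0} = 1 - T_0/a$ satisfies both the difference equation (1) and the boundary conditions, since

\[ \frac{1}{2}\left(1 - \frac{T_0+1}{a}\right) + \frac{1}{2}\left(1 - \frac{T_0-1}{a}\right) = 1 - \frac{T_0}{a} = q_{T_0}, \qquad q_0 = 1 - \frac{0}{a} = 1, \qquad q_a = 1 - \frac{a}{a} = 0. \]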

We can consider a symmetric interpretation of this gambling game. Instead of a single gambler playing at a casino, trying to make a goal $a$ before being ruined, consider two gamblers Alice and Bill playing against each other. Let Alice’s initial capital be $T_0$ and let her play against adversary Bill with initial capital $a - T_0$ so that their combined capital is $a$. The game continues until one gambler’s capital either is reduced to zero or has increased to $a$, that is, until one of the two players is ruined.

Corollary 1. $p_{T_0} + q_{T_0} = 1$.

Proof. The probability $p_{T_0}$ of Alice’s winning the game equals the probability of Bill’s ruin. Bill’s ruin (and Alice’s victory) is therefore obtained from our ruin formulas on replacing $p$, $q$, and $T_0$ by $q$, $p$, and $a - T_0$ respectively. That is, from our formula (for $p \ne q$) the probability of Alice’s ruin is

\[ q_{T_0} = \frac{(q/p)^a - (q/p)^{T_0}}{(q/p)^a - 1} \]

and the probability of Bill’s ruin is

\[ p_{T_0} = \frac{(p/q)^a - (p/q)^{a-T_0}}{(p/q)^a - 1}. \]

Then add these together, and after some algebra, the total is 1. (Check it out!)

For $p = 1/2 = q$, the proof is simpler, since then $p_{T_0} = 1 - (a - T_0)/a$ and $q_{T_0} = 1 - T_0/a$, and $p_{T_0} + q_{T_0} = 1$ easily. □
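
For the record, the “check it out” algebra in the $p \ne q$ case is brief: multiplying the numerator and denominator of $p_{T_0}$ by $(q/p)^a$ gives

\[ p_{T_0} = \frac{(p/q)^a - (p/q)^{a-T_0}}{(p/q)^a - 1} = \frac{1 - (q/p)^{T_0}}{1 - (q/p)^{a}} = \frac{(q/p)^{T_0} - 1}{(q/p)^{a} - 1}, \]

so that

\[ p_{T_0} + q_{T_0} = \frac{(q/p)^{T_0} - 1 + (q/p)^a - (q/p)^{T_0}}{(q/p)^a - 1} = \frac{(q/p)^a - 1}{(q/p)^a - 1} = 1. \]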

Corollary 2. The expected gain is $\mathbb{E}[G] = (1 - q_{T_0})\, a - T_0$.

Proof. In the game, the gambler’s ultimate gain (or loss!) is a Bernoulli (two-valued) random variable $G$, where $G$ assumes the value $-T_0$ with probability $q_{T_0}$, and assumes the value $a - T_0$ with probability $p_{T_0}$. Thus the expected value is

\[ \mathbb{E}[G] = (a - T_0)\, p_{T_0} + (-T_0)\, q_{T_0} = p_{T_0}\, a - T_0 = (1 - q_{T_0})\, a - T_0. \]

Now notice that if $q = 1/2 = p$, so that we are dealing with a fair game, then $\mathbb{E}[G] = (1 - (1 - T_0/a))\, a - T_0 = 0$. That is, a fair game in the short run (one trial) is a fair game in the long run (expected value). However, if $p < 1/2 < q$, so the game is not fair, then our expectation formula says

\[ \mathbb{E}[G] = \left(1 - \frac{(q/p)^a - (q/p)^{T_0}}{(q/p)^a - 1}\right) a - T_0 = \frac{(q/p)^{T_0} - 1}{(q/p)^a - 1}\, a - T_0 = \left( \frac{[(q/p)^{T_0} - 1]\, a}{[(q/p)^a - 1]\, T_0} - 1 \right) T_0. \]

The sequence $[(q/p)^n - 1]/n$ is an increasing sequence, so

\[ \frac{[(q/p)^{T_0} - 1]\, a}{[(q/p)^a - 1]\, T_0} - 1 < 0. \]

This shows that an unfair game in the short run (one trial) is an unfair game in the long run. □

Corollary 3. The probability of ultimate ruin of a gambler with initial capital $T_0$ playing against an infinitely rich adversary is

\[ q_{T_0} = 1, \qquad p \le q, \]

and

\[ q_{T_0} = (q/p)^{T_0}, \qquad p > q. \]

Proof. Let $a \to \infty$ in the formulas. (Check it out!) □
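
Filling in the limit: for $p > q$ we have $q/p < 1$, so $(q/p)^a \to 0$ and

\[ q_{T_0} = \frac{(q/p)^a - (q/p)^{T_0}}{(q/p)^a - 1} \longrightarrow \frac{0 - (q/p)^{T_0}}{0 - 1} = (q/p)^{T_0}. \]

For $p < q$ we have $q/p > 1$; dividing numerator and denominator by $(q/p)^a$ gives

\[ q_{T_0} = \frac{1 - (q/p)^{T_0 - a}}{1 - (q/p)^{-a}} \longrightarrow \frac{1 - 0}{1 - 0} = 1, \]

and for $p = q = 1/2$, $q_{T_0} = 1 - T_0/a \to 1$ as well.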

Remark. This corollary says that the probability of “breaking the bank at Monte Carlo” as in the movies is zero, at least for the simple games we are considering.

Some Calculations for Illustration

     p      q      T0        a    Probability   Probability   Expected
                                      of Ruin    of Victory       Gain

   0.5    0.5       9       10        0.1000        0.9000          0
   0.5    0.5      90      100        0.1000        0.9000          0
   0.5    0.5     900    1,000        0.1000        0.9000          0
   0.5    0.5     950    1,000        0.0500        0.9500          0
   0.5    0.5   8,000   10,000        0.2000        0.8000          0

  0.45   0.55       9       10        0.2101        0.7899         -1
  0.45   0.55      90      100        0.8656        0.1344        -77
  0.45   0.55      99      100        0.1818        0.8182        -17
   0.4    0.6      90      100        0.9827        0.0173        -88
   0.4    0.6      99      100        0.3333        0.6667        -32

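Any row of the table can be reproduced directly from the formulas of Theorem 1 and Corollary 2, as in the short R sketch below (shown here for the illustrative row p = 0.45, T0 = 90, a = 100).

# Reproduce one table row from the closed-form formulas; values are illustrative.
p <- 0.45; q <- 1 - p; T0 <- 90; a <- 100
r <- q / p
probRuin     <- if (p == q) 1 - T0 / a else (r^a - r^T0) / (r^a - 1)
probVictory  <- 1 - probRuin
expectedGain <- (1 - probRuin) * a - T0
round(c(probRuin, probVictory, expectedGain), 4)
# approximately 0.8656, 0.1344 and -76.56 (the table rounds the gain to -77)
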
Why do we hear about people who actually win?

We often hear from people who consistently win at the casino. How can this be in the face of the theorems above?

A simple illustration makes clear how this is possible. Assume for convenience a gambler who visits the casino once a year, each time with a certain amount of capital. His goal is to win 1/9 of his capital. That is, in units of his initial capital, $T_0 = 9$ and $a = 10$. Assume too that the casino is fair, so that $p = 1/2 = q$; then the probability of ruin in any one year is:

\[ q_{T_0} = 1 - \frac{9}{10} = \frac{1}{10}. \]

That is, if the working capital is much greater than the amount required for victory, then the probability of ruin is reasonably small.

Then the probability of an unbroken string of ten successes in ten years is:

\[ \left(1 - \frac{1}{10}\right)^{10} \approx \exp(-1) \approx 0.37. \]

That is, this much success is reasonably likely by chance alone, but simple psychology suggests the gambler would boast about his skill instead of crediting it to luck. Moreover, simple psychology suggests the gambler would blame any one failure on oversight, momentary distraction, or even cheating by the casino!
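
A quick numerical check of this arithmetic in R (illustrative one-liners):

# Ruin probability for one visit, and the chance of ten straight successful visits
qRuin <- 1 - 9/10
(1 - qRuin)^10   # 0.9^10 is about 0.3487, close to exp(-1)
exp(-1)          # about 0.3679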

Another Interpretation as a Random Walk

Another common interpretation of this probability game is to imagine it as a random walk. That is, we imagine an individual on a number line, starting at some position $T_0$. The person takes a step to the right to $T_0 + 1$ with probability $p$ and takes a step to the left to $T_0 - 1$ with probability $q$ and continues this random process. Then instead of the total fortune at any time, we consider the geometric position on the line at any time. Instead of reaching financial ruin or attaining a financial goal, we talk instead about reaching or passing a certain position. For example, Corollary 3 says that if $p \le q$, then the probability of visiting the origin before going to infinity is 1. The two interpretations are equivalent and either can be used depending on which is more useful. The problems below use the random walk interpretation, because they are more naturally posed in terms of reaching or passing certain points on the number line.

The interpretation as Markov Processes and Martingales

The fortune in the coin-tossing game is the first and simplest encounter with two of the most important ideas in modern probability theory.

We can interpret the fortune in our gambler’s coin-tossing game as a Markov process. That is, at successive times the process is in various states. In our case, the states are the values of the fortune. The probability of passing from one state at the current time t to another state at time t + 1 is completely determined by the present state. That is, for our coin-tossing game

\[ \mathbb{P}[T_{t+1} = x + 1 \mid T_t = x] = p, \qquad \mathbb{P}[T_{t+1} = x - 1 \mid T_t = x] = q, \qquad \mathbb{P}[T_{t+1} = y \mid T_t = x] = 0 \text{ for all } y \ne x+1,\, x-1. \]

The most important property of a Markov process is that the probability of being in the next state is completely determined by the current state and not by the history of how the process arrived at the current state. In that sense, we often say that a Markov process is memoryless.
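
To make the Markov chain description concrete, the R sketch below writes the one-step transition matrix of the fortune on the states $0, 1, \dots, a$, with $0$ and $a$ absorbing; the values p = 0.45 and a = 5 are illustrative only. Each row is the conditional distribution of $T_{t+1}$ given the current state, with no dependence on the earlier history.

# One-step transition matrix of the gambler's fortune; rows and columns
# index the fortunes 0, 1, ..., a, and the parameter values are assumed.
p <- 0.45; q <- 1 - p; a <- 5
P <- matrix(0, nrow = a + 1, ncol = a + 1)
P[1, 1]         <- 1        # fortune 0 (ruin) is absorbing
P[a + 1, a + 1] <- 1        # fortune a (victory) is absorbing
for (x in 1:(a - 1)) {      # interior fortunes x = 1, ..., a - 1
    P[x + 1, x + 2] <- p    # move up to x + 1 with probability p
    P[x + 1, x]     <- q    # move down to x - 1 with probability q
}
P
rowSums(P)                  # every row sums to 1, as a transition matrix must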

We can also note that the fair coin-tossing game with $p = 1/2 = q$ is a martingale. That is, the expected value of the process at the next step is the current value. Using expectation for estimation, the best estimate we have of the gambler’s fortune at the next step is the current fortune:

\[ \mathbb{E}[T_{n+1} \mid T_n = x] = (x + 1)\tfrac{1}{2} + (x - 1)\tfrac{1}{2} = x. \]

This characterizes a fair game: after the next step, one can expect to be neither richer nor poorer. Note that the coin-tossing games with $p \ne q$ do not have this property.

In later sections we have more occasions to study the properties of martingales, and to a lesser degree Markov processes.

Sources

This section is adapted from W. Feller, An Introduction to Probability Theory and Its Applications, Volume I, Chapter XIV, page 342, [1]. Some material is adapted from Steele [3] and Karlin and Taylor [2]. Steele has an excellent discussion at about the same level as here, but with a slightly more rigorous approach to solving the difference equations. He also gives more information about the fact that the duration of the game is almost surely finite, showing that all moments of the duration are finite. Karlin and Taylor give a treatment of the ruin problem by direct application of Markov chain analysis, which is not essentially different, but points to greater generality.

_______________________________________________________________________________________________

Algorithms, Scripts, Simulations

Algorithm

The goal is to simulate the probability function for ruin with a given starting value. First set the probability p, the number of Bernoulli trials n, and the number of experiments k. Set the ruin and victory values r and v, the boundaries for the random walk. For each starting value from ruin to victory, fill an n × k matrix with Bernoulli random variables. For languages with multi-dimensional arrays, the data is kept in a single three-dimensional array of size n × k × (v − r + 1). Cumulatively sum the Bernoulli random variables to create the fortune or random walk. For each starting value and each random walk or fortune path, find the step at which ruin or victory is first encountered. For each starting value, find the proportion of fortunes encountering ruin. Finally, find a least-squares linear fit of the ruin probabilities as a function of the starting value.

Scripts

Geogebra
Geogebra script for ruin probabilities.
R

R script for ruin probabilities.

p <- 0.5
n <- 150
k <- 60

victory <- 10
# top boundary for random walk
ruin <- -10
# bottom boundary for random walk
interval <- victory - ruin + 1

winLose <- 2 * (array( 0+(runif(n*k*interval) <= p), dim=c(n,k,interval))) - 1
# 0+ coerces Boolean to numeric
totals <- apply( winLose, 2:3, cumsum)
# the second argument "2:3" means column-wise in each panel
start <- outer( array(1, dim=c(n+1,k)), ruin:victory, "*")

paths <- array( 0, dim=c(n+1, k, interval) )
paths[2:(n+1), 1:k, 1:interval] <- totals
paths <- paths + start

hitVictory <- apply(paths, 2:3, (function(x) match(victory, x, nomatch=n+2)))
hitRuin    <- apply(paths, 2:3, (function(x) match(ruin,    x, nomatch=n+2)))
# the second argument "2:3" means column-wise in each panel
# If no ruin or victory on a walk, nomatch=n+2 sets the hitting
# time to be two more than the number of steps, one more than
# the column length.  Without the nomatch option, get NA which
# works poorly with the comparison hitRuin < hitVictory next.

probRuinBeforeVictory <-
    apply( (hitRuin < hitVictory), 2,
           (function(x) length(which(x, arr.ind=FALSE))) )/k

startValues <- (ruin:victory)
ruinFunction <- lm(probRuinBeforeVictory ~ startValues)
# lm is the R function for linear models, a more general view of
# least squares linear fitting for response ~ terms
cat(sprintf("Ruin function Intercept: %f \n", coefficients(ruinFunction)[1] ))
cat(sprintf("Ruin function Slope: %f \n", coefficients(ruinFunction)[2] ))

plot(startValues, probRuinBeforeVictory)
abline(ruinFunction)
Octave

Octave script for ruin probabilities.

p = 0.5;
n = 150;
k = 60;

victory =  10;
# top boundary for random walk
ruin    = -10;
# bottom boundary for random walk

probRuinBeforeVictory = zeros(1, victory-ruin+1);
for start = ruin:victory

    winLose = 2 * (rand(n,k) <= p) - 1;
    # -1 for Tails, 1 for Heads
    totals = cumsum(winLose);
    # -n..n (every other integer) binomial rv sample

    paths = [zeros(1,k); totals] + start;
    victoryOrRuin = zeros(1,k);
    for j = 1:k
        hitVictory = find(paths(:,j) >= victory);
        hitRuin    = find(paths(:,j) <= ruin);
        if ( !rows(hitVictory) && !rows(hitRuin) )
            # no victory, no ruin
            # do nothing
        elseif ( rows(hitVictory) && !rows(hitRuin) )
            # victory, no ruin
            victoryOrRuin(j) = hitVictory(1);
        elseif ( !rows(hitVictory) && rows(hitRuin) )
            # no victory, but hit ruin
            victoryOrRuin(j) = -hitRuin(1);
        else # ( rows(hitVictory) && rows(hitRuin) )
            # victory and ruin
            if ( hitVictory(1) < hitRuin(1) )
                victoryOrRuin(j) = hitVictory(1);
                # code hitting victory
            else
                victoryOrRuin(j) = -hitRuin(1);
                # code hitting ruin as negative
            endif
        endif
    endfor

    probRuinBeforeVictory(start + (-ruin+1)) = sum( victoryOrRuin < 0 )/k;
#   probRuinBeforeVictory
endfor

function coeff = least_square (x,y)
    n = length(x);
    A = [x ones(n,1)];
    coeff = A\y;
    plot(x, y, "x");
    hold on
    interv = [min(x) max(x)];
    plot(interv, coeff(1)*interv+coeff(2));
end

rf = least_square(transpose( ruin : victory ), transpose(probRuinBeforeVictory));
disp("Ruin function Intercept:"), disp(rf(2))
disp("Ruin function Slope:"), disp(rf(1))
hold off
Perl

Perl PDL script for ruin probabilities.

use PDL;
use PDL::NiceSlice;

$p        = 0.5;
$n        = 150;
$k        = 60;
$victory  = 10;
$ruin     = -10;
$interval = $victory - $ruin + 1;
$winLose  = 2 * ( random( $k, $n, $interval ) <= $p ) - 1;
$totals   = ( cumusumover $winLose->xchg( 0, 1 ) )->transpose;
$start    = zeroes( $k, $n + 1, $interval )->zlinvals( $ruin, $victory );

$paths = zeroes( $k, $n + 1, $interval );

# use PDL::NiceSlice on next line
$paths ( 0 : ( $k - 1 ), 1 : $n, 0 : ( $interval - 1 ) ) .= $totals;

$paths      = $paths + $start;
$hitVictory = $paths->setbadif( $paths < $victory );
$hitRuin    = $paths->setbadif( $paths > $ruin );

$victoryIndex =
    ( $hitVictory ( ,, : )->xchg( 0, 1 )->minimum_ind )
    ->inplace->setbadtoval( $n + 1 );
$ruinIndex =
    ( $hitRuin ( ,, : )->xchg( 0, 1 )->maximum_ind )
    ->inplace->setbadtoval( $n + 1 );

$probRuinBeforeVictory = sumover( float( $ruinIndex < $victoryIndex ) ) / $k;

use PDL::Fit::Linfit;
$x = zeroes($interval)->xlinvals( $ruin, $victory );

# Note the use of the concat operator (cat) here.
$fitFuncs = cat ones($interval), $x;
( $ruinFunction, $coeffs ) = linfit1d $probRuinBeforeVictory, $fitFuncs;
print "Ruin function Intercept:", $coeffs (0), "\n";
print "Ruin function Slope:",     $coeffs (1), "\n";
SciPy

Scientific Python script for ruin probabilities.

import scipy

p = 0.5
n = 150
k = 60
victory = 10;
ruin = -10;
interval = victory - ruin + 1;

winLose = 2*( scipy.random.random((n,k,interval)) <= p ) - 1
totals = scipy.cumsum(winLose, axis = 0)

start = scipy.multiply.outer( scipy.ones((n+1,k), dtype=int), scipy.arange(ruin, victory+1, dtype=int))
paths = scipy.zeros((n+1,k,interval), dtype=int)
paths[ 1:n+1, :,:] = totals
paths = paths + start

def match(a,b,nomatch=None):
    return  b.index(a) if a in b else nomatch
# arguments: a is a scalar, b is a python list, value of nomatch is scalar
# returns the position of first match of its first argument in its second argument
# but if a is not there, returns the value nomatch
# modeled on the R function "match", but with less generality

hitVictory = scipy.apply_along_axis(lambda x:( match(victory,x.tolist(),nomatch=n+2)), 0, paths)
hitRuin = scipy.apply_along_axis(lambda x:match(ruin,x.tolist(),nomatch=n+2), 0, paths)
# If no ruin or victory on a walk, nomatch=n+2 sets the hitting
# time to be two more than the number of steps, one more than
# the column length.

probRuinBeforeVictory = scipy.mean( (hitRuin < hitVictory), axis=0)
# note that you can treat the bools as binary data!

ruinFunction = scipy.polyfit( scipy.arange(ruin, victory+1, dtype=int), probRuinBeforeVictory, 1)
print "Ruin function Intercept:", ruinFunction[1];
print "Ruin function Slope:", ruinFunction[0];
# should return a slope near -1/(victory-ruin) and an intercept near 0.5

__________________________________________________________________________

Problems to Work for Understanding

  1. Consider the ruin probabilities $q_{T_0}$ as a function of $T_0$. What is the domain of $q_{T_0}$? What is the range of $q_{T_0}$? Explain heuristically why $q_{T_0}$ is decreasing as a function of $T_0$.
  2. Show that power functions have the property that the ratio of successive differences is constant.
  3. Show the sequence $[(q/p)^n - 1]/n$ is an increasing sequence for $0 < p < 1/2 < q < 1$.
  4. In a random walk starting at the origin, find the probability that the point $a > 0$ will be reached before the point $b < 0$.
  5. James Bond wants to ruin the casino at Monte Carlo by consistently betting 1 Euro on Red at the roulette wheel. The probability of Bond winning at one turn in this game is $18/38 \approx 0.474$. James Bond, being Agent 007, is backed by the full financial might of the British Empire, and so can be considered to have unlimited funds. Approximately how much money should the casino have to start with so that Bond has only a “one-in-a-million” chance of ruining the casino?
  6. A gambler starts with $2 and wants to win $2 more to get to a total of $4 before being ruined by losing all his money. He plays a coin-flipping game, with a coin that changes with his fortune.
    1. If the gambler has $2 he plays with a coin that gives probability $p = 1/2$ of winning a dollar and probability $q = 1/2$ of losing a dollar.
    2. If the gambler has $3 he plays with a coin that gives probability $p = 1/4$ of winning a dollar and probability $q = 3/4$ of losing a dollar.
    3. If the gambler has $1 he plays with a coin that gives probability $p = 3/4$ of winning a dollar and probability $q = 1/4$ of losing a dollar.

    Use “first step analysis” to write three equations in three unknowns (with two additional boundary conditions) that give the probability that the gambler will be ruined. Solve the equations to find the ruin probability.

  7. A gambler plays a coin flipping game in which the probability of winning on a flip is $p = 0.4$ and the probability of losing on a flip is $q = 1 - p = 0.6$. The gambler wants to reach the victory level of $16 before being ruined with a fortune of $0. The gambler starts with $8, bets $2 on each flip when the fortune is $6, $8, or $10, and bets $4 when the fortune is $4 or $12. Compute the probability of ruin in this game.
  8. Prove: In a random walk starting at the origin, the probability to reach the point $a > 0$ before returning to the origin equals $p(1 - q_1)$.
  9. Prove: In a random walk starting at $a > 0$, the probability to reach the origin before returning to the starting point equals $q\, q_{a-1}$.
  10. In the simple case $p = 1/2 = q$, conclude from the preceding problem: In a random walk starting at the origin, the number of visits to the point $a > 0$ that take place before the first return to the origin has a geometric distribution with ratio $1 - q\, q_{a-1}$. (Why is the condition $q \ge p$ necessary?)
    1. Draw a sample path of a random walk (with $p = 1/2 = q$) starting from the origin where the walk visits the position 5 twice before returning to the origin.
    2. Using the results from the previous problems, it can be shown with careful but elementary reasoning that the number of times $N$ that a random walk (with $p = 1/2 = q$) reaches the value $a$ before returning to the origin is a geometric random variable with probability
      \[ \mathbb{P}[N = n] = \left(1 - \frac{1}{2a}\right)^{n-1} \frac{1}{2a}. \]

      Compute the expected number of visits $\mathbb{E}[N]$ to level $a$.

    3. Compare the expected number of visits of a random walk (with $p = 1/2 = q$) to the value 1,000,000 before returning to the origin and to the level 10 before returning to the origin.
  11. This problem is adapted from Stochastic Calculus and Financial Applications by J. Michael Steele, Springer, New York, 2001, Chapter 1, Section 1.6, page 9. Information on buy-backs is adapted from investorwords.com. This problem suggests how results on biased random walks can be worked into more realistic models.

    Consider a naive model for a stock that has a support level of $20/share because of a corporate buy-back program. (This means the company will buy back stock if shares dip below $20 per share. In the case of stocks, this reduces the number of shares outstanding, giving each remaining shareholder a larger percentage ownership of the company. This is usually considered a sign that the company’s management is optimistic about the future and believes that the current share price is undervalued. Reasons for buy-backs include putting unused cash to use, raising earnings per share, increasing internal control of the company, and obtaining stock for employee stock option plans or pension plans.) Suppose also that the stock price moves randomly with a downward bias when the price is above $20, and randomly with an upward bias when the price is below $20. To make the problem concrete, we let $S_n$ denote the stock price at time $n$, and we express our stock support hypothesis by the assumptions that

    \[ \mathbb{P}[S_{n+1} = 21 \mid S_n = 20] = \frac{9}{10}, \qquad \mathbb{P}[S_{n+1} = 19 \mid S_n = 20] = \frac{1}{10}. \]

    We then reflect the downward bias at price levels above $20 by requiring that for k > 20:

    \[ \mathbb{P}[S_{n+1} = k + 1 \mid S_n = k] = \frac{1}{3}, \qquad \mathbb{P}[S_{n+1} = k - 1 \mid S_n = k] = \frac{2}{3}. \]

    We then reflect the upward bias at price levels below $20 by requiring that for k < 20:

    \[ \mathbb{P}[S_{n+1} = k + 1 \mid S_n = k] = \frac{2}{3}, \qquad \mathbb{P}[S_{n+1} = k - 1 \mid S_n = k] = \frac{1}{3}. \]

    Using the methods of “single-step analysis” calculate the expected time for the stock to fall from $25 through the support level all the way down to $18. (Because of the varying parameters there is no way to solve this problem using formulas. Instead you will have to go back to basic principles of single-step or first-step analysis to solve the problem.)

  12. Modify the ruin probability scripts to perform simulations of the ruin calculations in the table in the section Some Calculations for Illustration and compare the results.
  13. Perform some simulations of the coin-flipping game, varying p and the start value. How does the value of p affect the experimental probability of victory and ruin?
  14. Modify the simulations by changing the value of p and comparing the experimental results for each starting value to the theoretical ruin function.

__________________________________________________________________________

Reading Suggestion:

References

[1]   William Feller. An Introduction to Probability Theory and Its Applications, Volume I. John Wiley and Sons, third edition, 1973. QA 273 F3712.

[2]   S. Karlin and H. Taylor. A Second Course in Stochastic Processes. Academic Press, 1981.

[3]   J. Michael Steele. Stochastic Calculus and Financial Applications. Springer-Verlag, 2001. QA 274.2 S 74.

__________________________________________________________________________

Outside Readings and Links:

  1. Virtual Labs in Probability. Games of Chance. Scroll down and select the Red and Black Experiment (marked in red in the Applets Section). Read the description, since the scenario is slightly different from, but equivalent to, the description above.
  2. University of California, San Diego, Department of Mathematics, A.M. Garsia. A java applet that simulates how long it takes for a gambler to go broke. You can control how much money you and the casino start with, the house odds, and the maximum number of games. Results are a graph and a summary table. Submitted by Matt Odell, September 8, 2003.
  3. Eric Weisstein, World of Mathematics. A good description of gambler’s ruin, martingales, and many other coin tossing, dice, and probability problems. Submitted by Yogesh Makkar, September 16th, 2003.

__________________________________________________________________________

I check all the information on each page for correctness and typographical errors. Nevertheless, some errors may occur and I would be grateful if you would alert me to such errors. I make every reasonable effort to present current and accurate information for public use, however I do not guarantee the accuracy or timeliness of information on this website. Your use of the information from this website is strictly voluntary and at your risk.

I have checked the links to external sites for usefulness. Links to external websites are provided as a convenience. I do not endorse, control, monitor, or guarantee the information contained in any external website. I don’t guarantee that the links are active at all times. Use the links here with the same caution as you would all information on the Internet. This website reflects the thoughts, interests and opinions of its author. They do not explicitly represent official positions or policies of my employer.

Information on this website is subject to change without notice.

Steve Dunbar’s Home Page, http://www.math.unl.edu/~sdunbar1

Email to Steve Dunbar, sdunbar1 at unl dot edu

Last modified: Processed from LaTeX source on July 18, 2016