Steven R. Dunbar
Department of Mathematics
203 Avery Hall
University of Nebraska-Lincoln
Lincoln, NE 68588-0130
Stochastic Processes and
Advanced Mathematical Finance
Mathematically Mature: may contain mathematics beyond calculus with proofs.
What is the solution of the recurrence equation $x_n = a x_{n-1}$ where $a$ is a constant? What kind of a function is the solution? What more, if anything, needs to be known to obtain a complete solution?
We consider a sequence of Bernoulli random variables $Y_1, Y_2, Y_3, \ldots$, where $Y_i = +1$ with probability $p$ and $Y_i = -1$ with probability $q = 1 - p$. We start with an initial value $T_0 > 0$. We define the sequence of sums $T_n = T_0 + Y_1 + \cdots + Y_n$. We are interested in the stochastic process $T_1, T_2, T_3, \ldots$. It turns out this is a complicated sequence to understand in full, so we single out particular simpler features to understand first. For example, we can look at the probability that the process will achieve the value $0$ before it achieves the value $a$. This is a special case of a larger class of probability problems called first-passage probabilities.
Consider a gambler who wins or loses a dollar on each turn of a game with probabilities $p$ and $q = 1 - p$ respectively. Let his initial capital be $T_0$. The game continues until the gambler's capital either is reduced to $0$ or has increased to $a$. Let $q_{T_0}$ be the probability of the gambler's ultimate ruin and $p_{T_0}$ the probability of his winning. We shall show later that (see also Duration of the Game Until Ruin)
$p_{T_0} + q_{T_0} = 1$
so that we need not consider the possibility of an unending game.
Proof. The proof uses a first step analysis considering how the probabilities change after one step or trial. After the first trial the gambler's fortune is either $T_0 - 1$ or $T_0 + 1$ and therefore we must have
$q_{T_0} = p q_{T_0 + 1} + q q_{T_0 - 1} \qquad (1)$
provided $1 < T_0 < a - 1$. For $T_0 = 1$, the first trial may lead to ruin, and (1) is replaced by
$q_1 = p q_2 + q.$
Similarly, for $T_0 = a - 1$ the first trial may result in victory, and therefore
$q_{a-1} = q q_{a-2}.$
To unify our equations, we define as a natural convention that $q_0 = 1$, and $q_a = 0$. Then the probability of ruin satisfies (1) for $T_0 = 1, 2, \ldots, a - 1$. This defines a set of difference equations, with boundary conditions at $0$ and $a$. If we solve the system of difference equations, then we will have the desired probability $q_{T_0}$ for any value of $T_0$.
Note that, since $p + q = 1$, we can rewrite the difference equations as
$p q_{T_0} + q q_{T_0} = p q_{T_0 + 1} + q q_{T_0 - 1}.$
Then we can rearrange and factor to obtain
$\frac{q_{T_0 + 1} - q_{T_0}}{q_{T_0} - q_{T_0 - 1}} = \frac{q}{p}.$
This says the ratio of successive differences of $q_{T_0}$ is constant. This suggests that $q_{T_0}$ is a power function, $q_{T_0} = \lambda^{T_0}$, since power functions have this property.
We first take the case when $p \ne q$. Then based on the guess above (or also on standard theory for linear difference equations), we try a solution of the form $q_{T_0} = \lambda^{T_0}$. That is
$\lambda^{T_0} = p \lambda^{T_0 + 1} + q \lambda^{T_0 - 1}.$
This reduces to
$p \lambda^2 - \lambda + q = 0.$
Since $p + q = 1$, this factors as
$(p\lambda - q)(\lambda - 1) = 0,$
so the solutions are $\lambda = q/p$, and $\lambda = 1$. (One could also use the quadratic formula to obtain the same values.) Again by the standard theory of linear difference equations, the general solution is
$q_{T_0} = A \cdot 1 + B \cdot (q/p)^{T_0} \qquad (2)$
for some constants $A$ and $B$.
Now we determine the constants by using the boundary conditions:
$q_0 = A + B = 1, \qquad q_a = A + B(q/p)^a = 0.$
Solving, substituting, and simplifying:
$q_{T_0} = \frac{(q/p)^a - (q/p)^{T_0}}{(q/p)^a - 1}.$
(Check for yourself that, with this expression, $0 \le q_{T_0} \le 1$ as it should be for a probability.)
We should show that the solution is unique. So suppose $r_{T_0}$ is another solution of the difference equations. Given an arbitrary solution of (1), the two constants $A$ and $B$ can be determined so that (2) agrees with $r_{T_0}$ at $T_0 = 0$ and $T_0 = 1$. (The reader should be able to explain why by reference to a theorem in Linear Algebra!) From these two values, all other values can be found by substituting into (1) successively. Therefore, two solutions which agree for $T_0 = 0$ and $T_0 = 1$ are identical, hence every solution is of the form (2).
The solution breaks down if $p = q = 1/2$, since then $q/p = 1$ and we do not get two linearly independent solutions of the difference equation (we get the solution $1$ repeated twice). Instead, we need to borrow a result from differential equations (from the variation-of-parameters/reduction-of-order set of ideas used to derive a complete linearly independent set of solutions). Certainly, the constant function $1$ is still a solution of the difference equation (1). A second linearly independent solution is $T_0$, (check it out!) and the general solution is $q_{T_0} = A + B T_0$. To satisfy the boundary conditions, we must put $A = 1$, and $A + Ba = 0$, hence $B = -1/a$ and $q_{T_0} = 1 - T_0/a$. □
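As a quick numerical check of the theorem, both cases can be collected into a short function and tested against the boundary conditions and the difference equation (1). This is only a sketch; the name `ruin_probability` and the parameter values are our own choices, not from the text.

```python
def ruin_probability(T0, a, p):
    """q_{T0}: probability of hitting 0 before a, starting from capital T0."""
    q = 1.0 - p
    if abs(p - q) < 1e-12:            # fair game: q_{T0} = 1 - T0/a
        return 1.0 - T0 / a
    r = q / p
    return (r ** a - r ** T0) / (r ** a - 1.0)

# Boundary conditions and the difference equation q_z = p q_{z+1} + q q_{z-1}:
a, p = 10, 0.47
qs = [ruin_probability(z, a, p) for z in range(a + 1)]
assert abs(qs[0] - 1.0) < 1e-12 and abs(qs[a]) < 1e-12
assert all(abs(qs[z] - (p * qs[z + 1] + (1 - p) * qs[z - 1])) < 1e-9
           for z in range(1, a))
```

The assertions pass because the closed form was built exactly from the roots $1$ and $q/p$ of the characteristic equation, so it satisfies (1) up to floating-point error.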
We can consider a symmetric interpretation of this gambling game. Instead of a single gambler playing at a casino, trying to make a goal $a$ before being ruined, consider two gamblers Alice and Bill playing against each other. Let Alice's initial capital be $T_0$ and let her play against adversary Bill with initial capital $a - T_0$ so that their combined capital is $a$. The game continues until one gambler's capital either is reduced to zero or has increased to $a$, that is, until one of the two players is ruined.
Proof. The probability of Alice's winning the game equals the probability of Bill's ruin. Bill's ruin (and Alice's victory) is therefore obtained from our ruin formulas on replacing $p$, $q$, and $T_0$ by $q$, $p$, and $a - T_0$ respectively. That is, from our formula (for $p \ne q$) the probability of Alice's ruin is
$q_{T_0} = \frac{(q/p)^a - (q/p)^{T_0}}{(q/p)^a - 1}$
and the probability of Bill's ruin is
$p_{T_0} = \frac{(p/q)^a - (p/q)^{a - T_0}}{(p/q)^a - 1}.$
Then add these together, and after some algebra, the total is $1$. (Check it out!)
For $p = 1/2 = q$, the proof is simpler, since then $q_{T_0} = 1 - T_0/a$, and $p_{T_0} = 1 - (a - T_0)/a$, and $q_{T_0} + p_{T_0} = 1$ easily. □
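The "check it out" algebra can also be confirmed numerically. Below is a sketch with illustrative parameter values of our own choosing: Alice's ruin probability with parameters $(p, T_0)$ plus Bill's with $(q, a - T_0)$ should total $1$.

```python
def ruin(T0, a, p):
    """Ruin probability q_{T0} from the theorem (both cases)."""
    q = 1.0 - p
    if abs(p - q) < 1e-12:
        return 1.0 - T0 / a
    r = q / p
    return (r ** a - r ** T0) / (r ** a - 1.0)

p, a = 0.55, 12
for T0 in range(1, a):
    # Bill plays with win probability q = 1 - p and capital a - T0
    assert abs(ruin(T0, a, p) + ruin(a - T0, a, 1.0 - p) - 1.0) < 1e-9
```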
Proof. In the game, the gambler's ultimate gain (or loss!) is a Bernoulli (two-valued) random variable, $G$, where $G$ assumes the value $-T_0$ with probability $q_{T_0}$, and assumes the value $a - T_0$ with probability $p_{T_0}$. Thus the expected value is
$E[G] = (a - T_0) p_{T_0} + (-T_0) q_{T_0} = a p_{T_0} - T_0.$ □
Now notice that if $p = 1/2 = q$, so that we are dealing with a fair game, then $E[G] = a \cdot (T_0/a) - T_0 = 0$. That is, a fair game in the short run (one trial) is a fair game in the long run (expected value). However, if $p \ne 1/2$, so the game is not fair, then our expectation formula says
$E[G] = a \frac{(q/p)^{T_0} - 1}{(q/p)^a - 1} - T_0.$
The sequence $\left((q/p)^n - 1\right)/n$ is an increasing sequence, so
$\frac{(q/p)^{T_0} - 1}{T_0} < \frac{(q/p)^a - 1}{a},$
and therefore $E[G] < 0$ when $p < 1/2$ and $E[G] > 0$ when $p > 1/2$.
This shows that an unfair game in the short run (one trial) is an unfair game in the long run.
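A small computation illustrates the corollary. This is a sketch with the function name and parameter values chosen by us: $E[G] = a p_{T_0} - T_0$ vanishes exactly when $p = 1/2$ and takes the sign of $p - 1/2$ otherwise.

```python
def expected_gain(T0, a, p):
    """E[G] = (a - T0) * p_{T0} + (-T0) * q_{T0} = a * p_{T0} - T0."""
    q = 1.0 - p
    if abs(p - q) < 1e-12:
        ruin = 1.0 - T0 / a
    else:
        r = q / p
        ruin = (r ** a - r ** T0) / (r ** a - 1.0)
    return a * (1.0 - ruin) - T0

assert abs(expected_gain(5, 10, 0.5)) < 1e-12   # fair trials: E[G] = 0
assert expected_gain(5, 10, 0.45) < 0           # unfavorable trials
assert expected_gain(5, 10, 0.55) > 0           # favorable trials
```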
Proof. Let $a \to \infty$ in the formulas: for $p \le 1/2$ the ruin probability $q_{T_0} \to 1$, while for $p > 1/2$ it tends to $(q/p)^{T_0}$. (Check it out!) □
Remark. This corollary says that the probability of “breaking the bank at Monte Carlo” as in the movies is zero, at least for the simple games we are considering.
[Table: probability of ruin, probability of victory, and expected gain for illustrative values of $p$, $T_0$, and $a$.]
We often hear from people who consistently win at the casino. How can this be in the face of the theorems above?
A simple illustration makes clear how this is possible. Assume for convenience a gambler who repeatedly visits the casino, each time with a certain amount of capital. His goal is to win $1/9$ of his capital. That is, in units of his initial capital, $T_0 = 9$, and $a = 10$. Assume too that the casino is fair so that $p = 1/2 = q$; then the probability of ruin in any one year is:
$q_{T_0} = 1 - 9/10 = 1/10.$
That is, if the working capital is much greater than the amount required for victory, then the probability of ruin is reasonably small.
Then the probability of an unbroken string of ten successes in ten years is:
$(1 - 1/10)^{10} \approx 0.35.$
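The arithmetic of this example takes only a few lines (values taken from the text):

```python
T0, a = 9, 10                     # capital 9 units, goal 10 units
ruin_per_year = 1 - T0 / a        # fair game: q_{T0} = 1 - T0/a
streak = (1 - ruin_per_year) ** 10   # ten successes in a row
print(round(ruin_per_year, 3), round(streak, 3))   # 0.1 0.349
```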
This much success is reasonable, but simple psychology would suggest the gambler would boast about his skill instead of crediting it to luck. Moreover, simple psychology suggests the gambler would also blame one failure on oversight, momentary distraction, or even cheating by the casino!
Another common interpretation of this probability game is to imagine it as a random walk. That is, we imagine an individual on a number line, starting at some position $T_0$. The person takes a step to the right to $T_0 + 1$ with probability $p$ and takes a step to the left to $T_0 - 1$ with probability $q$ and continues this random process. Then instead of the total fortune at any time, we consider the geometric position on the line at any time. Instead of reaching financial ruin or attaining a financial goal, we talk instead about reaching or passing a certain position. For example, Corollary 3 says that if $p \le 1/2$, then the probability of visiting the origin before going to infinity is $1$. The two interpretations are equivalent and either can be used depending on which is more useful. The problems below use the random walk interpretation, because they are more naturally posed in terms of reaching or passing certain points on the number line.
The fortune in the coin-tossing game is the first and simplest encounter with two of the most important ideas in modern probability theory.
We can interpret the fortune in our gambler's coin-tossing game as a Markov process. That is, at successive times the process is in various states. In our case, the states are the values of the fortune. The probability of passing from one state at the current time $n$ to another state at time $n + 1$ is completely determined by the present state. That is, for our coin-tossing game
$P[T_{n+1} = x + 1 \mid T_n = x] = p,$
$P[T_{n+1} = x - 1 \mid T_n = x] = q,$
$P[T_{n+1} = y \mid T_n = x] = 0 \text{ for all other } y.$
The most important property of a Markov process is that the probability of being in the next state is completely determined by the current state and not the history of how the process arrived at the current state. In that sense, we often say that a Markov process is memory-less.
We can also note that the fair coin-tossing game with $p = 1/2 = q$ is a martingale. That is, the expected value of the process at the next step is the current value. Using expectation for estimation, the best estimate we have of the gambler's fortune at the next step is the current fortune:
$E[T_{n+1} \mid T_n = x] = (x + 1)(1/2) + (x - 1)(1/2) = x.$
This characterizes a fair game: after the next step, one can neither expect to be richer nor poorer. Note that the coin-tossing games with $p \ne q$ do not have this property.
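The one-step conditional expectation reduces to a line of arithmetic. Here is a sketch (the function name is ours): $E[T_{n+1} \mid T_n = x] = p(x+1) + (1-p)(x-1) = x + (2p - 1)$, which equals $x$ exactly when $p = 1/2$.

```python
def one_step_expectation(x, p):
    """E[T_{n+1} | T_n = x] = p*(x + 1) + (1 - p)*(x - 1) = x + (2p - 1)."""
    return p * (x + 1) + (1 - p) * (x - 1)

assert one_step_expectation(7, 0.5) == 7.0    # fair game: a martingale
assert one_step_expectation(7, 0.6) > 7       # p > 1/2: upward drift
assert one_step_expectation(7, 0.4) < 7       # p < 1/2: downward drift
```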
In later sections we have more occasions to study the properties of martingales, and to a lesser degree Markov processes.
This section is adapted from W. Feller, An Introduction to Probability Theory and Its Applications, Volume I, Chapter XIV, page 342. Some material is adapted from Steele, and from Karlin and Taylor. Steele has an excellent discussion at about the same level as here, but with a slightly more rigorous approach to solving the difference equations. He also gives more information about the fact that the duration of the game is almost surely finite, showing that all moments of the duration are finite. Karlin and Taylor give a treatment of the ruin problem by direct application of Markov chain analysis, which is not essentially different, but points to greater generality.
The goal is to simulate the probability function for ruin with a given starting value. First set the probability $p$, the number of Bernoulli trials $n$, and the number of experiments $k$. Set the ruin and victory values, the boundaries for the random walk. For each starting value from ruin to victory, fill a matrix with the Bernoulli random variables. For languages with multi-dimensional arrays, the data is kept in a three-dimensional array indexed by starting value, experiment, and trial. Cumulatively sum the Bernoulli random variables to create the fortune or random walk. For each starting value, for each random walk or fortune path, find the step where ruin or victory is encountered. For each starting value, find the proportion of fortunes encountering ruin. Finally, find a least squares linear fit of the ruin probabilities as a function of the starting value.
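The script provided with this section is in R; as a rough, self-contained sketch of the same algorithm (the name `simulate_ruin_probs`, the loop-based structure, and the parameter values are our own choices, not from the text), in Python:

```python
import random

def simulate_ruin_probs(p, n, k, ruin=0, victory=10, seed=7):
    """Estimate, for each starting value, the probability that a +1/-1
    random walk hits `ruin` before `victory`, from k paths of at most
    n Bernoulli steps each."""
    rng = random.Random(seed)
    probs = {}
    for start in range(ruin + 1, victory):
        ruined = 0
        for _ in range(k):
            pos = start
            for _ in range(n):
                pos += 1 if rng.random() < p else -1
                if pos == ruin:
                    ruined += 1
                    break
                if pos == victory:
                    break
            # a path hitting neither boundary within n steps counts as survival
        probs[start] = ruined / k
    return probs

est = simulate_ruin_probs(p=0.5, n=1000, k=2000)

# Least-squares slope of ruin probability versus starting value; for the
# fair game the theory gives the line 1 - T0/victory, i.e. slope -1/10.
xs, ys = list(est), list(est.values())
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
```

The vectorized R version replaces the inner loops with a cumulative sum over a matrix of Bernoulli draws, as the description above outlines; this sketch trades that efficiency for readability.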
R script for ruin probabilities.
Use “first step analysis” to write three equations in three unknowns (with two additional boundary conditions) that give the probability that the gambler will be ruined. Solve the equations to find the ruin probability.
Compute the expected number of visits to level .
Consider a naive model for a stock that has a support level of $20/share because of a corporate buy-back program. (This means the company will buy back stock if shares dip below $20 per share. In the case of stocks, this reduces the number of shares outstanding, giving each remaining shareholder a larger percentage ownership of the company. This is usually considered a sign that the company’s management is optimistic about the future and believes that the current share price is undervalued. Reasons for buy-backs include putting unused cash to use, raising earnings per share, increasing internal control of the company, and obtaining stock for employee stock option plans or pension plans.) Suppose also that the stock price moves randomly with a downward bias when the price is above $20, and randomly with an upward bias when the price is below $20. To make the problem concrete, we let $S_n$ denote the stock price at time $n$, and we express our stock support hypothesis by the assumptions that
$P[S_{n+1} = 21 \mid S_n = 20] = 9/10,$
$P[S_{n+1} = 19 \mid S_n = 20] = 1/10.$
We then reflect the downward bias at price levels above $20 by requiring that for $k > 20$:
$P[S_{n+1} = k + 1 \mid S_n = k] = 1/3,$
$P[S_{n+1} = k - 1 \mid S_n = k] = 2/3.$
We then reflect the upward bias at price levels below $20 by requiring that for $k < 20$:
$P[S_{n+1} = k + 1 \mid S_n = k] = 2/3,$
$P[S_{n+1} = k - 1 \mid S_n = k] = 1/3.$
Using the methods of “single-step analysis,” calculate the expected time for the stock to fall from $25 through the support level all the way down to $18. (Because of the varying parameters there is no way to solve this problem using formulas. Instead you will have to go back to basic principles of single-step or first-step analysis to solve the problem.)
Steve Dunbar’s Home Page, http://www.math.unl.edu/~sdunbar1
Email to Steve Dunbar, sdunbar1 at unl dot edu
Last modified: Processed from LaTeX source on July 18, 2016