Steven R. Dunbar
Department of Mathematics
203 Avery Hall
University of Nebraska-Lincoln
Lincoln, NE 68588-0130
http://www.math.unl.edu
Voice: 402-472-3731
Fax: 402-472-8466

Stochastic Processes and
Advanced Mathematical Finance

__________________________________________________________________________

The Central Limit Theorem

_______________________________________________________________________

Note: These pages are prepared with MathJax. MathJax is an open source JavaScript display engine for mathematics that works in all browsers. See http://mathjax.org for details on supported browsers, accessibility, copy-and-paste, and other features.

_______________________________________________________________________________________________

Rating

Mathematically Mature: may contain mathematics beyond calculus with proofs.

_______________________________________________________________________________________________

Section Starter Question

What is the most important probability distribution? Why do you choose that distribution as most important?

_______________________________________________________________________________________________

Key Concepts

  1. The statement, meaning and proof of the Central Limit Theorem.
  2. We expect the normal distribution to arise whenever the numerical description of a state of a system results from numerous small random additive effects, with no single or small group of effects dominant.

__________________________________________________________________________

Vocabulary

  1. The Central Limit Theorem: Suppose that for a sequence of independent, identically distributed random variables $X_i$, each $X_i$ has finite mean $\mu$ and finite variance $\sigma^2$. Let $S_n = X_1 + \cdots + X_n$ and

    \[ Z_n = \frac{S_n - n\mu}{\sigma\sqrt{n}} = \frac{1}{\sigma}\left(\frac{S_n}{n} - \mu\right)\sqrt{n}, \]

    and let $Z$ be the “standard” normally distributed random variable with mean 0 and variance 1. Then $Z_n$ converges in distribution to $Z$, that is:

    \[ \lim_{n \to \infty} \mathbb{P}[Z_n \le a] = \int_{-\infty}^{a} \frac{1}{\sqrt{2\pi}} \exp(-u^2/2)\, du. \]

    In words, a shifted and rescaled sample distribution is approximately standard normal.

__________________________________________________________________________

Mathematical Ideas

Convergence in Distribution

Lemma 1. Let $X_1, X_2, \ldots$ be a sequence of random variables having cumulative distribution functions $F_{X_n}$ and moment generating functions $\phi_{X_n}$. Let $X$ be a random variable having cumulative distribution function $F_X$ and moment generating function $\phi_X$. If $\phi_{X_n}(t) \to \phi_X(t)$ for all $t$, then $F_{X_n}(t) \to F_X(t)$ for all $t$ at which $F_X(t)$ is continuous.

We say that the sequence $X_i$ converges in distribution to $X$ and we write

\[ X_i \xrightarrow{\mathcal{D}} X. \]

Notice that $\mathbb{P}[a < X_i \le b] = F_{X_i}(b) - F_{X_i}(a) \to F_X(b) - F_X(a) = \mathbb{P}[a < X \le b]$, so convergence in distribution implies convergence of probabilities of events. Likewise, convergence of probabilities of events implies convergence in distribution.

This lemma is useful because it is routine to determine the pointwise limit of a sequence of functions using ideas from calculus. It is usually much easier to check the pointwise convergence of the moment generating functions than it is to check the convergence in distribution of the corresponding sequence of random variables.

We won’t prove this lemma, since it would take us too far afield into the theory of moment generating functions and corresponding distribution theorems. However, the proof is a routine application of ideas from the mathematical theory of real analysis.

Application: Weak Law of Large Numbers.

Here’s a simple representative example of using the convergence of the moment generating function to prove a useful result. We will prove a version of the Weak Law of Large Numbers that does not require finite variance of the sequence of independent, identically distributed random variables.

Theorem 2 (Weak Law of Large Numbers). Let $X_1, X_2, \ldots$ be independent, identically distributed random variables, each with mean $\mu$ and such that $\mathbb{E}[|X_i|]$ is finite. Let $S_n = X_1 + \cdots + X_n$. Then $S_n/n$ converges in probability to $\mu$. That is:

\[ \lim_{n \to \infty} \mathbb{P}[|S_n/n - \mu| > \epsilon] = 0. \]

Proof. If we denote the moment generating function of $X$ by $\phi(t)$, then the moment generating function of

\[ \frac{S_n}{n} = \sum_{i=1}^{n} \frac{X_i}{n} \]

is $(\phi(t/n))^n$. The existence of the first moment assures us that $\phi(t)$ is differentiable at 0 with derivative $\phi'(0) = \mu$. Therefore, by tangent-line approximation (first-degree Taylor polynomial approximation)

\[ \phi\left(\frac{t}{n}\right) = 1 + \mu \frac{t}{n} + r_2(t/n) \]

where $r_2(t/n)$ is an error term such that

\[ \lim_{n \to \infty} \frac{r_2(t/n)}{t/n} = 0. \]

This is equivalent to $(1/t)\lim_{n \to \infty} n\, r_2(t/n) = 0$, or just $\lim_{n \to \infty} n\, r_2(t/n) = 0$, needed for taking the limit in (1). Then we need to consider

\[ \phi\left(\frac{t}{n}\right)^n = \left(1 + \mu \frac{t}{n} + r_2(t/n)\right)^n. \qquad (1) \]

Taking the logarithm of $(1 + \mu(t/n) + r_2(t/n))^n$ and using L’Hospital’s Rule, we see that

\[ \phi(t/n)^n \to \exp(\mu t). \]

But this last expression is the moment generating function of the (degenerate) point mass distribution concentrated at $\mu$. Hence by Lemma 1, $S_n/n$ converges in distribution to the constant $\mu$, and convergence in distribution to a constant implies convergence in probability, so

\[ \lim_{n \to \infty} \mathbb{P}[|S_n/n - \mu| > \epsilon] = 0. \qquad \Box \]
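
As a quick numerical illustration of this limit (an added check, not part of the proof), take the $X_i$ to be Bernoulli with success probability $p$, so that $\phi(t) = 1 - p + p e^t$ and $\mu = p$. The following R lines, with $p$ and $t$ chosen arbitrarily, show $\phi(t/n)^n$ approaching $\exp(\mu t)$:

p <- 0.3
t <- 2
# phi(t/n)^n should approach exp(p*t) as n grows
for (n in c(10, 100, 1000, 10000)) {
    cat(n, (1 - p + p * exp(t/n))^n, exp(p * t), "\n")
}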

The Central Limit Theorem

Theorem 3 (Central Limit Theorem). Let $X_1, X_2, \ldots$ be independent, identically distributed random variables, each with mean $\mu$ and finite variance $\sigma^2$, and whose moment generating function exists.

Consider $S_n = \sum_{i=1}^{n} X_i$. Let

\[ Z_n = \frac{S_n - n\mu}{\sigma\sqrt{n}} \]

and let $Z$ be the standard normally distributed random variable with mean 0 and variance 1. Then $Z_n$ converges in distribution to $Z$, that is:

\[ \lim_{n \to \infty} \mathbb{P}[Z_n \le a] = \int_{-\infty}^{a} \frac{1}{\sqrt{2\pi}} \exp(-u^2/2)\, du. \]

Remark. The Central Limit Theorem is true even under the slightly weaker assumptions that $X_1, X_2, \ldots$ are only independent and identically distributed with finite mean $\mu$ and finite variance $\sigma^2$, without the assumption that the moment generating function exists. However, the proof below using moment generating functions is simple and direct enough to justify using the additional hypothesis.

Proof. Assume at first that $\mu = 0$ and $\sigma^2 = 1$. Assume also that the moment generating function $\phi_X(t)$ of the $X_i$ (which are identically distributed, so there is only one m.g.f.) exists and is everywhere finite. Then the m.g.f. of $X_i/\sqrt{n}$ is

\[ \phi_{X/\sqrt{n}}(t) = \mathbb{E}[\exp(t X_i/\sqrt{n})] = \phi_X(t/\sqrt{n}). \]

Recall that the m.g.f. of a sum of independent r.v.s is the product of the m.g.f.s. Thus the m.g.f. of $S_n/\sqrt{n}$ is (note that here we used $\mu = 0$ and $\sigma^2 = 1$)

\[ \phi_{S_n/\sqrt{n}}(t) = [\phi_X(t/\sqrt{n})]^n. \]

The quadratic approximation (second-degree Taylor polynomial expansion) of $\phi_X(t)$ at 0 is by calculus:

\[ \phi_X(t) = \phi_X(0) + \phi_X'(0)\, t + \frac{\phi_X''(0)}{2}\, t^2 + r_3(t) = 1 + \frac{t^2}{2} + r_3(t), \]

again since the hypotheses assume $\mathbb{E}[X] = \phi_X'(0) = 0$ and $\mathrm{Var}[X] = \mathbb{E}[X^2] - (\mathbb{E}[X])^2 = \phi_X''(0) - (\phi_X'(0))^2 = \phi_X''(0) = 1$. Here $r_3(t)$ is an error term such that $\lim_{t \to 0} r_3(t)/t^2 = 0$. Thus,

\[ \phi_X(t/\sqrt{n}) = 1 + \frac{t^2}{2n} + r_3(t/\sqrt{n}), \]

implying that

\[ \phi_{S_n/\sqrt{n}}(t) = \left[ 1 + \frac{t^2}{2n} + r_3(t/\sqrt{n}) \right]^n. \]

Now by some standard results from calculus,

\[ \left[ 1 + \frac{t^2}{2n} + r_3(t/\sqrt{n}) \right]^n \to \exp(t^2/2) \]

as $n \to \infty$. (If the reader needs convincing, it’s easy to show that

\[ n \log\left( 1 + \frac{t^2}{2n} + r_3(t/\sqrt{n}) \right) \to \frac{t^2}{2}, \]

using L’Hospital’s Rule to account for the $r_3(t/\sqrt{n})$ term.) Since $\exp(t^2/2)$ is the moment generating function of the standard normal distribution, Lemma 1 gives the convergence in distribution.

To handle the general case, consider the standardized random variables $(X_i - \mu)/\sigma$, each of which now has mean 0 and variance 1, and apply the result. □
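
As a concrete numerical illustration of the key limit (an added check, not part of the proof), take $X_i = \pm 1$ with probability $1/2$ each, so $\phi_X(t) = \cosh(t)$, already with mean 0 and variance 1. A few R lines, with $t$ chosen arbitrarily, show $\phi_X(t/\sqrt{n})^n$ approaching $\exp(t^2/2)$; this is the same computation requested in Problem 6 below:

t <- 1.5
# cosh(t/sqrt(n))^n should approach exp(t^2/2) as n grows
for (n in c(10, 100, 1000, 10000)) {
    cat(n, cosh(t/sqrt(n))^n, exp(t^2/2), "\n")
}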

Abraham de Moivre proved the first version of the Central Limit Theorem around 1733 in the special case when the $X_i$ are binomial random variables with $p = 1/2 = q$. Pierre-Simon Laplace subsequently extended the proof to the case of arbitrary $p$ and $q$. Laplace also discovered the more general form of the Central Limit Theorem presented here. His proof, however, was not completely rigorous and, in fact, cannot be made completely rigorous. A truly rigorous proof of the Central Limit Theorem was first presented by the Russian mathematician Aleksandr Liapunov in 1901–1902. As a result, the Central Limit Theorem (or a slightly stronger version of it) is occasionally referred to as Liapunov’s theorem. A theorem with weaker hypotheses but an equally strong conclusion is Lindeberg’s Theorem of 1922. It says that the sequence of random variables need not be identically distributed; instead the random variables need only have zero means and individual variances small compared to their sum.

Accuracy of the Approximation by the Central Limit Theorem

The statement of the Central Limit Theorem does not say how good the approximation is. One rule of thumb is that the approximation given by the Central Limit Theorem applied to a sequence of Bernoulli random trials, or equivalently to a binomial random variable, is acceptable when $np(1-p) > 18$ [2, page 34], [3, page 134]. The normal approximation to a binomial deteriorates as the interval $(a,b)$ over which the probability is computed moves away from the binomial’s mean value $np$. Another rule of thumb is that the normal approximation is acceptable when $n \ge 30$ for all “reasonable” probability distributions.
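
To see the quality of the approximation directly (an added check with assumed parameters, not from the original text), compare the exact binomial cumulative probabilities with the continuity-corrected normal values in R:

n <- 100
p <- 0.5   # np(1-p) = 25 > 18, so the rule of thumb applies
k <- 0:n
exact <- pbinom(k, n, p)
approx <- pnorm(k + 0.5, mean = n * p, sd = sqrt(n * p * (1 - p)))
cat("worst-case error over all k:", max(abs(exact - approx)), "\n")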

The Berry-Esséen Theorem gives an explicit bound: For independent, identically distributed random variables $X_i$ with $\mu = \mathbb{E}[X_i] = 0$, $\sigma^2 = \mathbb{E}[X_i^2]$, and $\rho = \mathbb{E}[|X_i|^3]$, then

\[ \left| \mathbb{P}\left[ \frac{S_n}{\sigma\sqrt{n}} < a \right] - \int_{-\infty}^{a} \frac{1}{\sqrt{2\pi}} e^{-u^2/2}\, du \right| \le \frac{33}{4} \cdot \frac{\rho}{\sigma^3} \cdot \frac{1}{\sqrt{n}}. \]
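
As a sketch of how this bound behaves (the summand distribution here is my choice for illustration), take centered coin-flip summands $X_i = \pm 1/2$, each with probability $1/2$, so $\sigma = 1/2$, $\rho = \mathbb{E}[|X_i|^3] = 1/8$, and $\rho/\sigma^3 = 1$:

sigma <- 1/2   # standard deviation of X_i = +/- 1/2
rho <- 1/8     # E|X_i|^3
n <- c(100, 1000, 10000)
bound <- (33/4) * (rho / sigma^3) / sqrt(n)
print(rbind(n, bound))   # about 0.825, 0.261, 0.0825

The bound decreases only like $1/\sqrt{n}$ and is quite conservative; for symmetric coin flips the actual error of the normal approximation is far smaller.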

Illustration 1

Figure 1 is a graphical illustration of the Central Limit Theorem. More precisely, this is an illustration of the de Moivre-Laplace version, the approximation of the binomial distribution with the normal distribution.


[centrallimittheorem-1.png]

Figure 1: Approximation of the binomial distribution with the normal distribution.

The figure is actually a non-centered and unscaled illustration, since the binomial random variable $S_n$ is neither shifted by the mean nor normalized to unit variance. Therefore, the binomial and the corresponding approximating normal are both centered at $\mathbb{E}[S_n] = np$. The variance of the approximating normal is $\sigma^2 = npq$, the bars denoting the binomial probabilities all have unit width, and the heights of the bars are the actual binomial probabilities.
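
A minimal R sketch reproducing a figure of this kind (the values of $n$ and $p$ are assumed for illustration, not taken from the original figure):

n <- 50
p <- 0.3
k <- 0:n
# bars of binomial probabilities, one at each value k
plot(k, dbinom(k, n, p), type = "h", lwd = 8, lend = "butt", col = "gray",
     xlab = "number of successes", ylab = "probability")
# overlay the normal density with matching mean np and variance np(1-p)
curve(dnorm(x, mean = n * p, sd = sqrt(n * p * (1 - p))), add = TRUE)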

Illustration 2

From the Central Limit Theorem we expect the normal distribution to apply whenever an outcome results from numerous small additive effects with no single or small group of effects dominant. Here is a standard illustration of that principle.

Consider the following data from the National Longitudinal Survey of Youth (NLSY). This study started with 12,000 respondents aged 14–21 years in 1979. By 1994, the respondents were 29–36 years old and had 15,000 children among them. Of the respondents, 2,444 had exactly two children. In these 2,444 families, the distribution of children was boy-boy: 582; girl-girl: 530; boy-girl: 666; and girl-boy: 666. It appears that the count of girl-girl family sequences is low compared to the other combinations. Our intuition tells us that all combinations should be equally likely and should appear in roughly equal proportions. We will assess this intuition with the Central Limit Theorem.

Consider a sequence of 2,444 trials, one with each of the two-child families. Let $X_i = 1$ (success) if the two-child family is girl-girl, and $X_i = 0$ (failure) if the two-child family is otherwise. We are interested in the probability distribution of

\[ S_{2444} = \sum_{i=1}^{2444} X_i. \]

In particular, we are interested in the probability $\mathbb{P}[S_{2444} \le 530]$; that is, what is the probability of seeing as few as 530 girl-girl families or even fewer in a sample of 2,444 families? We can use the Central Limit Theorem to estimate this probability.

We are assuming the family “success” variables $X_i$ are independent and identically distributed, a reasonable but arguable assumption. Nevertheless, without this assumption we cannot justify the use of the Central Limit Theorem, so we adopt the assumption. Then $\mu = \mathbb{E}[X_i] = (1/4) \cdot 1 + (3/4) \cdot 0 = 1/4$ and $\mathrm{Var}[X_i] = (1/4)(3/4) = 3/16$, so $\sigma = \sqrt{3}/4$. Note that $np(1-p) = 2444 \cdot (1/4) \cdot (3/4) = 458.25 > 18$, so the rule of thumb justifies the use of the Central Limit Theorem. Hence

\[ \mathbb{P}[S_{2444} \le 530] = \mathbb{P}\left[ \frac{S_{2444} - 2444 \cdot (1/4)}{(\sqrt{3}/4)\sqrt{2444}} \le \frac{530 - 2444 \cdot (1/4)}{(\sqrt{3}/4)\sqrt{2444}} \right] \approx \mathbb{P}[Z \le -3.7838] \approx 0.0000772. \]

It is highly unlikely that, under our assumptions, such a proportion would have occurred. Therefore, we are justified in thinking that under our assumptions the actual proportion of girl-girl families is low. We then begin to suspect our assumptions, one of which was the implicit assumption that the birth of a girl is as likely as the birth of a boy, leading to equal proportions of the four types of families. In fact, there is ample evidence that the birth of a boy is more likely than the birth of a girl.
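
The whole computation takes a few lines of R:

n <- 2444
p <- 1/4
# standardized count of girl-girl families
z <- (530 - n * p) / (sqrt(p * (1 - p)) * sqrt(n))
cat("z =", z, "\n")                  # about -3.7838
cat("P[Z <= z] =", pnorm(z), "\n")   # about 0.0000772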

Illustration 3

We expect the normal distribution to apply whenever the numerical description of a state of a system results from numerous small additive effects, with no single or small group of effects dominant. Here is another illustration of that principle.

We can use the Central Limit Theorem to assess risk. Two large banks compete for customers to take out loans. The banks have comparable offerings. Assume that each bank has a certain amount of funds available for loans to customers. Any customers seeking loans beyond the available funds will cost the bank, either as a lost opportunity cost, or because the bank itself has to borrow to secure the funds to lend to the customer. If too few customers take out loans, then that also costs the bank, since now the bank has unused funds.

We create a simple mathematical model of this situation. We suppose that the loans are all of equal size and for definiteness each bank has funds available for a certain number (to be determined) of these loans. Then suppose $n$ customers select a bank independently and at random. Let $X_i = 1$ if customer $i$ selects bank H, with probability $1/2$, and $X_i = 0$ if the customer selects bank T, also with probability $1/2$. Then $S_n = \sum_{i=1}^{n} X_i$ is the number of loans from bank H to customers. Now there is some positive probability that more customers will turn up than the bank can accommodate. We can approximate this probability with the Central Limit Theorem:

\[ \mathbb{P}[S_n > s] = \mathbb{P}\left[ \frac{S_n - n/2}{(1/2)\sqrt{n}} > \frac{s - n/2}{(1/2)\sqrt{n}} \right] \approx \mathbb{P}\left[ Z > \frac{s - n/2}{(1/2)\sqrt{n}} \right] = \mathbb{P}\left[ Z > \frac{2s - n}{\sqrt{n}} \right]. \]

Now if $n$ is large enough that this probability is less than (say) 0.01, then the number of loans will be sufficient in 99 of 100 cases. Looking up the value in a normal probability table,

\[ \frac{2s - n}{\sqrt{n}} > 2.33, \]

so if n = 1000, then s = 537 will suffice. If both banks assume the same risk of sellout at 0.01, then each will have 537 for a total of 1074 loans, of which 74 will be unused. In the same way, if the bank is willing to assume a risk of 0.20, i.e. having enough loans in 80 of 100 cases, then they would need funds for 514 loans, and if the bank wants to have sufficient loans in 999 out of 1000 cases, the bank should have 549 loans available.
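
A short R computation reproduces these loan counts from the normal quantiles:

n <- 1000
for (risk in c(0.01, 0.20, 0.001)) {
    # smallest s with P[S_n > s] at most about risk:
    # (2s - n)/sqrt(n) >= qnorm(1 - risk)
    s <- ceiling((n + qnorm(1 - risk) * sqrt(n)) / 2)
    cat("risk", risk, "-> funds for", s, "loans\n")
}

This prints 537, 514, and 549, matching the values above.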

Now the possibilities for generalization and extension are apparent. A first generalization would be to allow the loan amounts to be random with some distribution; still we could apply the Central Limit Theorem to approximate the demand on available funds, as in the simulation sketch below. Second, the cost of either unused funds or lost business could be multiplied by the chance of occurring. The total of the products would be an expected cost, which could then be minimized.
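
Here is a simulation sketch of the first generalization (the Uniform(5000, 20000) loan-size distribution, the number of customers, and the trial count are all assumptions for illustration):

n <- 1000        # customers, each choosing bank H by a fair coin flip
trials <- 1000   # repeated simulated days
demand <- replicate(trials, {
    chooseH <- runif(n) <= 0.5
    sum(runif(n, 5000, 20000)[chooseH])
})
# funds needed so that the random demand on bank H is met in 99% of cases
quantile(demand, 0.99)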

Sources

The proofs in this section are adapted from Chapter 8, “Limit Theorems”, A First Course in Probability, by Sheldon Ross, Macmillan, 1976. Further examples and considerations come from Heads or Tails: An Introduction to Limit Theorems in Probability, by Emmanuel Lesigne, American Mathematical Society, Chapter 7, pages 29–74. Illustration 1 is adapted from Dicing with Death: Chance, Health, and Risk by Stephen Senn, Cambridge University Press, Cambridge, 2003. Illustration 2 is adapted from An Introduction to Probability Theory and Its Applications, Volume I, second edition, William Feller, J. Wiley and Sons, 1957, Chapter VII.

Algorithms, Scripts, Simulations

Algorithm

The experiment is flipping a coin $n$ times, repeating the experiment $k$ times. For each experiment, compute the standardized sum $Z_n = (S_n - np)/\sqrt{np(1-p)}$, then compute the proportion of the $k$ experiments for which $Z_n < a$. Compare this empirical proportion to the theoretical probability from the standard normal distribution.

Scripts

GeoGebra

GeoGebra script for the Central Limit Theorem.

R

R script for the Central Limit Theorem.

p <- 0.5
n <- 10000
k <- 1000
coinFlips <- array( 0+(runif(n*k) <= p), dim=c(n,k))
    # 0+ coerces Boolean to numeric
headsTotal <- colSums(coinFlips)
# 0..n binomial rv sample, size k

mu <- p
sigma <- sqrt(p*(1-p))
a <- 0.5
Zn <- (headsTotal - n*mu)/(sigma * sqrt(n))
prob <- sum( 0+(Zn < a) )/k
theoretical <- pnorm(a, mean=0, sd=1)
cat(sprintf("Empirical probability: %f \n", prob ))
cat(sprintf("Central Limit Theorem estimate: %f \n", theoretical))
Octave

Octave script for the Central Limit Theorem.

p = 0.5;
n = 10000;
k = 1000;

coinFlips = rand(n,k) <= p;
headsTotal = sum(coinFlips);
# 0..n binomial rv sample, size k

mu = p;
sigma = sqrt(p*(1-p));
a = 0.5;
Zn = (headsTotal - n*mu)/(sigma * sqrt(n));
prob = sum( (Zn < a) )/k;
theoretical = stdnormal_cdf(a);
disp("Empirical probability:"), disp( prob )
disp("Central Limit Theorem estimate:"), disp( theoretical )
Perl

Perl PDL script for the Central Limit Theorem.

use PDL;
use PDL::NiceSlice;

sub pnorm {
    my ( $x, $sigma, $mu ) = @_;
    $sigma = 1 unless defined($sigma);
    $mu    = 0 unless defined($mu);

    return 0.5 * ( 1 + erf( ( $x - $mu ) / ( sqrt(2) * $sigma ) ) );
}

$p = 0.5;
$n = 10000;
$k = 1000;

$coinFlips = random( $k, $n ) <= $p;

# note order of dims!!
$headsTotal = $coinFlips->transpose->sumover;

# 0..n binomial r.v. sample, size k
# note transpose, PDL likes x (row) direction for implicitly threaded operations

$mu    = $p;
$sigma = sqrt( $p * ( 1 - $p ) );
$a     = 0.5;
$zn    = ( $headsTotal - $n * $mu ) / ( $sigma * sqrt($n) );

$prob = ( ( $zn < $a )->sumover ) / $k;
$theoretical = pnorm($a);

print "Empirical probability: ",           $prob,        "\n";
print "Central Limit Theorem estimate: ",  $theoretical, "\n";
SciPy

Scientific Python script for the Central Limit Theorem.

import numpy as np
from scipy.stats import norm

p = 0.5
n = 10000
k = 1000

coinFlips = np.random.random((n, k)) <= p
# Note Booleans True for Heads and False for Tails
headsTotal = np.sum(coinFlips, axis=0)  # 0..n binomial r.v. sample, size k
# Note how Booleans act as 0 (False) and 1 (True)

mu = p
sigma = np.sqrt(p * (1 - p))
a = 0.5
Zn = (headsTotal - n * mu) / (sigma * np.sqrt(n))

prob = np.sum(Zn < a) / float(k)
# Note the cast to float to get a float proportion
theoretical = norm.cdf(a)

print("Empirical probability:", prob)
print("Central Limit Theorem estimate:", theoretical)

__________________________________________________________________________

Problems to Work for Understanding

  1. Let $X_1, X_2, \ldots, X_{10}$ be independent Poisson random variables, each with mean 1. First use the Markov Inequality to get a bound on $\mathbb{P}[X_1 + \cdots + X_{10} > 15]$. Next use the Central Limit Theorem to get an estimate of $\mathbb{P}[X_1 + \cdots + X_{10} > 15]$.
  2. A first simple assumption is that the daily change of a company’s stock on the stock market is a random variable with mean 0 and variance $\sigma^2$. That is, if $S_n$ represents the price of the stock on day $n$ with $S_0$ given, then
    \[ S_n = S_{n-1} + X_n, \quad n \ge 1, \]

    where $X_1, X_2, \ldots$ are independent, identically distributed continuous random variables with mean 0 and variance $\sigma^2$. (Note that this is an additive assumption about the change in a stock price. In the binomial tree models, we assumed that a stock’s price changes by a multiplicative factor up or down. We will have more to say about these two distinct models later.) Suppose that a stock’s price today is 100. If $\sigma^2 = 1$, what can you say about the probability that after 10 days, the stock’s price will be between 95 and 105 on the tenth day?

  3. Suppose you bought a stock at a price $b + c$, where $c > 0$, and the present price is $b$. (Too bad!) You have decided to sell the stock after 30 more trading days have passed. Assume that the daily change of the company’s stock on the stock market is a random variable with mean 0 and variance $\sigma^2$. That is, if $S_n$ represents the price of the stock on day $n$ with $S_0$ given, then
    \[ S_n = S_{n-1} + X_n, \quad n \ge 1, \]

    where $X_1, X_2, \ldots$ are independent, identically distributed continuous random variables with mean 0 and variance $\sigma^2$. Write an expression for the probability that you do not recover your purchase price.

  4. If you buy a lottery ticket in 50 independent lotteries, and in each lottery your chance of winning a prize is $1/100$, write down and evaluate the probability, and also approximate the probability using the Central Limit Theorem, of winning
    1. exactly one prize,
    2. at least one prize,
    3. at least two prizes.

    Explain with a reason whether or not you expect the approximation to be a good approximation.

  5. Find a number $k$ such that the probability is about 0.6 that the number of heads obtained in 1000 tosses of a fair coin will be between 440 and $k$.
  6. Find the moment generating function $\phi_X(t) = \mathbb{E}[\exp(tX)]$ of the random variable $X$ which takes values $+1$ with probability $1/2$ and $-1$ with probability $1/2$. Show directly (that is, without using Taylor polynomial approximations) that $\phi_X(t/\sqrt{n})^n \to \exp(t^2/2)$. (Hint: Use L’Hospital’s Rule to evaluate the limit, after taking logarithms of both sides.)
  7. A bank has $1,000,000 available to make car loans. The loans are in random amounts uniformly distributed from $5,000 to $20,000. How many loans can the bank make with 99% confidence that it will have enough money available?
  8. An insurance company is concerned about health insurance claims. Through an extensive audit, the company has determined that overstatements (claims for more health insurance money than is justified by the medical procedures performed) vary randomly with an exponential distribution $X$ with parameter $1/100$, which implies that $\mathbb{E}[X] = 100$ and $\mathrm{Var}[X] = 100^2$. The company can afford some overstatements simply because it is cheaper to pay than it is to investigate and counter-claim to recover the overstatement. Given 100 claims in a month, the company wants to know what amount of reserve will give 95% certainty that the overstatements do not exceed the reserve. (All units are in dollars.) What assumptions are you using?
  9. Modify the scripts to vary the upper bound a and the lower bound b (with the other parameters fixed) and observe the difference of the empirical probability and the theoretical probability.
  10. Modify the scripts to vary the probability p (with the other parameters fixed) and observe the difference of the empirical probability and the theoretical probability. Make a conjecture about the difference as a function of p (i.e., where the difference is increasing or decreasing).
  11. Modify the scripts to vary the number of trials n (with the other parameters fixed) and observe the difference of the empirical probability and the theoretical probability. Test the rate of decrease of the deviation with increasing n. Does it follow the predictions of the Berry-Esséen Theorem?

__________________________________________________________________________

Books

Reading Suggestion:

References

[1]   William Feller. An Introduction to Probability Theory and Its Applications, Volume I. John Wiley and Sons, third edition, 1973. QA 273 F3712.

[2]   Emmanuel Lesigne. Heads or Tails: An Introduction to Limit Theorems in Probability, volume 28 of Student Mathematical Library. American Mathematical Society, 2005.

[3]   Sheldon Ross. A First Course in Probability. Macmillan, 1976.

[4]   Stephen Senn. Dicing with Death: Chance, Health and Risk. Cambridge University Press, 2003.

__________________________________________________________________________

Links

Outside Readings and Links:

  1. Virtual Laboratories in Probability and Statistics. Search the page for Normal Approximation to the Binomial Distribution and then run the Binomial Timeline Experiment.
  2. Central Limit Theorem explanation. A good visual explanation of the application of the Central Limit Theorem to sampling means.
  3. Central Limit Theorem explanation. Another lecture demonstration of the application of the Central Limit Theorem to sampling means.

__________________________________________________________________________

I check all the information on each page for correctness and typographical errors. Nevertheless, some errors may occur and I would be grateful if you would alert me to such errors. I make every reasonable effort to present current and accurate information for public use, however I do not guarantee the accuracy or timeliness of information on this website. Your use of the information from this website is strictly voluntary and at your risk.

I have checked the links to external sites for usefulness. Links to external websites are provided as a convenience. I do not endorse, control, monitor, or guarantee the information contained in any external website. I don’t guarantee that the links are active at all times. Use the links here with the same caution as you would all information on the Internet. This website reflects the thoughts, interests and opinions of its author. They do not explicitly represent official positions or policies of my employer.

Information on this website is subject to change without notice.

Steve Dunbar’s Home Page, http://www.math.unl.edu/~sdunbar1

Email to Steve Dunbar, sdunbar1 at unl dot edu

Last modified: Processed from LaTeX source on July 23, 2016