Steven R. Dunbar
Department of Mathematics
203 Avery Hall
Lincoln, NE 68588-0130
http://www.math.unl.edu
Voice: 402-472-3731
Fax: 402-472-8466

Stochastic Processes and

__________________________________________________________________________

The Central Limit Theorem

_______________________________________________________________________

Note: These pages are prepared with MathJax. MathJax is an open source JavaScript display engine for mathematics that works in all browsers. See http://mathjax.org for details on supported browsers, accessibility, copy-and-paste, and other features.

_______________________________________________________________________________________________

Rating

Mathematically Mature: may contain mathematics beyond calculus with proofs.

_______________________________________________________________________________________________

Section Starter Question

What is the most important probability distribution? Why do you choose that distribution as most important?

_______________________________________________________________________________________________

Key Concepts

1. The statement, meaning and proof of the Central Limit Theorem.
2. We expect the normal distribution to arise whenever the numerical description of a state of a system results from numerous small random additive eﬀects, with no single or small group of eﬀects dominant.

__________________________________________________________________________

Vocabulary

1. The Central Limit Theorem: Suppose that ${X}_{1},{X}_{2},\dots$ is a sequence of independent, identically distributed random variables, each with mean $\mu$ and finite variance ${\sigma }^{2}$, and let ${S}_{n}={X}_{1}+\cdots +{X}_{n}$. Let
${Z}_{n}=\left({S}_{n}-n\mu \right)∕\left(\sigma \sqrt{n}\right)=\left(1∕\sigma \right)\left({S}_{n}∕n-\mu \right)\sqrt{n}$

and let $Z$ be the “standard” normally distributed random variable with mean $0$ and variance $1$. Then ${Z}_{n}$ converges in distribution to $Z$, that is:

$\underset{n\to \infty }{lim}{ℙ}_{n}\left[{Z}_{n}\le a\right]={\int }_{-\infty }^{a}\frac{1}{\sqrt{2\pi }}exp\left(-{u}^{2}∕2\right)\phantom{\rule{3.26288pt}{0ex}}du.$

In words, the sum ${S}_{n}$, shifted by its mean and rescaled by its standard deviation, is approximately standard normally distributed.

__________________________________________________________________________

Mathematical Ideas

Convergence in Distribution

Lemma 1. Let ${X}_{1},{X}_{2},\dots$ be a sequence of random variables having cumulative distribution functions ${F}_{{X}_{n}}$ and moment generating functions ${\varphi }_{{X}_{n}}$. Let $X$ be a random variable having cumulative distribution function ${F}_{X}$ and moment generating function ${\varphi }_{X}$. If ${\varphi }_{{X}_{n}}\left(t\right)\to {\varphi }_{X}\left(t\right)$, for all $t$, then ${F}_{{X}_{n}}\left(t\right)\to {F}_{X}\left(t\right)$ for all $t$ at which ${F}_{X}\left(t\right)$ is continuous.

We say that the sequence ${X}_{i}$ converges in distribution to $X$ and we write

${X}_{i}\stackrel{\mathsc{𝒟}}{\to }X.$

Notice that $ℙ\left[a<{X}_{i}\le b\right]={F}_{{X}_{i}}\left(b\right)-{F}_{{X}_{i}}\left(a\right)\to {F}_{X}\left(b\right)-{F}_{X}\left(a\right)=ℙ\left[a<X\le b\right]$ at continuity points $a$ and $b$ of ${F}_{X}$, so convergence in distribution implies convergence of probabilities of events. Likewise, convergence of probabilities of events implies convergence in distribution.

This lemma is useful because it is routine to determine the pointwise limit of a sequence of functions using ideas from calculus. It is usually much easier to check the pointwise convergence of the moment generating functions than it is to check the convergence in distribution of the corresponding sequence of random variables.

We won’t prove this lemma, since it would take us too far aﬁeld into the theory of moment generating functions and corresponding distribution theorems. However, the proof is a routine application of ideas from the mathematical theory of real analysis.

Application: Weak Law of Large Numbers.

Here’s a simple representative example of using the convergence of the moment generating function to prove a useful result. We will prove a version of the Weak Law of Large numbers that does not require the ﬁnite variance of the sequence of independent, identically distributed random variables.

Theorem 2 (Weak Law of Large Numbers). Let ${X}_{1},\dots ,{X}_{n}$ be independent, identically distributed random variables each with mean $\mu$ and such that $𝔼\left[|{X}_{i}|\right]$ is finite. Let ${S}_{n}={X}_{1}+\cdots +{X}_{n}$. Then ${S}_{n}∕n$ converges in probability to $\mu$. That is:

$\underset{n\to \infty }{lim}{ℙ}_{n}\left[|{S}_{n}∕n-\mu |>𝜖\right]=0.$

Proof. If we denote the moment generating function of $X$ by $\varphi \left(t\right)$, then the moment generating function of

$\frac{{S}_{n}}{n}=\sum _{i=1}^{n}\frac{{X}_{i}}{n}$

is ${\left(\varphi \left(t∕n\right)\right)}^{n}$. The existence of the ﬁrst moment assures us that $\varphi \left(t\right)$ is diﬀerentiable at $0$ with a derivative equal to $\mu$. Therefore, by tangent-line approximation (ﬁrst-degree Taylor polynomial approximation)

$\varphi \left(\frac{t}{n}\right)=1+\mu \frac{t}{n}+{r}_{2}\left(t∕n\right)$

where ${r}_{2}\left(t∕n\right)$ is an error term such that

$\underset{n\to \infty }{lim}\frac{{r}_{2}\left(t∕n\right)}{\left(t∕n\right)}=0.$

This is equivalent to $\left(1∕t\right)\underset{n\to \infty }{lim}n{r}_{2}\left(t∕n\right)=0$ or just $\underset{n\to \infty }{lim}n{r}_{2}\left(t∕n\right)=0$, needed for taking the limit in (1). Then we need to consider

 $\varphi {\left(\frac{t}{n}\right)}^{n}={\left(1+\mu \frac{t}{n}+{r}_{2}\left(t∕n\right)\right)}^{n}.$ (1)

Taking the logarithm of ${\left(1+\mu \left(t∕n\right)+{r}_{2}\left(t∕n\right)\right)}^{n}$ and using L’Hospital’s Rule, we see that

$\varphi {\left(t∕n\right)}^{n}\to exp\left(\mu t\right).$

But this last expression is the moment generating function of the (degenerate) point mass distribution concentrated at $\mu$. Hence, by Lemma 1, ${S}_{n}∕n$ converges in distribution to the constant $\mu$, and for a constant limit convergence in distribution is equivalent to convergence in probability:

$\underset{n\to \infty }{lim}{ℙ}_{n}\left[|{S}_{n}∕n-\mu |>𝜖\right]=0.$ □
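The Weak Law is easy to illustrate numerically. The sketch below (Python, assuming numpy is available; the exponential distribution and the parameter values are illustrative choices, not from the text) estimates the probability of a deviation of the sample mean from $\mu$ larger than $𝜖$, for increasing $n$:

```python
import numpy as np

# Illustrative simulation of the Weak Law of Large Numbers:
# sample means of exponential random variables with mean mu = 2,
# checking that the chance of a deviation larger than eps shrinks with n.
rng = np.random.default_rng(7)
mu, eps, k = 2.0, 0.1, 1000   # true mean, tolerance, experiments per n

freqs = {}
for n in [100, 1000, 10000]:
    # k independent sample means, each computed from n exponential draws
    means = rng.exponential(scale=mu, size=(k, n)).mean(axis=1)
    freqs[n] = np.mean(np.abs(means - mu) > eps)
    print(n, freqs[n])
```

The printed frequencies shrink toward $0$ as $n$ grows, as the theorem predicts.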

The Central Limit Theorem

Theorem 3 (Central Limit Theorem). Let random variables ${X}_{1},\dots ,{X}_{n}$

• be independent and identically distributed;
• have common mean $𝔼\left[{X}_{i}\right]=\mu$ and common variance $Var\left[{X}_{i}\right]={\sigma }^{2}$; and
• have a common moment generating function ${\varphi }_{{X}_{i}}\left(t\right)=𝔼\left[{e}^{t{X}_{i}}\right]$ that exists and is finite in a neighborhood of $t=0$.

Consider ${S}_{n}={\sum }_{i=1}^{n}{X}_{i}.$ Let

• ${Z}_{n}=\left({S}_{n}-n\mu \right)∕\left(\sigma \sqrt{n}\right)=\left(1∕\sigma \right)\left({S}_{n}∕n-\mu \right)\sqrt{n}$; and

• $Z$ be the standard normally distributed random variable with mean $0$ and variance $1$.

Then ${Z}_{n}$ converges in distribution to $Z$, that is:

$\underset{n\to \infty }{lim}{ℙ}_{n}\left[{Z}_{n}\le a\right]={\int }_{-\infty }^{a}\left(1∕\sqrt{2\pi }\right)exp\left(-{u}^{2}∕2\right)\phantom{\rule{0.3em}{0ex}}du.$

Remark. The Central Limit Theorem is true even under the slightly weaker assumptions that ${X}_{1},\dots ,{X}_{n}$ are only independent and identically distributed with finite mean $\mu$ and finite variance ${\sigma }^{2}$, without the assumption that the moment generating function exists. However, the proof below using moment generating functions is simple and direct enough to justify using the additional hypothesis.

Proof. Assume at first that $\mu =0$ and ${\sigma }^{2}=1$. Assume also that the common moment generating function of the ${X}_{i}$ (they are identically distributed, so there is only one m.g.f.), denoted ${\varphi }_{X}\left(t\right)$, exists and is everywhere finite. Then the m.g.f. of ${X}_{i}∕\sqrt{n}$ is

${\varphi }_{X∕\sqrt{n}}\left(t\right)=𝔼\left[exp\left(t{X}_{i}∕\sqrt{n}\right)\right]={\varphi }_{X}\left(t∕\sqrt{n}\right).$

Recall that the m.g.f of a sum of independent r.v.s is the product of the m.g.f.s. Thus the m.g.f of ${S}_{n}∕\sqrt{n}$ is (note that here we used $\mu =0$ and ${\sigma }^{2}=1$)

${\varphi }_{{S}_{n}∕\sqrt{n}}\left(t\right)={\left[{\varphi }_{X}\left(t∕\sqrt{n}\right)\right]}^{n}.$

The quadratic approximation (second-degree Taylor polynomial expansion) of ${\varphi }_{X}\left(t\right)$ at $0$ is by calculus:

${\varphi }_{X}\left(t\right)={\varphi }_{X}\left(0\right)+{\varphi }_{X}^{\prime }\left(0\right)t+\left({\varphi }_{X}^{″}\left(0\right)∕2\right){t}^{2}+{r}_{3}\left(t\right)=1+{t}^{2}∕2+{r}_{3}\left(t\right)$

again since the hypotheses assume $𝔼\left[X\right]={\varphi }^{\prime }\left(0\right)=0$ and $Var\left[X\right]=𝔼\left[{X}^{2}\right]-{\left(𝔼\left[X\right]\right)}^{2}={\varphi }^{″}\left(0\right)-{\left({\varphi }^{\prime }\left(0\right)\right)}^{2}={\varphi }^{″}\left(0\right)=1$. Here ${r}_{3}\left(t\right)$ is an error term such that $\underset{t\to 0}{lim}{r}_{3}\left(t\right)∕{t}^{2}=0$. Thus,

$\varphi \left(t∕\sqrt{n}\right)=1+{t}^{2}∕\left(2n\right)+{r}_{3}\left(t∕\sqrt{n}\right)$

implying that

${\varphi }_{{S}_{n}∕\sqrt{n}}\left(t\right)={\left[1+{t}^{2}∕\left(2n\right)+{r}_{3}\left(t∕\sqrt{n}\right)\right]}^{n}.$

Now by some standard results from calculus,

${\left[1+{t}^{2}∕\left(2n\right)+{r}_{3}\left(t∕\sqrt{n}\right)\right]}^{n}\to exp\left({t}^{2}∕2\right)$

as $n\to \infty$. (If the reader needs convincing, it’s easy to show that

$nlog\left(1+{t}^{2}∕\left(2n\right)+{r}_{3}\left(t∕\sqrt{n}\right)\right)\to {t}^{2}∕2,$

using L’Hospital’s Rule to account for the ${r}_{3}\left(t∕\sqrt{n}\right)$ term.)

To handle the general case, consider the standardized random variables $\left({X}_{i}-\mu \right)∕\sigma$, each of which now has mean $0$ and variance $1$ and apply the result. □
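The key limit in the proof, ${\left[{\varphi }_{X}\left(t∕\sqrt{n}\right)\right]}^{n}\to exp\left({t}^{2}∕2\right)$, can also be checked numerically. The sketch below (Python with numpy; the standardized exponential summand is an illustrative choice of distribution) evaluates both sides at a fixed $t$:

```python
import numpy as np

# Numerical check of [phi_X(t/sqrt(n))]^n -> exp(t^2/2) for a
# standardized summand: X = E - 1 with E ~ Exponential(1), so
# E[X] = 0, Var[X] = 1, and phi_X(t) = exp(-t)/(1 - t) for t < 1.
def phi(t):
    return np.exp(-t) / (1.0 - t)

t = 0.8
target = np.exp(t**2 / 2)   # m.g.f. of the standard normal at t
for n in [10, 100, 1000, 10000]:
    print(n, phi(t / np.sqrt(n))**n)
print("limit:", target)
```

The printed values approach the limit $exp\left({t}^{2}∕2\right)$ as $n$ grows.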

Abraham de Moivre proved the first version of the central limit theorem around 1733 in the special case when the ${X}_{i}$ are binomial random variables with $p=1∕2=q$. Pierre-Simon Laplace subsequently extended the proof to the case of arbitrary $p\ne q$. Laplace also discovered the more general form of the Central Limit Theorem presented here. His proof, however, was not completely rigorous, and in fact cannot be made completely rigorous. A truly rigorous proof of the Central Limit Theorem was first presented by the Russian mathematician Aleksandr Liapunov in 1901–1902. As a result, the Central Limit Theorem (or a slightly stronger version of the Central Limit Theorem) is occasionally referred to as Liapunov’s theorem. A theorem with weaker hypotheses but an equally strong conclusion is Lindeberg’s Theorem of 1922. It says that the sequence of random variables need not be identically distributed; instead they need only have zero means, with each individual variance small compared to the sum of the variances.

Accuracy of the Approximation by the Central Limit Theorem

The statement of the Central Limit Theorem does not say how good the approximation is. One rule of thumb is that the approximation given by the Central Limit Theorem applied to a sequence of Bernoulli random trials or equivalently to a binomial random variable is acceptable when $np\left(1-p\right)>18$ [2, page 34], [3, page 134]. The normal approximation to a binomial deteriorates as the interval $\left(a,b\right)$ over which the probability is computed moves away from the binomial’s mean value $np$. Another rule of thumb is that the normal approximation is acceptable when $n\ge 30$ for all “reasonable” probability distributions.

The Berry-Esséen Theorem gives an explicit bound: For independent, identically distributed random variables ${X}_{i}$ with $\mu =𝔼\left[{X}_{i}\right]=0$, ${\sigma }^{2}=𝔼\left[{X}_{i}^{2}\right]$, and $\rho =𝔼\left[|{X}_{i}{|}^{3}\right]$, then

$\left|ℙ\left[{S}_{n}∕\left(\sigma \sqrt{n}\right)\le a\right]-\Phi \left(a\right)\right|\le c\rho ∕\left({\sigma }^{3}\sqrt{n}\right),$

where $\Phi$ is the standard normal cumulative distribution function and $c$ is a universal constant; Esséen’s original value was $c=7.59$, and current estimates give $c<0.48$.
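A small computation can compare the Berry-Esséen bound with the actual worst-case error for Bernoulli summands. This sketch assumes Python with scipy; the constant $0.4748$ is a published estimate of the universal constant, not a value from the text:

```python
import numpy as np
from scipy.stats import binom, norm

# Berry-Esseen check for Bernoulli(p) summands.
n, p = 1000, 0.5
sigma = np.sqrt(p * (1 - p))
rho = p * (1 - p) * (p**2 + (1 - p)**2)    # E|X_i - mu|^3 for Bernoulli(p)

k = np.arange(n + 1)
exact = binom.cdf(k, n, p)                             # P[S_n <= k], exact
approx = norm.cdf((k - n * p) / (sigma * np.sqrt(n)))  # CLT approximation
worst = np.abs(exact - approx).max()
bound = 0.4748 * rho / (sigma**3 * np.sqrt(n))         # Shevtsova's estimate of c
print("worst error:", worst, " Berry-Esseen bound:", bound)
```

For a fair coin the worst error comes at the center of the distribution and sits just under the bound, showing the bound is quite tight in this case.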

Illustration 1

Figure 1 is a graphical illustration of the Central Limit Theorem. More precisely, this is an illustration of the de Moivre-Laplace version, the approximation of the binomial distribution with the normal distribution.

The figure is actually a non-centered and unscaled illustration, since the binomial random variable ${S}_{n}$ is neither shifted by the mean nor normalized to unit variance. Therefore, the binomial and the corresponding approximating normal are both centered at $𝔼\left[{S}_{n}\right]=np$. The variance of the approximating normal is ${\sigma }^{2}=npq$ (standard deviation $\sqrt{npq}$), the bars denoting the binomial probabilities all have unit width, and the heights of the bars are the actual binomial probabilities.
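The comparison in Figure 1 can be made in numbers rather than pictures. This Python sketch (assuming scipy; the parameters $n=100$, $p=1∕2$ are illustrative) measures the largest gap between a unit-width binomial bar and the matching normal density:

```python
import numpy as np
from scipy.stats import binom, norm

# Unit-width bars of binomial probabilities versus the normal density
# with matching mean np and standard deviation sqrt(npq).
n, p = 100, 0.5
k = np.arange(n + 1)
bars = binom.pmf(k, n, p)
curve = norm.pdf(k, loc=n * p, scale=np.sqrt(n * p * (1 - p)))
print("largest gap between bar height and density:", np.abs(bars - curve).max())
```

The gap is a small fraction of the peak bar height, which is why the curve in the figure shadows the bars so closely.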

Illustration 2

From the Central Limit Theorem we expect the normal distribution to apply whenever an outcome results from numerous small additive effects, with no single or small group of effects dominant. Here is a standard illustration of that principle.

Consider the following data from the National Longitudinal Survey of Youth (NLSY). This study started with 12,000 respondents aged 14-21 years in 1979. By 1994, the respondents were 29-36 years old and had 15,000 children among them. Of the respondents, 2,444 had exactly two children. In these 2,444 families, the distribution of children was boy-boy: 582; girl-girl: 530; boy-girl: 666; and girl-boy: 666. It appears that the number of girl-girl family sequences is low compared to the other combinations. Our intuition tells us that all combinations should be equally likely and should appear in roughly equal proportions. We will assess this intuition with the Central Limit Theorem.

Consider a sequence of 2,444 trials with each of the two-child families. Let ${X}_{i}=1$ (success) if the two-child family is girl-girl, and ${X}_{i}=0$ (failure) if the two-child family is otherwise. We are interested in the probability distribution of

${S}_{2444}=\sum _{i=1}^{2444}{X}_{i}.$

In particular, we are interested in the probability ${ℙ}_{n}\left[{S}_{2444}\le 530\right]$, that is, the probability of seeing 530 or fewer girl-girl families in a sample of 2,444 families. We can use the Central Limit Theorem to estimate this probability.

We are assuming the family “success” variables ${X}_{i}$ are independent and identically distributed, a reasonable but arguable assumption. Nevertheless, without this assumption we cannot justify the use of the Central Limit Theorem, so we adopt the assumption. Then $\mu =𝔼\left[{X}_{i}\right]=\left(1∕4\right)\cdot 1+\left(3∕4\right)\cdot 0=1∕4$ and $Var\left[{X}_{i}\right]=\left(1∕4\right)\left(3∕4\right)=3∕16$, so $\sigma =\sqrt{3}∕4$. Note that $2444\cdot \left(1∕4\right)\cdot \left(3∕4\right)=458.25>18$, so the rule of thumb justifies the use of the Central Limit Theorem. Hence

$\begin{array}{llll}\hfill {ℙ}_{n}\left[{S}_{2444}\le 530\right]& ={ℙ}_{n}\left[\frac{{S}_{2444}-2444\cdot \left(1∕4\right)}{\left(\sqrt{3}∕4\cdot \sqrt{2444}\right)}\le \frac{530-2444\cdot \left(1∕4\right)}{\left(\sqrt{3}∕4\cdot \sqrt{2444}\right)}\right]\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & \approx ℙ\left[Z\le -3.7838\right]\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & \approx 0.0000772\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\end{array}$

It is highly unlikely that under our assumptions such a proportion would have occurred. Therefore, we are justified in thinking that under our assumptions the actual proportion of girl-girl families is low. We then begin to suspect our assumptions, one of which was the implicit assumption that the birth of a girl is as likely as the birth of a boy, leading to equal proportions of the four types of families. In fact, there is ample evidence that the birth of a boy is more likely than the birth of a girl.
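The computation above is easy to reproduce (a short Python sketch, assuming scipy is available):

```python
from math import sqrt
from scipy.stats import norm

# Reproducing the girl-girl computation: success = girl-girl family,
# a Bernoulli(1/4) trial, repeated n = 2444 times.
n, observed = 2444, 530
mu, sigma = 1 / 4, sqrt(3) / 4          # mean and standard deviation per trial
z = (observed - n * mu) / (sigma * sqrt(n))
print("z =", z, " P[S_2444 <= 530] ~", norm.cdf(z))
```

The standardized value is about $-3.7838$ and the normal tail probability about $0.0000772$, matching the display above.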

Illustration 3

We expect the normal distribution to apply whenever the numerical description of a state of a system results from numerous small additive eﬀects, with no single or small group of eﬀects dominant. Here is another illustration of that principle.

We can use the Central Limit Theorem to assess risk. Two large banks compete for customers to take out loans. The banks have comparable offerings. Assume that each bank has a certain amount of funds available for loans to customers. Any customer seeking a loan beyond the available funds will cost the bank, either as a lost opportunity, or because the bank itself has to borrow to secure the funds to lend to the customer. If too few customers take out loans, then that also costs the bank, since the bank then has unused funds.

We create a simple mathematical model of this situation. We suppose that the loans are all of equal size and, for definiteness, each bank has funds available for a certain number (to be determined) of these loans. Then suppose $n$ customers select a bank independently and at random. Let ${X}_{i}=1$ if customer $i$ selects bank H, with probability $1∕2$, and ${X}_{i}=0$ if customer $i$ selects bank T, also with probability $1∕2$. Then ${S}_{n}={\sum }_{i=1}^{n}{X}_{i}$ is the number of loans from bank H to customers. Now there is some positive probability that more customers will turn up than the bank can accommodate. We can approximate this probability with the Central Limit Theorem:

$\begin{array}{llll}\hfill ℙ\left[{S}_{n}>s\right]& ={ℙ}_{n}\left[\left({S}_{n}-n∕2\right)∕\left(\left(1∕2\right)\sqrt{n}\right)>\left(s-n∕2\right)∕\left(\left(1∕2\right)\sqrt{n}\right)\right]\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & \approx ℙ\left[Z>\left(s-n∕2\right)∕\left(\left(1∕2\right)\sqrt{n}\right)\right]\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & =ℙ\left[Z>\left(2s-n\right)∕\sqrt{n}\right]\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\end{array}$

Now if $s$ is large enough that this probability is less than (say) $0.01$, then the available funds will be sufficient in 99 of 100 cases. Looking up the value in a normal probability table,

$\frac{2s-n}{\sqrt{n}}>2.33$

so if $n=1000$, then $s=537$ will suffice. If both banks assume the same risk of sellout at $0.01$, then each will have funds for $537$ loans, a total of 1074 loans for the 1000 customers, of which on average 74 will be unused. In the same way, if the bank is willing to assume a risk of $0.20$, i.e. having enough funds in 80 of 100 cases, then it would need funds for 514 loans, and if the bank wants to have sufficient funds in 999 out of 1000 cases, the bank should have 549 loans available.
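These loan counts can be reproduced with a short computation (a Python sketch assuming scipy; `norm.ppf` supplies the normal critical value in place of the table lookup):

```python
import math
from scipy.stats import norm

# Smallest s with P[S_n > s] <= risk, from (2s - n)/sqrt(n) >= z,
# i.e. s >= (n + z*sqrt(n))/2 with z the normal critical value.
n = 1000
for risk in [0.20, 0.01, 0.001]:
    z = norm.ppf(1 - risk)
    s = math.ceil((n + z * math.sqrt(n)) / 2)
    print(risk, s)
```

The loop prints $s = 514$, $537$, and $549$ for risks $0.20$, $0.01$, and $0.001$, agreeing with the values in the text.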

Now the possibilities for generalization and extension are apparent. A first generalization would be to allow the loan amounts to be random with some distribution. We could still apply the Central Limit Theorem to approximate the demand on available funds. Second, the cost of either unused funds or lost business could be multiplied by the chance of occurring. The total of the products would be an expected cost, which could then be minimized.

Sources

The proofs in this section are adapted from Chapter 8, “Limit Theorems”, A First Course in Probability, by Sheldon Ross, Macmillan, 1976. Further examples and considerations come from Heads or Tails: An Introduction to Limit Theorems in Probability, by Emmanuel Lesigne, American Mathematical Society, Chapter 7, pages 29–74. Illustration 1 is adapted from Dicing with Death: Chance, Health, and Risk by Stephen Senn, Cambridge University Press, Cambridge, 2003. Illustration 2 is adapted from An Introduction to Probability Theory and Its Applications, Volume I, second edition, William Feller, J. Wiley and Sons, 1957, Chapter VII.

Algorithms, Scripts, Simulations

Algorithm

The experiment is flipping a coin $n$ times, and repeating the experiment $k$ times. Then compute the proportion of experiments for which the standardized deviation of the sample sum from $np$ (the deviation divided by $\sqrt{p\left(1-p\right)n}$) is less than $a$. Compare this empirical proportion to the theoretical probability from the standard normal distribution.

Scripts

Geogebra
R
p <- 0.5
n <- 10000
k <- 1000
coinFlips <- array( 0+(runif(n*k) <= p), dim=c(n,k))
     # 0+ coerces Boolean to numeric
headsTotal <- colSums(coinFlips)
# 0..n binomial rv sample, size k

mu <- p
sigma <- sqrt(p*(1-p))
a <- 0.5
Zn <- (headsTotal - n*mu)/(sigma * sqrt(n))
prob <- sum( 0+(Zn < a) )/k
theoretical <- pnorm(a, mean=0, sd=1)
cat(sprintf("Empirical probability: %f \n", prob ))
cat(sprintf("Central Limit Theorem estimate: %f \n", theoretical))
Octave
p = 0.5;
n = 10000;
k = 1000;

coinFlips = rand(n,k) <= p;
headsTotal = sum(coinFlips);
# 0..n binomial rv sample, size k

mu = p;
sigma = sqrt(p*(1-p));
a = 0.5;
Zn = (headsTotal - n*mu)/(sigma * sqrt(n));
prob = sum( (Zn < a) )/k;
theoretical = stdnormal_cdf(a);
disp("Empirical probability:"), disp( prob )
disp("Central Limit Theorem estimate:"), disp( theoretical )
Perl
use PDL;
use PDL::NiceSlice;

sub pnorm {
    my ( $x, $sigma, $mu ) = @_;
    $sigma = 1 unless defined($sigma);
    $mu    = 0 unless defined($mu);
    return 0.5 * ( 1 + erf( ( $x - $mu ) / ( sqrt(2) * $sigma ) ) );
}

$p = 0.5;
$n = 10000;
$k = 1000;

$coinFlips = random( $k, $n ) <= $p;
# note order of dims!!
$headsTotal = $coinFlips->transpose->sumover;
# 0..n binomial r.v. sample, size k
# note transpose, PDL likes x (row) direction for implicitly threaded operations

$mu    = $p;
$sigma = sqrt( $p * ( 1 - $p ) );
$a     = 0.5;
$zn    = ( $headsTotal - $n * $mu ) / ( $sigma * sqrt($n) );

$prob        = ( ( $zn < $a )->sumover ) / $k;
$theoretical = pnorm($a);

print "Empirical probability: ",          $prob,        "\n";
print "Central Limit Theorem estimate: ", $theoretical, "\n";
SciPy
import numpy as np
from scipy.stats import norm

p = 0.5
n = 10000
k = 1000

coinFlips = np.random.random((n, k)) <= p
# Note Booleans True for Heads and False for Tails
headsTotal = np.sum(coinFlips, axis=0)  # 0..n binomial r.v. sample, size k
# Note how Booleans act as 0 (False) and 1 (True)

mu = p
sigma = np.sqrt(p * (1 - p))
a = 0.5
Zn = (headsTotal - n * mu) / (sigma * np.sqrt(n))

prob = np.sum(Zn < a) / k
theoretical = norm.cdf(a)

print("Empirical probability: ", prob)
print("Central Limit Theorem estimate: ", theoretical)

__________________________________________________________________________

Problems to Work for Understanding

1. Let ${X}_{1},{X}_{2},\dots ,{X}_{10}$ be independent Poisson random variables with mean $1$. First use the Markov Inequality to get a bound on $Pr\left[{X}_{1}+\cdots +{X}_{10}>15\right]$. Next use the Central Limit Theorem to get an estimate of $Pr\left[{X}_{1}+\cdots +{X}_{10}>15\right]$.
2. A first simple assumption is that the daily change of a company’s stock on the stock market is a random variable with mean $0$ and variance ${\sigma }^{2}$. That is, if ${S}_{n}$ represents the price of the stock on day $n$ with ${S}_{0}$ given, then ${S}_{n}={S}_{n-1}+{X}_{n}$, $n\ge 1$, where ${X}_{1},{X}_{2},\dots$ are independent, identically distributed continuous random variables with mean $0$ and variance ${\sigma }^{2}$. (Note that this is an additive assumption about the change in a stock price. In the binomial tree models, we assumed that a stock’s price changes by a multiplicative factor up or down. We will have more to say about these two distinct models later.) Suppose that a stock’s price today is $100$. If ${\sigma }^{2}=1$, what can you say about the probability that after $10$ days, the stock’s price will be between $95$ and $105$ on the tenth day?
3. Suppose you bought a stock at a price $b+c$, where $c>0$, and the present price is $b$. (Too bad!) You have decided to sell the stock after 30 more trading days have passed. Assume that the daily change of the company’s stock on the stock market is a random variable with mean $0$ and variance ${\sigma }^{2}$. That is, if ${S}_{n}$ represents the price of the stock on day $n$ with ${S}_{0}$ given, then ${S}_{n}={S}_{n-1}+{X}_{n}$, $n\ge 1$, where ${X}_{1},{X}_{2},\dots$ are independent, identically distributed continuous random variables with mean $0$ and variance ${\sigma }^{2}$. Write an expression for the probability that you do not recover your purchase price.
4. If you buy a lottery ticket in 50 independent lotteries, and in each lottery your chance of winning a prize is $1∕100$, write down and evaluate the probability of winning, and also approximate the probability using the Central Limit Theorem:
   1. exactly one prize,
   2. at least one prize,
   3. at least two prizes.
   Explain with a reason whether or not you expect the approximation to be a good approximation.
5. Find a number $k$ such that the probability is about $0.6$ that the number of heads obtained in $1000$ tossings of a fair coin will be between $440$ and $k$.
6. Find the moment generating function ${\varphi }_{X}\left(t\right)=𝔼\left[exp\left(tX\right)\right]$ of the random variable $X$ which takes values $1$ with probability $1∕2$ and $-1$ with probability $1∕2$. Show directly (that is, without using Taylor polynomial approximations) that ${\varphi }_{X}{\left(t∕\sqrt{n}\right)}^{n}\to exp\left({t}^{2}∕2\right)$. (Hint: Use L’Hospital’s Rule to evaluate the limit, after taking logarithms of both sides.)
7. A bank has $1,000,000 available to make car loans. The loans are in random amounts uniformly distributed from $5,000 to $20,000. How many loans can the bank make with 99% confidence that it will have enough money available?
8. An insurance company is concerned about health insurance claims. Through an extensive audit, the company has determined that overstatements (claims for more health insurance money than is justified by the medical procedures performed) vary randomly with an exponential distribution $X$ with parameter $1∕100$, which implies that $𝔼\left[X\right]=100$ and $Var\left[X\right]={100}^{2}$. The company can afford some overstatements simply because it is cheaper to pay than it is to investigate and counter-claim to recover the overstatement. Given $100$ claims in a month, the company wants to know what amount of reserve will give $95$% certainty that the overstatements do not exceed the reserve. (All units are in dollars.) What assumptions are you using?
9. Modify the scripts to vary the upper bound $a$ and lower bound $b$ (with the other parameters fixed) and observe the difference of the empirical probability and the theoretical probability.
10. Modify the scripts to vary the probability $p$ (with the other parameters fixed) and observe the difference of the empirical probability and the theoretical probability. Make a conjecture about the difference as a function of $p$ (i.e., where the difference is increasing or decreasing).
11. Modify the scripts to vary the number of trials $n$ (with the other parameters ﬁxed) and observe the diﬀerence of the empirical probability and the theoretical probability. Test the rate of decrease of the deviation with increasing $n$. Does it follow the predictions of the Berry-Esséen Theorem?

__________________________________________________________________________

References

[1]   William Feller. An Introduction to Probability Theory and Its Applications, Volume I. John Wiley and Sons, third edition, 1973. QA 273 F3712.

[2]   Emmanuel Lesigne. Heads or Tails: An Introduction to Limit Theorems in Probability, volume 28 of Student Mathematical Library. American Mathematical Society, 2005.

[3]   Sheldon Ross. A First Course in Probability. Macmillan, 1976.

[4]   Stephen Senn. Dicing with Death: Chance, Health and Risk. Cambridge University Press, 2003.

__________________________________________________________________________

1. Virtual Laboratories in Probability and Statistics. Search the page for Normal Approximation to the Binomial Distribution and then run the Binomial Timeline Experiment.
2. Central Limit Theorem explanation. A good visual explanation of the application of the Central Limit Theorem to sampling means.
3. Central Limit Theorem explanation. Another lecture demonstration of the application of the Central Limit Theorem to sampling means.

__________________________________________________________________________

I check all the information on each page for correctness and typographical errors. Nevertheless, some errors may occur and I would be grateful if you would alert me to such errors. I make every reasonable eﬀort to present current and accurate information for public use, however I do not guarantee the accuracy or timeliness of information on this website. Your use of the information from this website is strictly voluntary and at your risk.

I have checked the links to external sites for usefulness. Links to external websites are provided as a convenience. I do not endorse, control, monitor, or guarantee the information contained in any external website. I don’t guarantee that the links are active at all times. Use the links here with the same caution as you would all information on the Internet. This website reﬂects the thoughts, interests and opinions of its author. They do not explicitly represent oﬃcial positions or policies of my employer.

Information on this website is subject to change without notice.

Email to Steve Dunbar, sdunbar1 at unl dot edu

Last modiﬁed: Processed from LATEX source on July 23, 2016