Steven R. Dunbar
Department of Mathematics
203 Avery Hall
University of Nebraska-Lincoln
Lincoln, NE 68588-0130
http://www.math.unl.edu
Voice: 402-472-3731
Fax: 402-472-8466
Stochastic Processes and
Advanced Mathematical Finance
__________________________________________________________________________
Laws of Large Numbers
_______________________________________________________________________
Note: These pages are prepared with MathJax. MathJax is an open source JavaScript display engine for mathematics that works in all browsers. See http://mathjax.org for details on supported browsers, accessibility, copy-and-paste, and other features.
_______________________________________________________________________________________________
Mathematically Mature: may contain mathematics beyond calculus with proofs.
_______________________________________________________________________________________________
Consider a fair ($p=1/2=q$) coin tossing game carried out for 1000 tosses. Explain in a sentence what the “law of averages” says about the outcomes of this game.
_______________________________________________________________________________________________
In words, the Weak Law of Large Numbers says that the proportion of samples whose sample mean differs significantly from the population mean diminishes to zero as the sample size increases.
In words, the Strong Law of Large Numbers says that “almost every” sample mean approaches the population mean as the sample size increases.
__________________________________________________________________________
Lemma 1 (Markov’s Inequality). If $X$ is a random variable that takes only nonnegative values, then for any $a>0$:
$$\mathbb{P}\left[X \ge a\right] \le \frac{\mathbb{E}\left[X\right]}{a}.$$
Proof. Here is a proof for the case where $X$ is a continuous random variable with probability density $f$:
$$\begin{aligned}
\mathbb{E}\left[X\right] &= \int_{0}^{\infty} x f(x)\,dx \\
&= \int_{0}^{a} x f(x)\,dx + \int_{a}^{\infty} x f(x)\,dx \\
&\ge \int_{a}^{\infty} x f(x)\,dx \\
&\ge \int_{a}^{\infty} a f(x)\,dx \\
&= a \int_{a}^{\infty} f(x)\,dx \\
&= a\,\mathbb{P}\left[X \ge a\right].
\end{aligned}$$
Dividing both sides by $a$ gives the inequality. (The proof for the case where $X$ is a purely discrete random variable is similar, with summations replacing integrals. The proof for the general case is exactly as given, with $dF(x)$ replacing $f(x)\,dx$ and interpreting the integrals as Riemann-Stieltjes integrals.) □
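As a quick numerical sanity check of Markov’s inequality (not part of the original notes), here is a short Python sketch; the exponential distribution, the seed, the sample size, and the chosen values of $a$ are all illustrative choices:

```python
import random

# Sanity check of Markov's inequality, P[X >= a] <= E[X]/a, for a
# nonnegative random variable. X is exponential with mean 2 here; the
# distribution and sample size are illustrative, not from the notes.
random.seed(1)
n = 100_000
samples = [random.expovariate(0.5) for _ in range(n)]  # mean = 1/0.5 = 2

mean_estimate = sum(samples) / n
for a in (1.0, 2.0, 4.0, 8.0):
    tail = sum(1 for x in samples if x >= a) / n   # estimate of P[X >= a]
    bound = mean_estimate / a                      # Markov bound E[X]/a
    print(f"a={a}: P[X>=a] ~ {tail:.4f} <= bound {bound:.4f}")
```

For an exponential distribution the true tail probability decays exponentially in $a$, so the Markov bound is far from tight here; the point is only that the bound always holds.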
Lemma 2 (Chebyshev’s Inequality). If $X$ is a random variable with finite mean $\mu$ and variance $\sigma^2$, then for any value $k>0$:
$$\mathbb{P}\left[|X-\mu| \ge k\right] \le \frac{\sigma^2}{k^2}.$$
Proof. Since ${\left(X-\mu \right)}^{2}$ is a nonnegative random variable, we can apply Markov’s inequality (with $a={k}^{2}$) to obtain
$$\mathbb{P}\left[(X-\mu)^2 \ge k^2\right] \le \frac{\mathbb{E}\left[(X-\mu)^2\right]}{k^2} = \frac{\sigma^2}{k^2}.$$
But since ${\left(X-\mu \right)}^{2}\ge {k}^{2}$ if and only if $|X-\mu |\ge k$, the inequality above is equivalent to:
$$\mathbb{P}\left[|X-\mu| \ge k\right] \le \frac{\sigma^2}{k^2},$$
and the proof is complete. □
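A numerical sanity check of Chebyshev’s inequality (added here for illustration, not part of the original notes); the normal distribution, seed, and values of $k$ are illustrative choices:

```python
import random

# Sanity check of Chebyshev's inequality, P[|X - mu| >= k] <= sigma^2/k^2.
# X is normal with mu = 0 and sigma = 1 here, an illustrative choice only.
random.seed(2)
n = 100_000
mu, sigma = 0.0, 1.0
samples = [random.gauss(mu, sigma) for _ in range(n)]

for k in (1.0, 2.0, 3.0):
    tail = sum(1 for x in samples if abs(x - mu) >= k) / n
    bound = sigma**2 / k**2                 # Chebyshev bound
    print(f"k={k}: P[|X-mu|>=k] ~ {tail:.4f} <= bound {bound:.4f}")
```

For the normal distribution the bound is quite loose (at $k=2$ the true tail is about $0.046$ against the bound $0.25$), which reflects that Chebyshev’s inequality uses only the variance, not the full distribution.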
Theorem 3 (Weak Law of Large Numbers). Let ${X}_{1},{X}_{2},{X}_{3},\dots $ be independent, identically distributed random variables each with mean $\mu $ and variance ${\sigma}^{2}$. Let ${S}_{n}={X}_{1}+\cdots +{X}_{n}$. Then $S_n/n$ converges in probability to $\mu $. That is, for any $\epsilon > 0$:
$$\lim_{n\to\infty} \mathbb{P}\left[\left|\frac{S_n}{n} - \mu\right| > \epsilon\right] = 0.$$
Proof. Since the mean of a sum of random variables is the sum of the means, and scalars factor out of expectations:
$$\mathbb{E}\left[\frac{S_n}{n}\right] = \frac{1}{n}\sum_{i=1}^{n}\mathbb{E}\left[X_i\right] = \frac{n\mu}{n} = \mu.$$
Since the variance of a sum of independent random variables is the sum of the variances, and scalars factor out of variances as squares:
$$\mathrm{Var}\left[\frac{S_n}{n}\right] = \frac{1}{n^2}\sum_{i=1}^{n}\mathrm{Var}\left[X_i\right] = \frac{n\sigma^2}{n^2} = \frac{\sigma^2}{n}.$$
Fix a value $\epsilon > 0$. Then using elementary properties of probability measure and Chebyshev’s Inequality:
$$0 \le \mathbb{P}\left[\left|\frac{S_n}{n} - \mu\right| > \epsilon\right] \le \frac{\mathrm{Var}\left[S_n/n\right]}{\epsilon^2} = \frac{\sigma^2}{n\epsilon^2}.$$
Then by the squeeze theorem for limits,
$$\lim_{n\to\infty} \mathbb{P}\left[\left|\frac{S_n}{n} - \mu\right| > \epsilon\right] = 0.$$
□
Jacob Bernoulli originally proved the Weak Law of Large Numbers in 1713 for the special case when the ${X}_{i}$ are binomial random variables. Bernoulli had to create an ingenious proof to establish the result, since Chebyshev’s inequality was not known at the time. The theorem then became known as Bernoulli’s Theorem. Simeon Poisson proved a generalization of Bernoulli’s binomial Weak Law and ﬁrst called it the Law of Large Numbers. In 1929 the Russian mathematician Aleksandr Khinchin proved the general form of the Weak Law of Large Numbers presented here. Many other versions of the Weak Law are known, with hypotheses that do not require such stringent requirements as being identically distributed, and having ﬁnite variance.
Theorem 4 (Strong Law of Large Numbers). Let ${X}_{1},{X}_{2},{X}_{3},\dots $ be independent, identically distributed random variables each with mean $\mu $ and finite second moment $\mathbb{E}\left[{X}_{j}^{2}\right]<\infty $. Let ${S}_{n}={X}_{1}+\cdots +{X}_{n}$. Then $S_n/n$ converges with probability $1$ to $\mu $:
$$\mathbb{P}\left[\lim_{n\to\infty} \frac{S_n}{n} = \mu\right] = 1.$$
The proof of this theorem is beautiful and deep, but proving it would take us too far afield. The Russian mathematician Andrey Kolmogorov proved the Strong Law in the generality stated here, culminating a long series of investigations through the first half of the 20th century.
In probability theory a theorem that tells us how a sequence of probabilities converges is called a weak law. For coin tossing, the sequence of probabilities is the sequence of binomial probabilities associated with the first $n$ tosses. The Weak Law of Large Numbers says that if we take $n$ large enough, then the binomial probability of the mean over the first $n$ tosses differing “much” from the theoretical mean should be small. This is what is usually popularly referred to as the law of averages. However, this is a limit statement, and the Weak Law of Large Numbers above does not indicate the rate of convergence, nor the dependence of the rate of convergence on the difference $\epsilon$. Note furthermore that the Weak Law of Large Numbers in no way justifies the false notion called the “Gambler’s Fallacy”, namely that a long string of successive Heads indicates that a Tail “is due to occur soon”. The independence of the random variables completely eliminates that sort of prescience.
A strong law tells how the sequence of random variables as a sample path behaves in the limit. That is, among the inﬁnitely many sequences (or paths) of coin tosses we select one “at random” and then evaluate the sequence of means along that path. The Strong Law of Large Numbers says that with probability $1$ that sequence of means along that path will converge to the theoretical mean. The formulation of the notion of probability on an inﬁnite (in fact an uncountably inﬁnite) sample space requires mathematics beyond the scope of the course, partially accounting for the lack of a proof for the Strong Law here.
Note carefully the difference between the Weak Law of Large Numbers and the Strong Law. We do not simply move the limit inside the probability; these two results express different limits. The Weak Law is a statement that the group of finite-length experiments whose sample mean is close to the population mean approaches all of the possible experiments as the length increases. The Strong Law is an experiment-by-experiment statement: it says that “almost every” sequence has a sample mean that approaches the population mean. This is reflected in the subtle difference in notation here. In the Weak Law the probabilities are written with a subscript: ${\mathbb{P}}_{n}\left[\cdot \right]$, indicating this is a binomial probability distribution with parameter $n$ (and $p$). In the Strong Law, the probability is written without a subscript, indicating this is a probability measure on a sample space. Weak laws are usually much easier to prove than strong laws.
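The single-path behavior described by the Strong Law can be sketched in a few lines of Python (an illustration added here, not part of the original notes); the seed, number of tosses, and checkpoints are illustrative choices:

```python
import random

# One-path illustration of the Strong Law: pick a single sequence of fair
# coin tosses "at random" and watch its running sample mean settle toward
# the population mean p = 1/2. Parameters are illustrative choices.
random.seed(4)
p, n = 0.5, 100_000

heads = 0
running_means = {}
for toss in range(1, n + 1):
    heads += random.random() < p          # 1 for Heads, 0 for Tails
    if toss in (10, 100, 1_000, 10_000, 100_000):
        running_means[toss] = heads / toss

for toss, mean in running_means.items():
    print(f"after {toss:>6} tosses: sample mean = {mean:.4f}")
```

Early running means can wander far from $1/2$; the Strong Law guarantees only the eventual limit along almost every path, not any particular rate of approach.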
This section is adapted from Chapter 8, “Limit Theorems”, A First Course in Probability, by Sheldon Ross, Macmillan, 1976.
The experiment is to flip a coin $n$ times and to repeat the experiment $k$ times. Then compute the proportion of experiments for which the sample mean deviates from $p$ by more than $\epsilon$.
R script for the Law of Large Numbers.
Perl PDL script for the Law of Large Numbers.
Scientiﬁc Python script for the Law of Large Numbers.
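A minimal Python sketch of this experiment (a stand-in for the linked scripts, whose contents are not reproduced here); the values of $n$, $k$, and $\epsilon$ are illustrative choices:

```python
import random

# Flip a fair coin n times, repeat the experiment k times, and count the
# proportion of experiments whose sample mean deviates from p = 1/2 by
# more than epsilon. Compare with the Chebyshev bound sigma^2/(n eps^2).
random.seed(3)
p, n, k, epsilon = 0.5, 1000, 500, 0.05   # illustrative parameters

deviant = 0
for _ in range(k):
    heads = sum(random.random() < p for _ in range(n))
    if abs(heads / n - p) > epsilon:
        deviant += 1

proportion = deviant / k
chebyshev_bound = p * (1 - p) / (n * epsilon**2)
print(f"proportion deviating: {proportion:.4f}, "
      f"Chebyshev bound: {chebyshev_bound:.4f}")
```

With these parameters the Chebyshev bound is $0.1$, while the observed proportion is far smaller, illustrating both the Weak Law and the looseness of the bound.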
__________________________________________________________________________
Next find the exact probability $\mathbb{P}\left[{X}_{1}+\cdots +{X}_{10}>15\right]$, using the fact that the sum of independent Poisson random variables with parameters ${\lambda}_{1}$, ${\lambda}_{2}$ is again Poisson with parameter ${\lambda}_{1}+{\lambda}_{2}$.
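The rates $\lambda_i$ of the $X_i$ are not shown in the truncated statement above; assuming, purely for illustration, that each $X_i$ is Poisson with parameter $1$, the additivity fact makes the sum Poisson with parameter $10$, and the exact probability can be computed directly:

```python
import math

# Illustrative assumption (the exercise's rates are not shown here): each
# X_i ~ Poisson(1), so by additivity S = X_1 + ... + X_10 ~ Poisson(10).
lam = 10  # parameter of the sum under the lambda_i = 1 assumption

# P[S > 15] = 1 - P[S <= 15] = 1 - sum_{j=0}^{15} e^{-lam} lam^j / j!
cdf_15 = sum(math.exp(-lam) * lam**j / math.factorial(j) for j in range(16))
prob = 1 - cdf_15
print(f"P[X_1 + ... + X_10 > 15] = {prob:.4f}")
```

Under this assumption the exact probability is about $0.049$, which can be compared with the bound from Markov’s or Chebyshev’s inequality.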
__________________________________________________________________________
[1] Emmanuel Lesigne. Heads or Tails: An Introduction to Limit Theorems in Probability, volume 28 of Student Mathematical Library. American Mathematical Society, 2005.
[2] Sheldon Ross. A First Course in Probability. Macmillan, 1976.
[3] Sheldon M. Ross. Introduction to Probability Models. Elsevier, 8th edition, 2003.
__________________________________________________________________________
I check all the information on each page for correctness and typographical errors. Nevertheless, some errors may occur and I would be grateful if you would alert me to such errors. I make every reasonable eﬀort to present current and accurate information for public use, however I do not guarantee the accuracy or timeliness of information on this website. Your use of the information from this website is strictly voluntary and at your risk.
I have checked the links to external sites for usefulness. Links to external websites are provided as a convenience. I do not endorse, control, monitor, or guarantee the information contained in any external website. I don’t guarantee that the links are active at all times. Use the links here with the same caution as you would all information on the Internet. This website reﬂects the thoughts, interests and opinions of its author. They do not explicitly represent oﬃcial positions or policies of my employer.
Information on this website is subject to change without notice.
Steve Dunbar’s Home Page, http://www.math.unl.edu/~sdunbar1
Email to Steve Dunbar, sdunbar1 at unl dot edu
Last modified: Processed from LaTeX source on July 21, 2016