Steven R. Dunbar
Department of Mathematics
203 Avery Hall
University of Nebraska-Lincoln
Lincoln, NE 68588-0130
Probability Theory and Stochastic Processes
Law of the Iterated Logarithm
Mathematicians Only: prolonged scenes of intense rigor.
We again consider the number of successes in a coin-tossing game. That is, we consider the sum $S_n = Y_1 + Y_2 + \cdots + Y_n$ where the independent, identically distributed random variables in the sum are the Bernoulli random variables $Y_i = 1$ with probability $1/2$ and $Y_i = 0$ with probability $1/2$. Note that the mean is $\mu = 1/2$ and the variance is $\sigma^2 = 1/4$ for each of the summands $Y_i$.
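As a quick sanity check on this setup, the following short simulation (a minimal sketch; the choice of $n$, the trial count, and the seed are illustrative, not from the notes) estimates the mean and variance of $S_n$:

```python
import random

# Estimate the mean and variance of S_n, the number of successes
# in n fair coin flips, by simulation.
n, trials = 100, 20000
rng = random.Random(7)  # fixed seed so the run is reproducible

samples = [sum(rng.randint(0, 1) for _ in range(n)) for _ in range(trials)]
mean = sum(samples) / trials
var = sum((s - mean) ** 2 for s in samples) / trials

# Theory: E[S_n] = n/2 = 50 and Var[S_n] = n/4 = 25,
# since each summand has mean 1/2 and variance 1/4.
print(mean, var)
```

The sample mean and variance should land close to $n/2 = 50$ and $n/4 = 25$.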
The Strong Law of Large Numbers says that
$$\lim_{n \to \infty} \frac{S_n}{n} = \frac{1}{2}$$
with probability $1$ in the sample space of all possible coin flips. This says the denominator $n$ is "too strong", it "condenses out" all variation in the sum $S_n$.
The Central Limit Theorem applied to this sequence of coin flips says
$$\frac{S_n - n/2}{\sqrt{n}/2} \to Z$$
where $Z$ is a $N(0,1)$ normal random variable and the limit is interpreted as convergence in distribution. In fact, this implies that for large $n$ about 68% of the points in the sample space of all possible coin flips satisfy
$$\left| \frac{S_n - n/2}{\sqrt{n}/2} \right| \le 1$$
and about 95% of the points in the sample space of all possible coin flips satisfy
$$\left| \frac{S_n - n/2}{\sqrt{n}/2} \right| \le 2.$$
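The 68% and 95% figures come from the standard normal distribution and can be checked directly from the error function, using only the standard library (a small sketch):

```python
from math import erf, sqrt

def std_normal_cdf(x):
    # Phi(x) for a standard normal random variable Z, via the identity
    # Phi(x) = (1 + erf(x / sqrt(2))) / 2
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# P(|Z| <= 1) is about 0.6827 and P(|Z| <= 2) is about 0.9545
within_one = std_normal_cdf(1.0) - std_normal_cdf(-1.0)
within_two = std_normal_cdf(2.0) - std_normal_cdf(-2.0)
print(round(within_one, 4), round(within_two, 4))
```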
This says the denominator $\sqrt{n}$ is "too weak", it doesn't condense out enough information. In fact, using the Kolmogorov zero-one law and the Central Limit Theorem, almost surely
$$\limsup_{n \to \infty} \frac{S_n - n/2}{\sqrt{n}/2} = +\infty$$
and almost surely
$$\liminf_{n \to \infty} \frac{S_n - n/2}{\sqrt{n}/2} = -\infty.$$
The Strong Law and the Central Limit Theorem together suggest that "somewhere in between $\sqrt{n}$ and $n$" we might be able to make stronger statements about convergence and the variation in the sequence $S_n$.
In fact, Hausdorff's estimate tells us:
$$\lim_{n \to \infty} \frac{S_n - n/2}{n^{1/2 + \epsilon}} = 0$$
with probability $1$ in the sample space of all possible coin flips for all values of $\epsilon > 0$. This says the denominator $n^{1/2+\epsilon}$ is "still too strong", it condenses out too much information.
Even better, Hardy and Littlewood's estimate tells us:
$$|S_n - n/2| < C \sqrt{n \ln n} \quad \text{for all sufficiently large } n$$
with probability $1$ in the sample space of all possible coin flips for all values of $C \ge 1$. In a way, this says $\sqrt{n \ln n}$ is "still a little too strong", it condenses out most information.
Khinchin's Law of the Iterated Logarithm has a denominator that is "just right." It tells us very precisely how the deviations of the sums $S_n$ from the mean $n/2$ vary with $n$. Using a method due to Erdős, it is possible to refine the law even more, but for these notes a refinement is probably past the point of diminishing returns.
Like the Central Limit Theorem, the Law of the Iterated Logarithm illustrates in an astonishingly precise way that even completely random sequences obey precise mathematical laws.
Khinchin's Law of the Iterated Logarithm says that:

Almost surely, for all $\epsilon > 0$, there exist infinitely many $n$ such that
$$S_n - \frac{n}{2} > (1 - \epsilon)\, \sqrt{2 \ln\ln n} \cdot \frac{\sqrt{n}}{2};$$
and furthermore, almost surely, for all $\epsilon > 0$, for every $n$ larger than a threshold value depending on $\epsilon$ and the sample point,
$$S_n - \frac{n}{2} < (1 + \epsilon)\, \sqrt{2 \ln\ln n} \cdot \frac{\sqrt{n}}{2}.$$
These appear in a slightly non-standard way, with the additional factor $\sqrt{2 \ln\ln n}$ times the standard deviation $\sqrt{n}/2$ from the Central Limit Theorem, to emphasize the similarity to and the difference from the Central Limit Theorem.
This means that with probability $1$, for any $\epsilon > 0$, only finitely many of the events
$$\frac{S_n - n/2}{(\sqrt{n}/2)\sqrt{2 \ln\ln n}} > 1 + \epsilon$$
occur; on the other hand, with probability $1$,
$$\frac{S_n - n/2}{(\sqrt{n}/2)\sqrt{2 \ln\ln n}} > 1 - \epsilon$$
occurs for infinitely many $n$.
For reasons of symmetry, for $\epsilon > 0$, the inequality
$$\frac{S_n - n/2}{(\sqrt{n}/2)\sqrt{2 \ln\ln n}} < -(1 + \epsilon)$$
can only occur for finitely many $n$; while
$$\frac{S_n - n/2}{(\sqrt{n}/2)\sqrt{2 \ln\ln n}} < -(1 - \epsilon)$$
must occur for infinitely many $n$. That is,
$$\limsup_{n \to \infty} \frac{S_n - n/2}{(\sqrt{n}/2)\sqrt{2 \ln\ln n}} = 1 \quad \text{and} \quad \liminf_{n \to \infty} \frac{S_n - n/2}{(\sqrt{n}/2)\sqrt{2 \ln\ln n}} = -1$$
with probability $1$.
Compare the Law of the Iterated Logarithm to the Central Limit Theorem. The Central Limit Theorem says that $(S_n - n/2)/(\sqrt{n}/2)$ is approximately distributed as a $N(0,1)$ random variable for large $n$. Therefore, for a large but fixed $n$, there is probability about $0.16$ that the values of $S_n$ can exceed the standard deviation, $S_n > n/2 + \sqrt{n}/2$. For a fixed but large $n$, with probability about $0.023$, $S_n$ can exceed twice the standard deviation, $S_n > n/2 + \sqrt{n}$. The Law of the Iterated Logarithm tells us the more precise information that there are infinitely many $n$ such that
$$S_n > \frac{n}{2} + (1 - \epsilon)\sqrt{2 \ln\ln n} \cdot \frac{\sqrt{n}}{2}$$
for any $\epsilon > 0$.
The Law of the Iterated Logarithm does not tell us how long we will have to wait between such repeated crossings, however, and the wait can be very, very long indeed, although it must (with probability $1$) eventually occur again. Moreover, the Law of the Iterated Logarithm tells us in addition that
$$S_n < \frac{n}{2} - (1 - \epsilon)\sqrt{2 \ln\ln n} \cdot \frac{\sqrt{n}}{2}$$
must occur for infinitely many $n$.
Khinchin's Law of the Iterated Logarithm also applies to the cumulative fortune in a coin-tossing game, or equivalently, the position in a random walk. Consider the independent Bernoulli random variables $X_i = +1$ with probability $1/2$ and $X_i = -1$ with probability $1/2$. The mean is $0$ and the variance is $1$ for each of the summands $X_i$. Then consider the sum $T_n = X_1 + X_2 + \cdots + X_n$ with mean $0$ and variance $n$. Since $X_i = 2Y_i - 1$, then $T_n = 2S_n - n$ and $S_n - n/2 = T_n/2$. Then applying the Law of the Iterated Logarithm says that with probability $1$, for any $\epsilon > 0$, only finitely many of the events
$$T_n > (1 + \epsilon)\sqrt{2 n \ln\ln n}$$
occur; on the other hand, with probability $1$,
$$T_n > (1 - \epsilon)\sqrt{2 n \ln\ln n}$$
occurs for infinitely many $n$. This means that the fortune must (with probability $1$) oscillate back and forth across the net zero axis infinitely often, crossing the upper and lower boundaries
$$\pm \sqrt{2 n \ln\ln n}.$$
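A short simulation suggests how rarely a walk touches these boundaries over a finite horizon (a sketch; the step count, seed, and helper names are illustrative choices of mine, and $\ln\ln n > 0$ requires $n \ge 3$):

```python
import math
import random

def lil_boundary(n):
    # the Law of the Iterated Logarithm boundary sqrt(2 n ln ln n);
    # ln ln n > 0 requires n >= 3
    return math.sqrt(2.0 * n * math.log(math.log(n)))

def count_boundary_touches(steps, seed=1):
    # simulate a +/-1 random walk T_n and count the times it
    # touches or crosses the curves +/- sqrt(2 n ln ln n)
    rng = random.Random(seed)
    t, touches = 0, 0
    for n in range(1, steps + 1):
        t += rng.choice((-1, 1))
        if n >= 3 and abs(t) >= lil_boundary(n):
            touches += 1
    return touches

touches = count_boundary_touches(10_000)
print(touches)
```

The count is typically small even over many steps; the theorem guarantees infinitely many touches of the $(1-\epsilon)$ boundary only in the infinite limit, with possibly very long waits between them.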
The statement puts some strength behind an understanding of the long-term swings back and forth in value of a random process. It also implies a form of recurrence, that is, a random walk must visit every integer value.
The Law of the Iterated Logarithm for Bernoulli trials stated here is a special case of an even more general theorem ﬁrst formulated by Kolmogorov in 1929. It is also possible to formulate even stronger and more general theorems! The proof here uses the Large Deviations and Moderate Deviations results with the Borel-Cantelli Lemmas. In another direction, the Law of the Iterated Logarithm can be proved using invariance theorems, so it is distantly related to the Central Limit Theorem.
Figure 1 gives an impression of the growth of the function $\sqrt{2 n \ln\ln n}$ in the Law of the Iterated Logarithm compared to the square root function $\sqrt{n}$. Figure 2 gives an impression of the Law of the Iterated Logarithm by showing a piecewise linearly connected graph of a random walk along with the boundary curves $(1 \pm \epsilon)\sqrt{2 n \ln\ln n}$. In this figure, the random walk must return again to cross the blue curves $(1 - \epsilon)\sqrt{2 n \ln\ln n}$ infinitely many times, but may only cross the red curve $(1 + \epsilon)\sqrt{2 n \ln\ln n}$ finitely many times. Of course, this is only a schematic impression, since a single random walk (possibly atypical, from the negligible set!) on a finite interval can only suggest the almost sure infinitely many crossings of $(1 - \epsilon)\sqrt{2 n \ln\ln n}$ for any $\epsilon > 0$.
Figure 3 gives a comparison of impressions of four of the limit theorems. The individual figures deliberately are "spaghetti graphs" to give an impression of the ensemble of sample paths. Each figure shows a different scaling of the same sample paths for a sequence of fair coin flips, each path a different color. Note that the steps axis has a logarithmic scale, meaning that the shape of the paths is distorted, although it still gives an impression of the random sums. The top left figure shows $S_n/n$ converging to $1/2$ for all paths, in accord with the Strong Law of Large Numbers. The top right figure plots the scaling $(S_n - n/2)/(\sqrt{n}/2)$. For large values of steps, the values over all paths form a distribution of roughly constant spread, consistent with the Central Limit Theorem. The lower left figure plots $(S_n - n/2)/n^{1/2+\epsilon}$ as an illustration of Hausdorff's Estimate. It appears that the scaled paths are very slowly converging to $0$. The lower right figure shows $(S_n - n/2)/((\sqrt{n}/2)\sqrt{2 \ln\ln n})$ along with lines at $\pm 1$ to suggest the conclusions of the Law of the Iterated Logarithm. It suggests that all paths are usually in the range $(-1, 1)$ but with each path making a few excursions outside the range.
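The four scalings in Figure 3 can be reproduced for a single simulated path (a sketch; the horizon, seed, checkpoints, and the choice $\epsilon = 0.1$ in the Hausdorff scaling are illustrative assumptions of mine):

```python
import math
import random

# one simulated path of fair coin flips, examined at a few checkpoints
rng = random.Random(11)
n_max = 10_000
checkpoints = (100, 1000, 10_000)
s = 0
scalings = {}
for n in range(1, n_max + 1):
    s += rng.randint(0, 1)          # S_n: running number of successes
    if n in checkpoints:
        c = s - n / 2               # centered sum S_n - n/2
        scalings[n] = {
            "slln": s / n,                      # Strong Law scaling, -> 1/2
            "clt": c / (math.sqrt(n) / 2),      # CLT scaling, ~ N(0,1)
            "hausdorff": c / n ** (0.5 + 0.1),  # Hausdorff scaling, -> 0
            "lil": c / ((math.sqrt(n) / 2) * math.sqrt(2 * math.log(math.log(n)))),
        }
print(scalings[10_000])
```

At the largest checkpoint the Strong Law scaling should be near $1/2$, while the Law of the Iterated Logarithm scaling should usually lie inside $(-1, 1)$.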
The proof resembles the proof of the Strong Law of Large Numbers for independent random variables with mean $0$ and uniformly bounded $4$th moments. That proof showed, using the independence and identical distribution assumptions, that
$$E[T_n^4] = n\,E[X_1^4] + 3n(n-1)\,E[X_1^2]^2 \le 3n^2.$$
Then, adapting the proof of the Markov and Chebyshev inequalities with fourth moments,
$$P[|T_n| > n\epsilon] \le \frac{E[T_n^4]}{n^4 \epsilon^4} \le \frac{3}{n^2 \epsilon^4}.$$
Use the Corollary that if $\sum_{n \ge 1} P[|Z_n| > \epsilon]$ converges for every $\epsilon > 0$, then the sequence $Z_n$ converges almost surely to $0$. By comparison with $\sum 1/n^2$, the series converges, so that $T_n/n \to 0$ a.s. Using the same set of ideas,
$$P[|T_n| > n^{\alpha}\epsilon] \le \frac{3 n^2}{n^{4\alpha} \epsilon^4},$$
which is the general term of a convergent series provided that $\alpha > 3/4$. Then using the same lemma, $T_n/n^{\alpha} \to 0$ almost surely for $\alpha > 3/4$, a simple version of Hausdorff's Estimate.
Now adapt this proof to higher moments. Let $k$ be a fixed positive integer. Recall the definition $T_n = X_1 + \cdots + X_n$ where $X_i = \pm 1$ each with probability $1/2$, and consider $E[T_n^{2k}]$. Expanding the product results in a sum of products of the individual random variables of the form $X_{i_1} X_{i_2} \cdots X_{i_{2k}}$. Each product results from a selection or mapping from the $2k$ indices to the set $\{1, \dots, n\}$. Note that if an index is selected only once, so that $X_j$ appears only once in the product, then $E[X_{i_1} X_{i_2} \cdots X_{i_{2k}}] = 0$ by independence. Further notice that $0 \le E[X_{i_1} X_{i_2} \cdots X_{i_{2k}}] \le 1$ for all sets of indices. Therefore
$$E[T_n^{2k}] \le f(k, n),$$
where $f(k, n)$ is the number of functions from $\{1, \dots, 2k\}$ to $\{1, \dots, n\}$ that take each value at least twice. Let $B_k$ be the number of partitions of $\{1, \dots, 2k\}$ into subsets each containing at least two elements. If $P$ is such a partition, then $P$ has at most $k$ elements. The number of functions that are constant on each element of $P$ is at most $n^k$. Thus, $f(k, n) \le B_k n^k$.
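The combinatorial bound $f(k, n) \le B_k n^k$ can be checked by brute force for small cases (a sketch; the function name is mine, and for $k = 2$ one counts $B_2 = 4$ partitions of a four-element set into blocks of size at least two: one block of four, or three ways to split into two pairs):

```python
from collections import Counter
from itertools import product

def f(k, n):
    # count the functions from {1, ..., 2k} to {1, ..., n} that take
    # each value they attain at least twice
    count = 0
    for func in product(range(n), repeat=2 * k):
        if all(mult >= 2 for mult in Counter(func).values()):
            count += 1
    return count

# For k = 2 there are B_2 = 4 admissible partitions of {1, 2, 3, 4},
# so the bound reads f(2, n) <= 4 n^2.
for n in range(1, 5):
    assert f(2, n) <= 4 * n ** 2
print(f(2, 2), f(2, 3))
```

The printed values are $f(2,2) = 8$ and $f(2,3) = 21$, comfortably below the bounds $16$ and $36$.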
Now let $\epsilon > 0$ and consider
$$P[|T_n| > n^{1/2 + \epsilon}] \le \frac{E[T_n^{2k}]}{n^{(1/2 + \epsilon) \cdot 2k}} \le \frac{B_k n^k}{n^{k + 2k\epsilon}} = \frac{B_k}{n^{2k\epsilon}}.$$
Choose $k$ so that $2k\epsilon > 1$. Then $\sum_{n \ge 1} P[|T_n| > n^{1/2+\epsilon}]$ converges.
Recall Corollary 2 appearing in the section on the Borel-Cantelli Lemma:

Let $(Z_n)$ be a sequence of random variables. If $\sum_{n \ge 1} P[|Z_n| > \epsilon]$ converges for every $\epsilon > 0$, then $Z_n$ converges to zero, almost surely.
By this corollary, the sequence of random variables
$$\frac{T_n}{n^{1/2 + \epsilon}} \to 0$$
almost surely as $n \to \infty$.

This means that for each $\epsilon > 0$, there is a negligible event (depending on $\epsilon$) outside of which $T_n/n^{1/2+\epsilon}$ converges to $0$. Now consider a countable set of values of $\epsilon$ tending to $0$. Since a countable union of negligible events is negligible, then for every $\epsilon > 0$, $T_n/n^{1/2+\epsilon}$ converges to $0$ almost surely. Since $S_n - n/2 = T_n/2$, Hausdorff's Estimate for the coin-tossing sums follows. □
Remark. The proof shows that $T_n/n^{\alpha} \to 0$ a.s. for every $\alpha > 1/2$.
Proof. The proof uses the Large Deviations Estimate as well as the Borel-Cantelli Lemma. Recall the Large Deviations Estimate says
$$P\left[\frac{T_n}{n} \ge \epsilon\right] \le e^{-n h(\epsilon)}, \qquad h(\epsilon) = \frac{1+\epsilon}{2}\ln(1+\epsilon) + \frac{1-\epsilon}{2}\ln(1-\epsilon).$$
Note that as $\epsilon \to 0$, $h(\epsilon)/(\epsilon^2/2) \to 1$; in fact $h(\epsilon) \ge \epsilon^2/2$ for $0 \le \epsilon < 1$. Fix $C \ge 1$ and take $\epsilon_n = 2C\sqrt{(\ln n)/n}$ and note that $\epsilon_n \to 0$, since $(\ln n)/n \to 0$. Thus, the probability is less than or equal to the following:
$$P\left[T_n \ge 2C\sqrt{n \ln n}\right] = P\left[\frac{T_n}{n} \ge \epsilon_n\right] \le e^{-n\epsilon_n^2/2} = n^{-2C^2}.$$
Hence $P[|S_n - n/2| \ge C\sqrt{n \ln n}] = P[|T_n| \ge 2C\sqrt{n \ln n}] \le 2n^{-2C^2}$, and $\sum_{n \ge 1} 2n^{-2C^2}$ is convergent because of the following inequalities: $2C^2 \ge 2 > 1$, so the series is dominated by a convergent $p$-series. By the first Borel-Cantelli Lemma, almost surely only finitely many of these events occur, which is Hardy and Littlewood's estimate. □
First establish two lemmas, and for convenience, let
$$L(n) = \sqrt{2 n \ln\ln n}.$$

Lemma 5. Let $(x_n)$ be a sequence with $x_n \to \infty$ and $x_n = o(n^{1/6})$. Then for every $\delta > 0$ and all sufficiently large $n$,
$$e^{-(1+\delta) x_n^2/2} \le P[T_n \ge x_n \sqrt{n}] \le e^{-(1-\delta) x_n^2/2}.$$

Lemma 6. For every real $a$,
$$P\left[\max_{0 \le k \le n} T_k \ge a\right] \le 2\, P\left[T_n \ge a - \sqrt{2n}\right].$$
Proof of Lemma 5. Recall that the Large Deviations Estimate gives
$$P[T_n \ge n\epsilon] \le e^{-n h(\epsilon)}, \qquad h(\epsilon) = \frac{1+\epsilon}{2}\ln(1+\epsilon) + \frac{1-\epsilon}{2}\ln(1-\epsilon).$$
Note that $h(\epsilon)/(\epsilon^2/2) \to 1$ as $\epsilon \to 0$. Then, applying the estimate with $\epsilon = x_n/\sqrt{n} \to 0$,
$$P[T_n \ge x_n\sqrt{n}] \le e^{-n\, h(x_n/\sqrt{n})} \le e^{-(1-\delta) x_n^2/2}$$
for large enough $n$. This means that the upper bound holds. Since $x_n \to \infty$ with $x_n = o(n^{1/6})$, the results of the Moderate Deviations Theorem apply to give
$$P[T_n \ge x_n\sqrt{n}] \ge e^{-(1+\delta) x_n^2/2}$$
for large enough $n$. □
Remark. Lemma 6 is an example of a class of lemmas called maximal inequalities; two other well-known examples are Kolmogorov's maximal inequality and Doob's martingale maximal inequality.
Proof of Lemma 6. Let $\tau$ be the first index $k$ with $T_k \ge a$, so that
$$P\left[\max_{0 \le k \le n} T_k \ge a\right] = \sum_{k=0}^{n} P[\tau = k].$$
Since the $X_i$'s are independent, then $T_n - T_k$ is independent of the event $[\tau = k]$ for $k \le n$. Chebyshev's Inequality tells us that
$$P\left[|T_n - T_k| > \sqrt{2n}\right] \le \frac{\operatorname{Var}(T_n - T_k)}{2n} = \frac{n-k}{2n} \le \frac{1}{2}.$$
Hence
$$P\left[T_n \ge a - \sqrt{2n}\right] \ge \sum_{k=0}^{n} P[\tau = k]\, P\left[T_n - T_k \ge -\sqrt{2n}\right] \ge \frac{1}{2} \sum_{k=0}^{n} P[\tau = k],$$
which rearranges to the statement of Lemma 6. □
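A quick simulation illustrates the inequality of Lemma 6 empirically (a sketch; the parameters $n = 100$, $a = 15$, the trial count, and the seed are arbitrary choices of mine):

```python
import math
import random

def empirical_maximal_inequality(n, a, trials=20000, seed=3):
    # compare P[max_{k<=n} T_k >= a] against 2 P[T_n >= a - sqrt(2n)]
    # for a +/-1 random walk T_k, by simulation
    rng = random.Random(seed)
    max_hits, end_hits = 0, 0
    threshold = a - math.sqrt(2 * n)
    for _ in range(trials):
        t, running_max = 0, 0
        for _ in range(n):
            t += rng.choice((-1, 1))
            running_max = max(running_max, t)
        max_hits += running_max >= a
        end_hits += t >= threshold
    return max_hits / trials, 2 * end_hits / trials

lhs, rhs = empirical_maximal_inequality(n=100, a=15)
print(lhs, rhs)  # Lemma 6 predicts lhs <= rhs, up to sampling error
```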
Remark. Note that the second part of the Law of the Iterated Logarithm follows by symmetry from the first part by replacing $X_i$ with $-X_i$ and $T_n$ with $-T_n$.
Remark. The proof of the Law of the Iterated Logarithm proceeds in two parts. First it suffices to show that, almost surely, for every $\epsilon > 0$,
$$T_n \le (1+\epsilon)\sqrt{2 n \ln\ln n} \quad \text{for all sufficiently large } n. \tag{1}$$
The second part of the proof is to establish that, almost surely, for every $\epsilon > 0$,
$$T_n \ge (1-\epsilon)\sqrt{2 n \ln\ln n} \quad \text{for infinitely many } n. \tag{2}$$
It will only take a subsequence to prove (2). However, this will not be easy, because it will use the second Borel-Cantelli Lemma, which requires independence.
Remark. The following is a simplified proof giving a partial result for integer sequences with exponential growth. This simplified proof illustrates the basic ideas of the full proof. Fix $\epsilon > 0$ and $\gamma > 1$, and let $n_k = \lfloor \gamma^k \rfloor$. Then, using the Large Deviations Estimate in the form $P[T_n \ge a] \le e^{-a^2/(2n)}$,
$$P\left[T_{n_k} > (1+\epsilon)\sqrt{2 n_k \ln\ln n_k}\right] \le e^{-(1+\epsilon)^2 \ln\ln n_k} = (\ln n_k)^{-(1+\epsilon)^2}.$$
For $k \ge 2$ this is at most $((k-1)\ln\gamma)^{-(1+\epsilon)^2}$, the general term of a convergent series, since $(1+\epsilon)^2 > 1$. By the first Borel-Cantelli lemma,
$$P\left[T_{n_k} > (1+\epsilon)\sqrt{2 n_k \ln\ln n_k} \text{ for infinitely many } k\right] = 0,$$
that is, along the subsequence $(n_k)$, almost surely $T_{n_k} \le (1+\epsilon)\sqrt{2 n_k \ln\ln n_k}$ for all large $k$.
The full proof of the Law of the Iterated Logarithm takes more work to complete.
Proof of (1) in the Law of the Iterated Logarithm.
Fix $\epsilon > 0$ and let $\gamma > 1$ be a constant chosen later; write $L(n) = \sqrt{2 n \ln\ln n}$. For $k \ge 1$, let $n_k = \lfloor \gamma^k \rfloor$. The proof consists of showing that almost surely only finitely many of the events
$$\left[\max_{0 \le n \le n_{k+1}} T_n > (1+\epsilon) L(n_k)\right]$$
occur. From Lemma 6,
$$P\left[\max_{0 \le n \le n_{k+1}} T_n > (1+\epsilon) L(n_k)\right] \le 2\, P\left[T_{n_{k+1}} > (1+\epsilon) L(n_k) - \sqrt{2 n_{k+1}}\right], \tag{3}$$
where the correction term $\sqrt{2 n_{k+1}}$ is negligible compared to $L(n_k)$. We do know that
$$\frac{\sqrt{2 n_{k+1}}}{L(n_k)} \to 0 \quad \text{as } k \to \infty,$$
because this is approximately
$$\sqrt{\frac{\gamma}{\ln\ln n_k}},$$
which is the same as saying $\sqrt{2 n_{k+1}} = o(L(n_k))$. Then
$$(1+\epsilon) L(n_k) - \sqrt{2 n_{k+1}} \ge \left(1 + \tfrac{\epsilon}{2}\right) L(n_k)$$
for large enough $k$. Using this inequality in the right hand side of Equation (3), we get
$$P\left[\max_{0 \le n \le n_{k+1}} T_n > (1+\epsilon) L(n_k)\right] \le 2\, P\left[T_{n_{k+1}} > x_k \sqrt{n_{k+1}}\right], \qquad x_k = \left(1+\tfrac{\epsilon}{2}\right)\frac{L(n_k)}{\sqrt{n_{k+1}}}.$$
Now, $x_k^2 = (1+\epsilon/2)^2 \,(n_k/n_{k+1})\, 2\ln\ln n_k$ and $n_k/n_{k+1} \to 1/\gamma$. Choose $\gamma$ with $1 < \gamma < (1+\epsilon/2)^2$ and then $\delta > 0$ small enough so that $\beta = (1-\delta)(1+\epsilon/2)^2/\gamma > 1$. Then for large enough $k$, we have $x_k \to \infty$ slowly, certainly with $x_k = o(n_{k+1}^{1/6})$. Use Lemma 5 with the sequence $x_k$. Then we get
$$P\left[T_{n_{k+1}} > x_k \sqrt{n_{k+1}}\right] \le e^{-(1-\delta) x_k^2/2} \le (\ln n_k)^{-\beta}$$
for $k$ large. Note that
$$(\ln n_k)^{-\beta} \le \left((k-1)\ln\gamma\right)^{-\beta},$$
which is the general term of a convergent series, so
$$\sum_k P\left[\max_{0 \le n \le n_{k+1}} T_n > (1+\epsilon) L(n_k)\right] < \infty.$$
Then the first Borel-Cantelli Lemma implies that almost surely only finitely many of these events occur. Then in particular, almost surely there is a $K$ such that
$$\max_{0 \le n \le n_{k+1}} T_n \le (1+\epsilon) L(n_k) \quad \text{for all } k \ge K.$$
This in turn implies that almost surely
$$T_n \le (1+\epsilon) L(n_k) \le (1+\epsilon) L(n)$$
for $n_k < n \le n_{k+1}$ and $k$ large enough, since $L$ is increasing, which establishes (1). □
Proof of (2) in the Law of the Iterated Logarithm. Continue with the proof of Equation (2); write $L(n) = \sqrt{2 n \ln\ln n}$. To prove the second part, it suffices to show that there exists $\gamma$ so that $T_{n_k} \ge (1-\epsilon) L(n_k)$ i.o. almost surely. Let $n_k = \gamma^k$ for an integer $\gamma$ chosen later, sufficiently large, and set $m_k = n_k - n_{k-1}$ and $D_k = T_{n_k} - T_{n_{k-1}}$, the increment of the walk over the $k$th block. The proof will show
$$\sum_k P\left[D_k \ge \left(1 - \tfrac{\epsilon}{2}\right) L(n_k)\right] = \infty. \tag{4}$$
Note that $D_k$ has the same distribution as $T_{m_k}$. It suffices to consider
$$P\left[T_{m_k} \ge x_k \sqrt{m_k}\right], \qquad x_k = \left(1 - \tfrac{\epsilon}{2}\right)\frac{L(n_k)}{\sqrt{m_k}}, \qquad x_k^2 = \left(1-\tfrac{\epsilon}{2}\right)^2 \frac{\gamma}{\gamma - 1}\, 2\ln\ln n_k.$$
Choose $\delta > 0$ small. Then note that we can choose $\gamma$ large enough so that
$$\beta = (1+\delta)\left(1 - \tfrac{\epsilon}{2}\right)^2 \frac{\gamma}{\gamma - 1} < 1.$$
Now considering equation (4), and the inequality above, $x_k \to \infty$ slowly, with $x_k = o(m_k^{1/6})$. Now using Lemma 5, with the sequence $x_k$, we get
$$P\left[T_{m_k} \ge x_k \sqrt{m_k}\right] \ge e^{-(1+\delta) x_k^2/2} = (\ln n_k)^{-\beta} = (k \ln\gamma)^{-\beta}.$$
The series with this as its terms diverges, since $\beta < 1$. Thus, we see that Equation (4) has been proven.

Now notice that part (1), applied to the random walk $-T_n$, shows that almost surely $-T_{n_{k-1}} \le 2 L(n_{k-1})$ for all large $k$, which means that
$$T_{n_{k-1}} \ge -2 L(n_{k-1}) \quad \text{for all large } k. \tag{5}$$
Thus, $L(n_{k-1}) \le L(n_k)/\sqrt{\gamma}$, since $L$ is increasing in the logarithmic factor. Now choose $\gamma$ also large enough so that $2/\sqrt{\gamma} \le \epsilon/2$. Then $T_{n_{k-1}} \ge -(\epsilon/2) L(n_k)$ for large enough $k$.

Thus, we have
$$T_{n_k} = D_k + T_{n_{k-1}} \ge \left(1 - \tfrac{\epsilon}{2}\right) L(n_k) - \tfrac{\epsilon}{2} L(n_k) = (1-\epsilon) L(n_k)$$
whenever $D_k \ge (1-\epsilon/2) L(n_k)$ and $k$ is large enough. Now $(D_k)$ is a sequence of independent random variables, and so, using (4), the second Borel-Cantelli Lemma says that almost surely
$$D_k \ge \left(1 - \tfrac{\epsilon}{2}\right) L(n_k) \quad \text{for infinitely many } k.$$
Adding this with Equation (5), we get that almost surely
$$T_{n_k} \ge (1-\epsilon) L(n_k) \quad \text{for infinitely many } k.$$
This is enough to show that
$$\limsup_{n \to \infty} \frac{T_n}{L(n)} \ge 1 - \epsilon \quad \text{almost surely, for every } \epsilon > 0,$$
which is enough to show the only remaining part of Khinchin's Law of the Iterated Logarithm. □
This section is adapted from: W. Feller, An Introduction to Probability Theory and Its Applications, Volume I, Chapter III and Chapter VIII; and also E. Lesigne, Heads or Tails: An Introduction to Limit Theorems in Probability, Chapter 12, American Mathematical Society, Student Mathematical Library, Volume 28, 2005. Some of the ideas in the proof of Hausdorff's Estimate are adapted from J. Lamperti, Probability: A Survey of the Mathematical Theory, Second Edition, Chapter 8. Figure 3 is a recreation of a figure in the Wikipedia article on the Law of the Iterated Logarithm.
R script for comparison figures.
Consider the sequence defined using the floor function. Here $\lfloor x \rfloor$ is the "floor function", the greatest integer less than or equal to $x$, so $\lfloor 1 \rfloor = 1$, $\lfloor 1.5 \rfloor = 1$, $\lfloor 2.7 \rfloor = 2$, $\lfloor \pi \rfloor = 3$, etc. Find the first several terms of the sequence. Does the sequence have a limit?
Solutions to Problems.
Steve Dunbar’s Home Page, http://www.math.unl.edu/~sdunbar1
Email to Steve Dunbar, sdunbar1 at unl dot edu
Last modified: Processed from LaTeX source on January 12, 2018