Steven R. Dunbar
Department of Mathematics
203 Avery Hall
University of Nebraska-Lincoln
Lincoln, NE 68588-0130
http://www.math.unl.edu
Voice: 402-472-3731
Fax: 402-472-8466

Topics in
Probability Theory and Stochastic Processes
Steven R. Dunbar

__________________________________________________________________________

Law of the Iterated Logarithm

_______________________________________________________________________

Note: These pages are prepared with MathJax. MathJax is an open source JavaScript display engine for mathematics that works in all browsers. See http://mathjax.org for details on supported browsers, accessibility, copy-and-paste, and other features.

_______________________________________________________________________________________________

Rating

Mathematicians Only: prolonged scenes of intense rigor.

_______________________________________________________________________________________________

Section Starter Question

__________________________________________________________________________

Key Concepts

  1. The Law of the Iterated Logarithm tells very precisely how far the number of successes in a coin-tossing game will make excursions from the average value.
  2. The Law of the Iterated Logarithm is a high point among increasingly precise limit theorems characterizing how far the number of successes in a coin-tossing game will make excursions from the average value. The theorems progress from the Strong Law of Large Numbers and the Central Limit Theorem, through Hausdorff’s Estimate and the Hardy-Littlewood Estimate, to the Law of the Iterated Logarithm.
  3. Khinchin’s Law of the Iterated Logarithm says: Almost surely, for all \(\epsilon > 0\), there exist infinitely many n such that

     \[ S_n - np > (1-\epsilon)\sqrt{2np(1-p)\ln(\ln n)} \]

     and furthermore, almost surely, for all \(\epsilon > 0\), for every n larger than a threshold value N,

     \[ S_n - np < (1+\epsilon)\sqrt{2np(1-p)\ln(\ln n)}. \]

__________________________________________________________________________

Vocabulary

  1. The limsup, an abbreviation for limit superior, is a refined and generalized notion of limit, being the largest dependent-variable subsequence limit. That is, among all subsequences of independent-variable values tending to some independent-variable value, usually infinity, there will be a corresponding dependent-variable subsequence. Some of these dependent-variable sequences will have limits, and among all these, the largest is the limsup.
  2. The liminf, an abbreviation for limit inferior, is analogous; it is the least of all dependent-variable subsequence limits.
  3. Khinchin’s Law of the Iterated Logarithm says: Almost surely, for all \(\epsilon > 0\), there exist infinitely many n such that

     \[ S_n - np > (1-\epsilon)\sqrt{2np(1-p)\ln(\ln n)} \]

     and furthermore, almost surely, for all \(\epsilon > 0\), for every n larger than a threshold value N,

     \[ S_n - np < (1+\epsilon)\sqrt{2np(1-p)\ln(\ln n)}. \]

__________________________________________________________________________

Mathematical Ideas

Overview

We again consider the number of successes in a coin-tossing game. That is, we consider the sum \(S_n\) where the independent, identically distributed random variables in the sum \(S_n = X_1 + \cdots + X_n\) are the Bernoulli random variables \(X_i = +1\) with probability p and \(X_i = 0\) with probability \(q = 1-p\). Note that the mean is \(\mu = p\) and the variance is \(\sigma^2 = p(1-p)\) for each of the summands \(X_i\).
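
A minimal R sketch of this setup may help fix the notation (the values p = 0.6 and n = 10 here are arbitrary illustrative choices, not from the text):

p <- 0.6                         # arbitrary choice of p for illustration
n <- 10
X <- as.integer(runif(n) <= p)   # X_i = +1 with probability p, 0 with probability 1-p
S <- cumsum(X)                   # the sums S_1, ..., S_n
c(mean = p, variance = p*(1-p))  # mean and variance of each summand X_i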

The Strong Law of Large Numbers says that

\[ \lim_{n \to \infty} \frac{S_n - np}{n} = 0 \]

with probability 1 in the sample space of all possible coin flips. This says the denominator n is “too strong”: it “condenses out” all variation in the sum \(S_n\).

The Central Limit Theorem applied to this sequence of coin flips says

\[ \lim_{n \to \infty} \frac{S_n - np}{\sqrt{np(1-p)}} = Z \]

where \(Z \sim N(0,1)\) is a normal random variable and the limit is interpreted as convergence in distribution. In fact, this implies that for large n about 68% of the points in the sample space of all possible coin flips satisfy

\[ \left| \frac{S_n - np}{\sqrt{np(1-p)}} \right| \le 1 \]

and about 95% of the points in the sample space of all possible coin flips satisfy

\[ \left| \frac{S_n - np}{\sqrt{np(1-p)}} \right| \le 2. \]

This says the denominator \(\sqrt{n}\) is “too weak”: it doesn’t condense out enough information. In fact, using the Kolmogorov zero-one law and the Central Limit Theorem, almost surely

\[ \liminf_{n \to \infty} \frac{S_n - np}{\sqrt{np(1-p)}} = -\infty \]

and almost surely

\[ \limsup_{n \to \infty} \frac{S_n - np}{\sqrt{np(1-p)}} = +\infty. \]

The Strong Law and the Central Limit Theorem together suggest that “somewhere in between \(\sqrt{n}\) and n” we might be able to make stronger statements about convergence and the variation in the sequence \(S_n\).

In fact, Hausdorff’s estimate tells us:

\[ \lim_{n \to \infty} \frac{S_n - np}{n^{1/2+\epsilon}} = 0 \]

with probability 1 in the sample space of all possible coin flips, for all values of \(\epsilon > 0\). This says the denominator \(n^{1/2+\epsilon}\) is “still too strong”: it condenses out too much information.

Even better, Hardy and Littlewood’s estimate tells us:

\[ \limsup_{n \to \infty} \frac{S_n - np}{\sqrt{n \ln n}} \le \text{constant} \]

with probability 1 in the sample space of all possible coin flips. In a way, this says \(\sqrt{n \ln n}\) is “still a little too strong”: it condenses out most information.

Khinchin’s Law of the Iterated Logarithm has a denominator that is “just right.” It tells us very precisely how the deviations of the sums from the mean vary with n. Using a method due to Erdös, it is possible to refine the law even more, but for these notes a refinement is probably past the point of diminishing returns.

Like the Central Limit Theorem, the Law of the Iterated Logarithm illustrates in an astonishingly precise way that even completely random sequences obey precise mathematical laws.

Khinchin’s Law of the Iterated Logarithm says that:

Almost surely, for all \(\epsilon > 0\), there exist infinitely many n such that

\[ S_n - np > (1-\epsilon)\sqrt{np(1-p)}\,\sqrt{2\ln(\ln n)} \]

and furthermore, almost surely, for all \(\epsilon > 0\), for every n larger than a threshold value N depending on \(\epsilon\),

\[ S_n - np < (1+\epsilon)\sqrt{np(1-p)}\,\sqrt{2\ln(\ln n)}. \]

These appear in a slightly non-standard way, with the additional factor \(\sqrt{2\ln\ln n}\) times the standard deviation \(\sqrt{np(1-p)}\) from the Central Limit Theorem, to emphasize the similarity to and the difference from the Central Limit Theorem.

Theorem 1 (Law of the Iterated Logarithm). With probability 1:

\[ \limsup_{n \to \infty} \frac{S_n - np}{\sqrt{2np(1-p)\ln(\ln n)}} = 1. \]

This means that with probability 1, for any \(\epsilon > 0\), only finitely many of the events

\[ S_n - np > (1+\epsilon)\sqrt{2np(1-p)\ln(\ln n)} \]

occur; on the other hand, with probability 1,

\[ S_n - np > (1-\epsilon)\sqrt{2np(1-p)\ln(\ln n)} \]

occurs for infinitely many n.

For reasons of symmetry, for \(\epsilon > 0\), the inequality

\[ S_n - np < -(1+\epsilon)\sqrt{2np(1-p)\ln(\ln n)} \]

can only occur for finitely many n; while

\[ S_n - np < -(1-\epsilon)\sqrt{2np(1-p)\ln(\ln n)} \]

must occur for infinitely many n. That is,

\[ \liminf_{n \to \infty} \frac{S_n - np}{\sqrt{2np(1-p)\ln(\ln n)}} = -1 \]

with probability 1.

Compare the Law of the Iterated Logarithm to the Central Limit Theorem. The Central Limit Theorem says that \((S_n - np)/\sqrt{np(1-p)}\) is approximately distributed as a \(N(0,1)\) random variable for large n. Therefore, for a large but fixed n, there is probability about 1/6 that the values of \((S_n - np)/\sqrt{np(1-p)}\) can exceed the standard deviation, that is, \(S_n - np > \sqrt{np(1-p)}\). For a fixed but large n, with probability about 0.025, \((S_n - np)/\sqrt{np(1-p)}\) can exceed twice the standard deviation, that is, \(S_n - np > 2\sqrt{np(1-p)}\). The Law of the Iterated Logarithm tells us the more precise information that there are infinitely many n such that

\[ S_n - np > (1-\epsilon)\sqrt{2np(1-p)\ln(\ln n)} \]

for any \(\epsilon > 0\). However, the Law of the Iterated Logarithm does not tell us how long we will have to wait between such repeated crossings, and the wait can be very, very long indeed, although it must (with probability 1) eventually occur again. Moreover, the Law of the Iterated Logarithm tells us in addition that

\[ S_n - np < -(1-\epsilon)\sqrt{2np(1-p)\ln(\ln n)} \]

must occur for infinitely many n.
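
A quick numerical check of the two normal-tail probabilities quoted above, using R’s standard normal distribution function:

1 - pnorm(1)   # about 0.1587, roughly 1/6
1 - pnorm(2)   # about 0.0228, roughly 0.025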

Khinchin’s Law of the Iterated Logarithm also applies to the cumulative fortune in a coin-tossing game, or equivalently, the position in a random walk. Consider the independent Bernoulli random variables \(Y_i = +1\) with probability p and \(Y_i = -1\) with probability \(q = 1-p\). The mean is \(\mu = 2p-1\) and the variance is \(\sigma^2 = 4p(1-p)\) for each of the summands \(Y_i\). Then consider the sum \(T_n = Y_1 + \cdots + Y_n\) with mean \((2p-1)n\) and variance \(4np(1-p)\). Since \(Y_n = 2X_n - 1\), then \(T_n = 2S_n - n\) and \(S_n = \frac{1}{2}T_n + \frac{n}{2}\). Then applying the Law of the Iterated Logarithm says that with probability 1, for any \(\epsilon > 0\), only finitely many of the events

\[ |T_n - n(2p-1)| > (1+\epsilon)\,2\sqrt{2np(1-p)\ln(\ln n)} \]

occur; on the other hand, with probability 1,

\[ |T_n - n(2p-1)| > (1-\epsilon)\,2\sqrt{2np(1-p)\ln(\ln n)} \]

occurs for infinitely many n. This means that the fortune must (with probability 1) oscillate back and forth across the net zero axis infinitely often, crossing the upper and lower boundaries

\[ \pm(1-\epsilon)\,2\sqrt{2p(1-p)\,n\ln(\ln n)}. \]

The statement puts some strength behind an understanding of the long-term swings back and forth in value of a random process. It also implies a form of recurrence, that is, a random walk must visit every integer value.
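
The following R sketch illustrates this oscillation for a fair game (the number of steps, the seed, and the plotting choices are arbitrary; the boundaries are drawn with \(\epsilon = 0\)):

set.seed(1)                        # arbitrary seed, for reproducibility
p <- 0.5
n <- 10000
Y <- ifelse(runif(n) <= p, 1, -1)  # steps Y_i = +1 or -1
Tn <- cumsum(Y)                    # fortune T_n; the drift is zero since p = 1/2
steps <- 3:n                       # start at n = 3 so that log(log(n)) is defined
bound <- 2*sqrt(2*p*(1-p)*steps*log(log(steps)))
plot(steps, Tn[steps], type="l", xlab="n", ylab="fortune")
lines(steps, bound, col="red")     # upper boundary
lines(steps, -bound, col="red")    # lower boundary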

The Law of the Iterated Logarithm for Bernoulli trials stated here is a special case of an even more general theorem first formulated by Kolmogorov in 1929. It is also possible to formulate even stronger and more general theorems! The proof here uses the Large Deviations and Moderate Deviations results with the Borel-Cantelli Lemmas. In another direction, the Law of the Iterated Logarithm can be proved using invariance theorems, so it is distantly related to the Central Limit Theorem.

Figure 1 gives an impression of the growth of the function in the Law of the Iterated Logarithm compared to the square root function. Figure 2 gives an impression of the Law of the Iterated Logarithm by showing a piecewise linearly connected graph of 2000 steps of \(S_n - np\) with \(p = q = 1/2\). In this figure, the random walk must return again to cross the blue curves with \(1-\epsilon = 0.9\) infinitely many times, but may only cross the red curve with \(1+\epsilon = 1.1\) finitely many times. Of course, this is only a schematic impression, since a single random walk (possibly atypical, from the negligible set!) on the finite interval \(0 \le n \le 2000\) can only suggest the almost sure infinitely many crossings of \((1-\epsilon)\alpha(x)\) for any \(\epsilon > 0\).


Iterated Logarithm Function

Figure 1: The Iterated Logarithm function \(\alpha(x) = \sqrt{2p(1-p)x\ln(\ln x)}\) in green, along with the functions \((1+\epsilon)\alpha(x)\) in red and \((1-\epsilon)\alpha(x)\) in blue, with \(\epsilon = 0.1\). For comparison, the square root function \(\sqrt{2p(1-p)x}\) is in black.


Impression of the Law of Iterated Logarithm

Figure 2: An impression of the Law of the Iterated Logarithm using a piecewise linearly connected graph of 2000 steps of \(S_n - np\) with \(p = q = 1/2\), with the blue curve \((1-\epsilon)\alpha(x)\) and the red curve \((1+\epsilon)\alpha(x)\) for \(\epsilon = 0.1\) and \(\alpha(x) = \sqrt{2p(1-p)x\ln(\ln x)}\).

Figure 3 gives a comparison of impressions of four of the limit theorems. The individual figures deliberately are “spaghetti graphs” to give an impression of the ensemble of sample paths. Each figure shows a different scaling of the same 15 sample paths for a sequence of 100,000 fair coin flips, each path a different color. Note that the steps axis has a logarithmic scale, meaning that the shape of the paths is distorted, although it still gives an impression of the random sums. The top left figure shows \(S_n/n - p\) converging to 0 for all paths, in accord with the Strong Law of Large Numbers. The top right figure plots the scaling \((S_n - np)/\sqrt{2p(1-p)n}\). For large values of steps, the values over all paths form a distribution ranging from about \(-2\) to 2, consistent with the Central Limit Theorem. The lower left figure plots \((S_n - np)/n^{0.6}\) as an illustration of Hausdorff’s Estimate with \(\epsilon = 0.1\). It appears that the scaled paths are very slowly converging to 0; the range for \(n = 100{,}000\) is within \([-0.5, 0.5]\). The lower right figure shows \((S_n - np)/\sqrt{2p(1-p)n\ln(\ln n)}\) along with the lines \(\pm 1\) to suggest the conclusions of the Law of the Iterated Logarithm. It suggests that all paths are usually in the range \([-1, 1]\), but with each path making a few excursions outside the range.


Comparison of four limit theorems


Figure 3: A comparison of four of the limit theorems. The individual figures deliberately are “spaghetti graphs” to give an impression of the ensemble of sample paths. Each figure shows a different scaling of the same 15 sample paths for a sequence of 100,000 fair coin flips, each path a different color. Note that the steps axis has a logarithmic scale, meaning that the shape of the sample paths is distorted.

Hausdorff’s Estimate

Theorem 2 (Hausdorff’s Estimate). Almost surely, for any \(\epsilon > 0\),

\[ S_n - np = o\left(n^{1/2+\epsilon}\right) \]

as \(n \to \infty\).

Proof.

The proof resembles the proof of the Strong Law of Large Numbers for independent random variables with mean 0 and uniformly bounded 4th moments. That proof showed, using the independence and identical distribution assumptions, that

\[ \mathbb{E}\left[S_n^4\right] = n\,\mathbb{E}\left[X_1^4\right] + 3n(n-1)\left(\mathbb{E}\left[X_1^2\right]\right)^2 \le nC + 3n^2\sigma^4 \le C_1 n^2. \]

Then, adapting the proof of the Markov and Chebyshev inequalities to fourth moments,

\[ \mathbb{E}\left[\frac{|S_n|^4}{n^4}\right] \le \frac{C_1 n^2}{n^4} = \frac{C_1}{n^2}. \]

Use the Corollary that if \(\sum_{n=1}^{\infty}\mathbb{E}\left[|X_n|\right]\) converges, then the sequence \((X_n)_{n \ge 1}\) converges almost surely to 0. By comparison, \(\sum_{n=1}^{\infty}\mathbb{E}\left[|S_n|^4/n^4\right]\) converges, so that \(S_n/n \to 0\) a.s. Using the same set of ideas,

\[ \mathbb{E}\left[\frac{|S_n|^4}{n^{4\alpha}}\right] \le \frac{C_1 n^2}{n^{4\alpha}}, \]

which is summable provided that \(\alpha > 3/4\). Then, using the same corollary, \(S_n/n^{\alpha} \to 0\) a.s. for \(\alpha > 3/4\), a simple version of Hausdorff’s Estimate.
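
As a sanity check on the fourth-moment bound (a simulation sketch, not part of the proof; the sample sizes and replication count are arbitrary), the ratio \(\mathbb{E}\left[(S_n - np)^4\right]/n^2\) should remain bounded as n grows:

set.seed(3)                      # arbitrary seed
p <- 0.5
reps <- 10000                    # arbitrary number of replications
for (n in c(100, 400, 1600)) {
  R <- replicate(reps, sum(runif(n) <= p) - n*p)      # centered sums R_n
  cat("n =", n, " E[R^4]/n^2 =", mean(R^4)/n^2, "\n") # roughly constant in n
}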

Now adapt this proof to higher moments. Let k be a fixed positive integer. Recall the definition \(R_n(\omega) = S_n(\omega) - np = \sum_{j=1}^{n}(X_j - p) = \sum_{j=1}^{n}\tilde{X}_j\), where \(\tilde{X}_j = X_j - p\), and consider \(\mathbb{E}\left[R_n^{2k}\right]\). Expanding the power \(R_n^{2k}\) results in a sum of products of the individual random variables \(\tilde{X}_j\) of the form \(\tilde{X}_{i_1}\tilde{X}_{i_2}\cdots\tilde{X}_{i_{2k}}\). Each product \(\tilde{X}_{i_1}\tilde{X}_{i_2}\cdots\tilde{X}_{i_{2k}}\) results from a selection or mapping from the indices \(\{1,2,\dots,2k\}\) to the set \(\{1,\dots,n\}\). Note that if an index \(j \in \{1,2,\dots,n\}\) is selected only once, so that \(\tilde{X}_j\) appears exactly once in the product, then \(\mathbb{E}\left[\tilde{X}_{i_1}\tilde{X}_{i_2}\cdots\tilde{X}_{i_{2k}}\right] = 0\) by independence, since \(\mathbb{E}\left[\tilde{X}_j\right] = 0\). Further notice that for all sets of indices

\[ \left|\mathbb{E}\left[\tilde{X}_{i_1}\tilde{X}_{i_2}\cdots\tilde{X}_{i_{2k}}\right]\right| \le 1. \]

Thus

\[ \mathbb{E}\left[R_n^{2k}\right] = \sum_{1 \le i_1,\dots,i_{2k} \le n} \mathbb{E}\left[\tilde{X}_{i_1}\tilde{X}_{i_2}\cdots\tilde{X}_{i_{2k}}\right] \le N(k,n), \]

where \(N(k,n)\) is the number of functions from \(\{1,\dots,2k\}\) to \(\{1,\dots,n\}\) that take each value at least twice. Let \(M(k)\) be the number of partitions of \(\{1,\dots,2k\}\) into subsets each containing at least two elements. If P is such a partition, then P has at most k elements, and the number of functions constant on each element of P is at most \(n^k\). Thus \(N(k,n) \le n^k M(k)\). (For example, when \(k = 1\), the only such partition is \(\{\{1,2\}\}\), so \(M(1) = 1\) and \(\mathbb{E}\left[R_n^2\right] \le n\).)

Now let \(\epsilon > 0\) and consider

\[ \mathbb{E}\left[\left(n^{-\epsilon-\frac{1}{2}} R_n\right)^{2k}\right] \le n^{-2k\epsilon-k}\,N(k,n) \le n^{-2k\epsilon}\,M(k). \]

Choose \(k > \frac{1}{2\epsilon}\), so that \(2k\epsilon > 1\). Then

\[ \sum_{n \ge 1} \mathbb{E}\left[\left(n^{-\epsilon-\frac{1}{2}} R_n\right)^{2k}\right] < \infty. \]

Recall Corollary 2, appearing in the section on the Borel-Cantelli Lemma:

Let \((X_n)_{n \ge 0}\) be a sequence of random variables. If \(\sum_{n=1}^{\infty}\mathbb{E}\left[|X_n|\right]\) converges, then \(X_n\) converges to zero, almost surely.

By this corollary, applied to the random variables \(\left(n^{-\epsilon-\frac{1}{2}} R_n\right)^{2k}\), the sequence of random variables

\[ n^{-\epsilon-\frac{1}{2}} R_n \to 0 \]

almost surely as \(n \to \infty\).

This means that for each \(\epsilon > 0\), there is a negligible event (depending on \(\epsilon\)) outside of which \(n^{-\epsilon-\frac{1}{2}} R_n\) converges to 0. Now consider a countable set of values of \(\epsilon\) tending to 0. Since a countable union of negligible events is negligible, almost surely \(n^{-\epsilon-\frac{1}{2}} R_n\) converges to 0 for every \(\epsilon > 0\). □

Hardy-Littlewood Estimate

Theorem 3 (Hardy-Littlewood Estimate). Almost surely,

\[ S_n - np = O\left(\sqrt{n \ln n}\right) \quad \text{as } n \to \infty. \]

Remark. The proof shows that, almost surely, \(S_n - np \le \sqrt{n \ln n}\) for all sufficiently large n.

Proof. The proof uses the Large Deviations Estimate as well as the Borel-Cantelli Lemma. Recall that the Large Deviations Estimate says

\[ \mathbb{P}\left[\frac{S_n}{n} \ge p + \epsilon\right] \le e^{-n h_+(\epsilon)}, \]

where

\[ h_+(\epsilon) = (p+\epsilon)\ln\left(\frac{p+\epsilon}{p}\right) + (1-p-\epsilon)\ln\left(\frac{1-p-\epsilon}{1-p}\right). \]

Note that as \(\epsilon \to 0\),

\[ h_+(\epsilon) = \frac{\epsilon^2}{2p(1-p)} + O(\epsilon^3). \]
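
A quick numerical illustration of this expansion (a sketch; the choice p = 0.5 and the \(\epsilon\) values are arbitrary):

## Compare h_+(eps) with its quadratic approximation eps^2/(2 p (1-p))
h_plus <- function(eps, p) {
  (p + eps)*log((p + eps)/p) + (1 - p - eps)*log((1 - p - eps)/(1 - p))
}
p <- 0.5
eps <- c(0.1, 0.01, 0.001)
cbind(eps, exact = h_plus(eps, p), quadratic = eps^2/(2*p*(1-p)))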

Note that

\[ \left[\frac{S_n}{n} \ge p + \epsilon\right] = \left[S_n - np \ge n\epsilon\right], \]

and take \(\epsilon = \sqrt{\frac{\ln n}{n}}\), so that

\[ \mathbb{P}\left[S_n - np \ge \sqrt{n \ln n}\right] \le e^{-n h_+\left(\sqrt{\ln n / n}\right)}. \]

Then

\[ h_+\left(\sqrt{\frac{\ln n}{n}}\right) = \frac{\ln n}{2p(1-p)n} + o\left(\frac{1}{n}\right), \]

since \(O\left(\left(\frac{\ln n}{n}\right)^{3/2}\right) = o\left(\frac{1}{n}\right)\). Thus, the probability is less than or equal to the following:

\[ \exp\left(-n h_+\left(\sqrt{\frac{\ln n}{n}}\right)\right) = \exp\left(-\frac{\ln n}{2p(1-p)} + o(1)\right) = \exp\left(-\frac{\ln n}{2p(1-p)}\right)\exp(o(1)) = n^{-\frac{1}{2p(1-p)}}\exp(o(1)). \]

Hence \(\exp\left(-n h_+\left(\sqrt{\ln n / n}\right)\right)\) is bounded by a constant multiple of \(n^{-\frac{1}{2p(1-p)}}\), and \(\sum_{n \ge 1} n^{-\frac{1}{2p(1-p)}}\) is convergent because of the following chain of inequalities:

\[ p(1-p) \le \frac{1}{4} \implies 2p(1-p) \le \frac{1}{2} \implies \frac{1}{2p(1-p)} \ge 2 \implies n^{-\frac{1}{2p(1-p)}} \le n^{-2}. \]

Thus,

\[ \sum_{n \ge 1} \mathbb{P}\left[S_n - np > \sqrt{n \ln n}\right] < \infty, \]

and so, by the first Borel-Cantelli Lemma,

\[ \mathbb{P}\left[S_n - np \le \sqrt{n \ln n} \text{ for all sufficiently large } n\right] = 1. \quad \Box \]

Proof of Khinchin’s Law of Iterated Logarithm

Theorem 4 (Khinchin’s Law of Iterated Logarithm). Almost surely,

\[ \limsup_{n \to \infty} \frac{S_n - np}{\sqrt{2p(1-p)n\ln\ln n}} = 1, \]

and

\[ \liminf_{n \to \infty} \frac{S_n - np}{\sqrt{2p(1-p)n\ln\ln n}} = -1. \]

First establish two lemmas, and for convenience, let

\[ \alpha(n) = \sqrt{2p(1-p)n\ln\ln n}. \]

Lemma 5. For all positive a and δ and large enough n,

\[ (\ln n)^{-a^2(1+\delta)} < \mathbb{P}\left[S_n - np > a\,\alpha(n)\right] < (\ln n)^{-a^2(1-\delta)}. \]

Proof of Lemma 5. Recall that the Large Deviations Estimate gives

\[ \mathbb{P}\left[R_n \ge a\,\alpha(n)\right] = \mathbb{P}\left[S_n - np \ge a\,\alpha(n)\right] = \mathbb{P}\left[\frac{S_n}{n} \ge p + \frac{a\,\alpha(n)}{n}\right] \le \exp\left(-n h_+\left(\frac{a\,\alpha(n)}{n}\right)\right). \]

Note that \(\alpha(n)/n \to 0\) as \(n \to \infty\). Then

\[ h_+\left(\frac{a\,\alpha(n)}{n}\right) = \frac{a^2}{2p(1-p)}\left(\frac{\alpha(n)}{n}\right)^2 + O\left(\left(\frac{\alpha(n)}{n}\right)^3\right), \]

and so

\[ n h_+\left(\frac{a\,\alpha(n)}{n}\right) = a^2\ln\ln n + O\left(\frac{\alpha(n)^3}{n^2}\right) \ge a^2(1-\delta)\ln\ln n \]

for large enough n. This means that

\[ \mathbb{P}\left[\frac{S_n}{n} \ge p + \frac{a\,\alpha(n)}{n}\right] \le \exp\left(-a^2(1-\delta)\ln\ln n\right) = (\ln n)^{-a^2(1-\delta)}. \]

Since \(\sqrt{\ln\ln n} = o\left(n^{1/6}\right)\), the results of the Moderate Deviations Theorem apply to give

\[ \mathbb{P}\left[\frac{S_n}{n} \ge p + \frac{a\,\alpha(n)}{n}\right] = \mathbb{P}\left[\frac{S_n - np}{\sqrt{p(1-p)n}} \ge a\sqrt{2\ln\ln n}\right] \sim \frac{1}{\sqrt{2\pi}\,a\sqrt{2\ln\ln n}}\exp\left(-a^2\ln\ln n\right) = \frac{1}{2a\sqrt{\pi\ln\ln n}}\,(\ln n)^{-a^2}. \]

Since \(\sqrt{\ln\ln n} = o\left((\ln n)^{a^2\delta}\right)\),

\[ \mathbb{P}\left[\frac{S_n}{n} \ge p + \frac{a\,\alpha(n)}{n}\right] \ge (\ln n)^{-a^2(1+\delta)} \]

for large enough n. □

Lemma 6 (12.5, Kolmogorov Maximal Inequality). Suppose \((Y_n)_{n \ge 1}\) are independent random variables. Suppose further that \(\mathbb{E}\left[Y_n\right] = 0\) and \(\operatorname{Var}(Y_n) = \sigma^2\). Define \(T_n := Y_1 + \cdots + Y_n\). Then

\[ \mathbb{P}\left[\max_{1 \le k \le n} T_k \ge b\right] \le \frac{4}{3}\,\mathbb{P}\left[T_n \ge b - 2\sigma\sqrt{n}\right]. \]

Remark. Lemma 6 is an example of a class of lemmas called maximal inequalities. Here are two more examples of maximal inequalities.

Lemma 7 (Karlin and Taylor, page 280). Let \((Y_n)_{n \ge 1}\) be independent, identically distributed random variables with \(\mathbb{E}\left[Y_n\right] = 0\) and \(\operatorname{Var}(Y_n) = \sigma^2 < \infty\). Define \(T_n = \sum_{k=1}^{n} Y_k\). Then

\[ \epsilon^2\,\mathbb{P}\left[\max_{0 \le k \le n} |T_k| > \epsilon\right] \le n\sigma^2. \]

Lemma 8 (Karlin and Taylor, page 280). Let \((X_n)_{n \ge 0}\) be a submartingale for which \(X_n \ge 0\). For \(\lambda > 0\),

\[ \lambda\,\mathbb{P}\left[\max_{0 \le k \le n} X_k > \lambda\right] \le \mathbb{E}\left[X_n\right]. \]

Proof of Lemma 6. Since the \(Y_k\)’s are independent, \(\operatorname{Var}(T_n - T_k) = (n-k)\sigma^2\) for \(1 \le k \le n\). Chebyshev’s Inequality tells us that

\[ \mathbb{P}\left[|T_n - T_k| \le 2\sigma\sqrt{n}\right] \ge 1 - \frac{\operatorname{Var}(T_n - T_k)}{4\sigma^2 n} = 1 - \frac{n-k}{4n} \ge \frac{3}{4}. \]

Decompose over the first index k at which \(T_k \ge b\); each such event is independent of \(T_n - T_k\), so

\[ \begin{aligned} \mathbb{P}\left[\max_{1 \le k \le n} T_k \ge b\right] &= \sum_{k=1}^{n} \mathbb{P}\left[T_1 < b, \dots, T_{k-1} < b, \text{ and } T_k \ge b\right] \\ &\le \frac{4}{3}\sum_{k=1}^{n} \mathbb{P}\left[T_1 < b, \dots, T_{k-1} < b, \text{ and } T_k \ge b \text{ and } |T_n - T_k| \le 2\sigma\sqrt{n}\right] \\ &\le \frac{4}{3}\sum_{k=1}^{n} \mathbb{P}\left[T_1 < b, \dots, T_{k-1} < b, \text{ and } T_k \ge b \text{ and } T_n \ge b - 2\sigma\sqrt{n}\right] \\ &\le \frac{4}{3}\,\mathbb{P}\left[T_n \ge b - 2\sigma\sqrt{n}\right]. \quad \Box \end{aligned} \]
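
A Monte Carlo sanity check of the inequality in Lemma 6 (a sketch; the normal summands and all numerical values are arbitrary illustrative choices):

set.seed(2)                        # arbitrary seed
reps <- 20000                      # arbitrary number of replications
n <- 100; sigma <- 1; b <- 5       # arbitrary illustrative values
maxExceed <- 0; endExceed <- 0
for (r in 1:reps) {
  T_path <- cumsum(rnorm(n))       # partial sums of mean-0, variance-1 summands
  maxExceed <- maxExceed + (max(T_path) >= b)
  endExceed <- endExceed + (T_path[n] >= b - 2*sigma*sqrt(n))
}
c(lhs = maxExceed/reps, rhs = (4/3)*endExceed/reps)   # lhs should not exceed rhs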

Remark. Note that the second part of the Law of the Iterated Logarithm, \(\liminf_{n\to\infty} \frac{S_n - np}{\alpha(n)} = -1\), follows by symmetry from the first part by replacing p with \(1-p\) and \(S_n\) with \(n - S_n\).

Remark. The proof of the Law of the Iterated Logarithm proceeds in two parts. First, it suffices to show that

\[ \limsup_{n\to\infty} \frac{S_n - np}{\alpha(n)} < 1 + \eta \quad \text{for all } \eta > 0, \text{ a.s.} \tag{1} \]

The second part of the proof is to establish that

\[ \limsup_{n\to\infty} \frac{S_n - np}{\alpha(n)} > 1 - \eta \quad \text{for all } \eta > 0, \text{ a.s.} \tag{2} \]

It will only take a subsequence to prove (2). However, this will not be easy, because it will use the second Borel-Cantelli Lemma, which requires independence.

Remark. The following is a simplified proof giving a partial result for integer sequences with exponential growth. This simplified proof illustrates the basic ideas of the full proof. Fix \(\gamma > 1\) and let \(n_k := \gamma^k\). Then by Lemma 5,

\[ \mathbb{P}\left[S_{n_k} - p n_k \ge (1+\eta)\,\alpha(n_k)\right] < (\ln n_k)^{-(1+\eta)^2(1-\delta)} = O\left(k^{-(1+\eta)^2(1-\delta)}\right). \]

Choose δ small enough that \((1+\eta)^2(1-\delta) > 1\). Then

\[ \sum_{k \ge 1} \mathbb{P}\left[S_{n_k} - n_k p \ge (1+\eta)\,\alpha(n_k)\right] < \infty. \]

By the first Borel-Cantelli Lemma,

\[ \mathbb{P}\left[S_{n_k} - n_k p \ge (1+\eta)\,\alpha(n_k) \text{ i.o.}\right] = 0, \]

and so

\[ \mathbb{P}\left[\limsup_{k\to\infty} \frac{S_{n_k} - n_k p}{\alpha(n_k)} \le 1+\eta\right] = 1, \]

or

\[ \frac{S_{n_k} - n_k p}{\alpha(n_k)} \le 1+\eta \quad \text{eventually, a.s.} \]

The full proof of the Law of the Iterated Logarithm takes more work to complete.

Proof of (1) in the Law of the Iterated Logarithm.

Fix \(\eta > 0\) and let \(\gamma > 1\) be a constant chosen later. For \(k \in \mathbb{N}\), let \(n_k = \gamma^k\). The proof consists of showing that

\[ \sum_{k \ge 1} \mathbb{P}\left[\max_{n \le n_{k+1}} (S_n - np) \ge (1+\eta)\,\alpha(n_k)\right] < \infty. \]

From Lemma 6, with \(R_n = S_n - np\),

\[ \mathbb{P}\left[\max_{n \le n_{k+1}} (S_n - np) \ge (1+\eta)\,\alpha(n_k)\right] \le \frac{4}{3}\,\mathbb{P}\left[R_{n_{k+1}} \ge (1+\eta)\,\alpha(n_k) - 2\sqrt{n_{k+1}\,p(1-p)}\right]. \tag{3} \]

Note that \(\sqrt{n_{k+1}} = o(\alpha(n_k))\), because the comparison is approximately

\[ \sqrt{\gamma^{k+1}} \quad \text{compared to} \quad c_1\sqrt{\gamma^k \ln(\ln \gamma^k)} = c_1\gamma^{k/2}\sqrt{\ln(\ln\gamma) + \ln k}, \]

which is the same as

\[ \sqrt{\gamma} \quad \text{compared to} \quad c_1\sqrt{c_2 + \ln k}. \]

Then \(2\sqrt{n_{k+1}\,p(1-p)} < \frac{\eta}{2}\,\alpha(n_k)\) for large enough k. Using this inequality on the right-hand side of Equation (3), we get

\[ \mathbb{P}\left[\max_{n \le n_{k+1}} (S_n - np) \ge (1+\eta)\,\alpha(n_k)\right] \le \frac{4}{3}\,\mathbb{P}\left[S_{n_{k+1}} - n_{k+1}p \ge \left(1 + \frac{\eta}{2}\right)\alpha(n_k)\right]. \]

Now, \(\alpha(n_{k+1}) \approx \sqrt{\gamma}\,\alpha(n_k)\). Choose γ so that \(1 + \frac{\eta}{2} > \left(1 + \frac{\eta}{4}\right)\sqrt{\gamma}\). Then for large enough k, we have

\[ \left(1 + \frac{\eta}{2}\right)\alpha(n_k) > \left(1 + \frac{\eta}{4}\right)\alpha(n_{k+1}). \]

Now

\[ \mathbb{P}\left[\max_{n \le n_{k+1}} (S_n - np) \ge (1+\eta)\,\alpha(n_k)\right] \le \frac{4}{3}\,\mathbb{P}\left[S_{n_{k+1}} - n_{k+1}p \ge \left(1 + \frac{\eta}{4}\right)\alpha(n_{k+1})\right]. \]

Use Lemma 5 with \(a = (1-\delta)^{-1} = 1 + \frac{\eta}{4}\), so that \(a^2(1-\delta) = a = 1 + \frac{\eta}{4}\). Then we get

\[ \mathbb{P}\left[\max_{n \le n_{k+1}} (S_n - np) \ge (1+\eta)\,\alpha(n_k)\right] \le \frac{4}{3}\left(\ln n_{k+1}\right)^{-(1+\eta/4)} \]

for k large. Note that

\[ \left(\ln n_{k+1}\right)^{-(1+\eta/4)} = \left((k+1)\ln\gamma\right)^{-(1+\eta/4)}, \]

which is the general term of a convergent series, so

\[ \sum_{k \ge 1} \mathbb{P}\left[\max_{n \le n_{k+1}} R_n \ge (1+\eta)\,\alpha(n_k)\right] < \infty. \]

Then the first Borel-Cantelli Lemma implies that

\[ \mathbb{P}\left[\max_{n \le n_{k+1}} R_n \ge (1+\eta)\,\alpha(n_k) \text{ i.o.}\right] = 0, \]

or equivalently, almost surely for large enough k,

\[ \max_{n \le n_{k+1}} (S_n - np) < (1+\eta)\,\alpha(n_k). \]

Then in particular, almost surely for large enough k,

\[ \max_{n_k \le n < n_{k+1}} (S_n - np) < (1+\eta)\,\alpha(n_k). \]

Since \(\alpha\) is increasing, \(\alpha(n_k) \le \alpha(n)\) for \(n \ge n_k\), so this in turn implies that almost surely

\[ S_n - np < (1+\eta)\,\alpha(n) \]

for all sufficiently large n, which establishes (1). □

Proof of (2) in the Law of the Iterated Logarithm. Continue with the proof of Equation (2). To prove the second part, it suffices to show that there exists a sequence \(n_k\) so that \(R_{n_k} \ge (1-\eta)\,\alpha(n_k)\) i.o. almost surely. Let \(n_k = \gamma^k\) for γ chosen later, sufficiently large. The proof will show

\[ \sum_{n \ge 1} \mathbb{P}\left[R_{\gamma^n} - R_{\gamma^{n-1}} \ge \left(1 - \frac{\eta}{2}\right)\alpha(\gamma^n)\right] = \infty, \tag{4} \]

and also

\[ R_{\gamma^{n-1}} \ge -\frac{\eta}{2}\,\alpha(\gamma^n) \quad \text{a.s. for large enough } n. \tag{5} \]

Note that \(R_{\gamma^n} - R_{\gamma^{n-1}}\) has the same distribution as \(R_{\gamma^n - \gamma^{n-1}}\). It suffices to consider

\[ \mathbb{P}\left[R_{\gamma^n - \gamma^{n-1}} \ge \left(1 - \frac{\eta}{2}\right)\alpha(\gamma^n)\right]. \]

Note that

\[ \frac{\alpha(\gamma^n - \gamma^{n-1})}{\alpha(\gamma^n)} = \frac{c\sqrt{(\gamma^n - \gamma^{n-1})\ln\ln(\gamma^n - \gamma^{n-1})}}{c\sqrt{\gamma^n \ln\ln\gamma^n}} = \sqrt{1 - \frac{1}{\gamma}}\sqrt{\frac{\ln\left(n\ln\gamma + \ln\left(1 - \frac{1}{\gamma}\right)\right)}{\ln(n\ln\gamma)}} \to \sqrt{1 - \frac{1}{\gamma}}. \]

Choose γ so that

\[ \frac{1 - \frac{\eta}{2}}{1 - \frac{\eta}{4}} < \sqrt{1 - \frac{1}{\gamma}}. \]

Then note that we can choose n large enough so that

\[ \frac{1 - \frac{\eta}{2}}{1 - \frac{\eta}{4}} < \frac{\alpha(\gamma^n - \gamma^{n-1})}{\alpha(\gamma^n)}, \]

or

\[ \left(1 - \frac{\eta}{2}\right)\alpha(\gamma^n) < \left(1 - \frac{\eta}{4}\right)\alpha(\gamma^n - \gamma^{n-1}). \]

Now considering Equation (4) and the inequality above,

\[ \mathbb{P}\left[R_{\gamma^n} - R_{\gamma^{n-1}} \ge \left(1 - \frac{\eta}{2}\right)\alpha(\gamma^n)\right] \ge \mathbb{P}\left[R_{\gamma^n - \gamma^{n-1}} \ge \left(1 - \frac{\eta}{4}\right)\alpha(\gamma^n - \gamma^{n-1})\right]. \]

Now using Lemma 5 with \(a = (1+\delta)^{-1} = 1 - \frac{\eta}{4}\), so that \(a^2(1+\delta) = a = 1 - \frac{\eta}{4}\), we get

\[ \mathbb{P}\left[R_{\gamma^n} - R_{\gamma^{n-1}} \ge \left(1 - \frac{\eta}{2}\right)\alpha(\gamma^n)\right] \ge \left(\ln\left(\gamma^n - \gamma^{n-1}\right)\right)^{-\left(1-\frac{\eta}{4}\right)} = \left(n\ln\gamma + \ln\left(1 - \frac{1}{\gamma}\right)\right)^{-\left(1-\frac{\eta}{4}\right)}. \]

Since \(1 - \frac{\eta}{4} < 1\), the series with this general term diverges. Thus, Equation (4) has been proven.

Now notice that

\[ \alpha(\gamma^n) = c\sqrt{\gamma^n \ln\ln\gamma^n} = c\sqrt{\gamma^n\left(\ln n + \ln\ln\gamma\right)} \]

and so

\[ \alpha(\gamma^{n-1}) = c\sqrt{\gamma^{n-1}\left(\ln(n-1) + \ln\ln\gamma\right)}, \]

which means that

\[ \sqrt{\gamma}\,\alpha(\gamma^{n-1}) = c\sqrt{\gamma^n\left(\ln(n-1) + \ln\ln\gamma\right)}. \]

Thus, \(\alpha(\gamma^n) \ge \sqrt{\gamma}\,\alpha(\gamma^{n-1})\). Now choose γ so that \(\eta\sqrt{\gamma} > 4\). Then \(\eta\,\alpha(\gamma^n) \ge \eta\sqrt{\gamma}\,\alpha(\gamma^{n-1}) > 4\,\alpha(\gamma^{n-1})\) for large enough n.

Thus, we have

\[ \mathbb{P}\left[R_{\gamma^{n-1}} \le -\frac{\eta}{2}\,\alpha(\gamma^n)\right] \le \mathbb{P}\left[R_{\gamma^{n-1}} \le -2\,\alpha(\gamma^{n-1})\right], \]

since

\[ \frac{\eta}{2}\,\alpha(\gamma^n) > 2\,\alpha(\gamma^{n-1}). \]

Now use Equation (1), applied by symmetry to \(n - S_n\) (with p replaced by \(1-p\)), to see that \(R_{\gamma^{n-1}} > -2\,\alpha(\gamma^{n-1})\) almost surely for large enough n; combined with the inclusion above, this establishes Equation (5).

Now \((R_{\gamma^n} - R_{\gamma^{n-1}})_{n \ge 1}\) is a sequence of independent random variables, and so Equation (4) and the second Borel-Cantelli Lemma say that almost surely

\[ R_{\gamma^n} - R_{\gamma^{n-1}} > \left(1 - \frac{\eta}{2}\right)\alpha(\gamma^n) \quad \text{i.o.} \]

Adding this to Equation (5), we get that almost surely

\[ R_{\gamma^n} > (1 - \eta)\,\alpha(\gamma^n) \quad \text{i.o.} \]

This is enough to show that

\[ \limsup_{n\to\infty} \frac{S_n - np}{\alpha(n)} > 1 - \eta \quad \text{a.s.,} \]

which is enough to show the only remaining part of Khinchin’s Law of the Iterated Logarithm. □

Sources

This section is adapted from: W. Feller, An Introduction to Probability Theory and Its Applications, Volume I, Chapters III and VIII, and also E. Lesigne, Heads or Tails: An Introduction to Limit Theorems in Probability, Chapter 12, American Mathematical Society, Student Mathematical Library, Volume 28, 2005. Some of the ideas in the proof of Hausdorff’s Estimate are adapted from J. Lamperti, Probability: A Survey of the Mathematical Theory, Second Edition, Chapter 8. Figure 3 is a recreation of a figure in the Wikipedia article on the Law of the Iterated Logarithm.

_______________________________________________________________________________________________

Algorithms, Scripts, Simulations

Algorithm

Scripts

R script for the comparison figures.

p <- 0.5                           # probability of success on each flip
k <- 15                            # number of sample paths
n <- 100000                        # number of flips in each path
coinFlips <- array(runif(n*k) <= p, dim=c(n,k))
S <- apply(coinFlips, 2, cumsum)   # each column is a path of sums S_1, ..., S_n
steps <- c(1:n)

steps2 <- steps[2:n]               # start at 2 so that log(steps) > 0
S2 <- S[2:n, ]
steps3 <- steps[3:n]               # start at 3 so that log(log(steps)) is defined
S3 <- S[3:n, ]

ones <- cbind( matrix(1,n-2,1), matrix(-1,n-2,1))   # reference lines at +1 and -1

par( mfrow = c(2,2))               # four panels, one per limit theorem

matplot((S-steps*p)/steps,
        log="x", type="l", lty = 1, ylab="", main="Strong Law")
matplot((S-steps*p)/sqrt(2*p*(1-p)*steps),
        log="x", type="l", lty = 1, ylab="", main="Central Limit Theorem")
matplot((S-steps*p)/(steps^(0.6)),
        log="x", type="l", lty = 1, ylab="", main="Hausdorff's Estimate")
## matplot((S2-steps2*p)/sqrt(steps2*log(steps2)), log="x", xlim=c(1,n), type="l", lty = 1)
matplot(steps3, (S3-steps3*p)/sqrt(2*p*(1-p)*steps3*log(log(steps3))),
        log="x", xlim=c(1,n), type="l", lty = 1, ylab="", main="Law of Iterated Logarithm")
matlines(steps3, ones, type="l", col="black")

_______________________________________________________________________________________________

Problems to Work for Understanding

  1. The “multiplier of the variance \(2p(1-p)n\)” function \(\ln(\ln(n))\) grows very slowly. To understand how very slowly, calculate a table with \(n = 10^j\) and \(\sqrt{2\ln(\ln(n))}\) for \(j = 10, 20, 30, \dots, 100\). (Remember, in mathematical work above calculus, \(\log(x)\) is the natural logarithm, base e, often written \(\ln(x)\) in calculus and below to distinguish it from the “common” or base-10 logarithm. Be careful, some software and technology cannot directly calculate with magnitudes this large; see the R sketch following this problem list.)
  2. Consider the sequence

     \[ a_n = (-1)^{\lfloor n/2 \rfloor} + \frac{(-1)^n}{n} \]

     for \(n = 1, 2, 3, \dots\). Here \(\lfloor x \rfloor\) is the “floor function”, the greatest integer less than or equal to x, so \(\lfloor 1 \rfloor = 1\), \(\lfloor 3/2 \rfloor = 1\), \(\lfloor 8/3 \rfloor = 2\), \(\lfloor -3/2 \rfloor = -2\), etc. Find

     \[ \limsup_{n\to\infty} a_n \]

     and

     \[ \liminf_{n\to\infty} a_n. \]

     Does the sequence \(a_n\) have a limit?

  3. Show that the second part of the Law of the Iterated Logarithm, \(\liminf_{n\to\infty} \frac{S_n - np}{\alpha(n)} = -1\), follows by symmetry from the first part by replacing p with \(1-p\) and \(S_n\) with \(n - S_n\).
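
For Problem 1, one way around the overflow issue is to avoid forming \(10^j\) at all, using the identity \(\ln(\ln(10^j)) = \ln(j\ln 10)\) (a sketch in R):

j <- seq(10, 100, by=10)
multiplier <- sqrt(2*log(j*log(10)))   # since log(log(10^j)) = log(j*log(10))
data.frame(j, multiplier)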

Solutions to Problems.

_______________________________________________________________________________________________

Books

Reading Suggestion:

References

[1]   William Feller. An Introduction to Probability Theory and Its Applications, Volume I. John Wiley and Sons, third edition, 1973. QA 273 F3712.

[2]   John W. Lamperti. Probability: A Survey of the Mathematical Theory. Wiley Series in Probability and Statistics. Wiley, second edition, 1996.

[3]   Emmanuel Lesigne. Heads or Tails: An Introduction to Limit Theorems in Probability, volume 28 of Student Mathematical Library. American Mathematical Society, 2005.

__________________________________________________________________________

Links

Outside Readings and Links:

__________________________________________________________________________

I check all the information on each page for correctness and typographical errors. Nevertheless, some errors may occur and I would be grateful if you would alert me to such errors. I make every reasonable effort to present current and accurate information for public use, however I do not guarantee the accuracy or timeliness of information on this website. Your use of the information from this website is strictly voluntary and at your risk.

I have checked the links to external sites for usefulness. Links to external websites are provided as a convenience. I do not endorse, control, monitor, or guarantee the information contained in any external website. I don’t guarantee that the links are active at all times. Use the links here with the same caution as you would all information on the Internet. This website reflects the thoughts, interests and opinions of its author. They do not explicitly represent official positions or policies of my employer.

Information on this website is subject to change without notice.

Steve Dunbar’s Home Page, http://www.math.unl.edu/~sdunbar1

Email to Steve Dunbar, sdunbar1 at unl dot edu

Last modified: Processed from LaTeX source on March 22, 2018