
Key Concepts:

  1. The law of long leads, more properly known as the arcsine law, says that in a coin-tossing game, a surprisingly large number of sample paths leave one player in the lead almost all the time, and in very few cases will the lead change sides and fluctuate in the manner expected of a well-behaved coin.
  2. Interpreted geometrically in terms of random walks, the path crosses the x-axis very rarely, and with increasing duration of the walk, the frequency of crossings decreases rapidly while the lengths of the "waves" on one side of the axis increase.
  3. The law of the iterated logarithm tells very precisely how far the random walk will make excursions from the axis.



  1. The limsup, an abbreviation for limit superior, is a refined and generalized notion of limit: it is the largest limit attained by any dependent-variable subsequence. That is, among all subsequences of independent-variable values tending to some independent-variable value, possibly infinity, there are corresponding dependent-variable subsequences. Some of these dependent-variable subsequences will have limits, and among all these, the largest is the limsup.
  2. The liminf, an abbreviation for limit inferior, is the analogous least of all dependent-variable subsequence limits.
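
For a concrete instance, the sequence a_n = (-1)^n (1 + 1/n) has no limit, but its even-indexed terms tend to 1 from above and its odd-indexed terms tend to -1 from below, so limsup a_n = 1 and liminf a_n = -1. A minimal Python sketch (the sequence and the tail cutoff are my own illustrative choices) estimates both from tail suprema and infima:

```python
# a_n = (-1)^n * (1 + 1/n): no limit, but limsup = 1 and liminf = -1
seq = [(-1) ** n * (1 + 1 / n) for n in range(1, 10001)]

# the supremum of a tail decreases to the limsup;
# the infimum of a tail increases to the liminf
tail = seq[5000:]            # terms with n > 5000
limsup_est = max(tail)       # about 1 + 1/5002, just above 1
liminf_est = min(tail)       # about -(1 + 1/5001), just below -1
print(limsup_est, liminf_est)
```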

Mathematical Ideas

The following discussion is taken from W. Feller, An Introduction to Probability Theory and Its Applications, Volume I, Chapters III and VIII. There are generally two classes of theorems about random walks:

  1. Those that tell how well-behaved and natural are the outcomes of typical walks. The Weak Law of Large Numbers, the Strong Law of Large Numbers and the Central Limit Theorem fall into this category.
  2. Those that tell how strange and unnatural are the outcomes of typical walks. The Arcsine Law and the Law of the Iterated Logarithm are good examples in this category.

The Arcsine Law

For simplicity (and striking, counter-intuitive results), in this section on the arcsine law we consider the symmetric random walk S_n = X_1 + ... + X_n composed of independent, identically distributed random variables X_i, each of which assumes the value +1 with probability 1/2 and the value -1 with probability 1/2. We shall say that the particle spends the time from k-1 to k on the positive side if at least one of the two values S_{k-1} and S_k is positive (in which case the other is positive or, at worst, 0). In this case, geometrically, the broken-line path of the random walk lies above the horizontal axis on the interval (k-1,k). In coin-tossing terminology, this means that at both the (k-1)st and kth trials our hapless bettor's accumulated gain was non-negative. For notation, let

u_{2n} = \binom{2n}{n} * 2^{-2n}
Theorem: Let p_{2k,2n} be the probability that in the time interval from 0 to 2n, the particle spends 2k time units on the positive side, and therefore 2n-2k on the negative side. Then
p_{2k,2n} = u_{2k}*u_{2n-2k}
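
The theorem's formula is easy to evaluate directly; here is a minimal Python sketch (the function names u and p_pos are my own) that tabulates p_{2k,2n} for 2n = 20 and confirms the probabilities sum to 1:

```python
from math import comb

def u(two_n):
    # u_{2n} = C(2n, n) * 2^(-2n), i.e. the probability that S_{2n} = 0
    return comb(two_n, two_n // 2) * 2.0 ** (-two_n)

def p_pos(two_k, two_n):
    # p_{2k,2n} = u_{2k} * u_{2n-2k}: probability of spending
    # 2k of the 2n time units on the positive side
    return u(two_k) * u(two_n - two_k)

n = 10  # so 2n = 20
probs = [p_pos(2 * k, 2 * n) for k in range(n + 1)]
print(probs[0])    # p_{0,20} = 0.1762..., the most probable value
print(sum(probs))  # the distribution over k = 0..n sums to 1
```

Note that the extremes k = 0 and k = n share the largest probability, while k = n/2 has the smallest, exactly the paradox discussed below.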

We feel intuitively that the fraction k/n of the total time spent on the positive side is most likely to be 1/2. However, the opposite is true! The central values of k/n are least probable, and the extreme values k/n = 0 and k/n = 1 are most probable! The formula of the theorem is exact, but not intuitively revealing. An easy application of Stirling's formula shows that u_{2n} ~ 1/sqrt(\pi*n) as n \to \infinity. It then easily follows that

p_{2k,2n} ~ 1/( \pi*sqrt( k*(n-k) ) )

as k \to \infinity and (n-k) \to \infinity. The probability that the fraction of time spent on the positive side lies between 1/2 and \alpha (1/2 < \alpha < 1) is given by

\sum( p_{2k,2n}, n/2 < k < \alpha n) ~ 1/(\pi*n) * \sum( 1/sqrt( (k/n)*(1 - k/n) ), n/2 < k < \alpha*n)

On the right side we recognize the Riemann sum approximating the integral:

(1/\pi) \int( 1/sqrt( x*(1-x) ), x = 1/2..\alpha) = (2/\pi)*\arcsin(sqrt(\alpha)) - 1/2.
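
The passage from the sum to the integral can be checked numerically; the following Python sketch (n = 500 and \alpha = 0.8 are my own illustrative choices) compares the exact partial sum of p_{2k,2n} over n/2 < k < \alpha*n with (2/\pi)*arcsin(sqrt(\alpha)) - 1/2:

```python
from math import comb, asin, sqrt, pi

def u(two_n):
    # u_{2n} = C(2n, n) * 2^(-2n)
    return comb(two_n, two_n // 2) * 2.0 ** (-two_n)

n, alpha = 500, 0.8
# exact probability that the fraction of time on the positive side
# lies strictly between 1/2 and alpha
exact = sum(u(2 * k) * u(2 * n - 2 * k)
            for k in range(n // 2 + 1, int(alpha * n)))
# arcsine approximation of the same probability
approx = (2 / pi) * asin(sqrt(alpha)) - 0.5
print(exact, approx)
```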

For reasons of symmetry, the probability that k/n \le 1/2 tends to 1/2 as n \to \infinity. Adding this to the integral, we get:

Theorem: (The arcsine law). For fixed \alpha (0 < \alpha < 1) and n \to \infinity, the probability that the fraction k/n of time spent on the positive side is less than \alpha tends to:
(1/\pi) \int( (x*(1-x))^{-1/2}, x = 0..\alpha) = (2/\pi)\arcsin(\sqrt{\alpha})

In practice, the formula provides an excellent approximation even for values of n as small as 20. The table in Illustration 1 below illustrates the paradox clearly. The arcsine law was first established by P. Levy in 1939. Erdos and Kac generalized the arcsine law to the case of sums of independent random variables in 1947. E. Sparre Andersen simplified the proofs and also generalized the results.
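
The quality of the approximation at 2n = 20 can be reproduced directly; this Python sketch (the function name u is my own) recomputes the cdf and arcsine columns of Illustration 1:

```python
from math import comb, asin, sqrt, pi

def u(two_n):
    # u_{2n} = C(2n, n) * 2^(-2n)
    return comb(two_n, two_n // 2) * 2.0 ** (-two_n)

n = 10  # 2n = 20
probs = [u(2 * k) * u(2 * n - 2 * k) for k in range(n + 1)]

cdf, total = [], 0.0
for q in probs:
    total += q
    cdf.append(total)

# arcsine-law approximation to the cdf at alpha = k/n
arcsine = [(2 / pi) * asin(sqrt(k / n)) for k in range(n + 1)]

for k in range(n + 1):
    print(2 * k, round(probs[k], 4), round(cdf[k], 4),
          round(k / n, 4), round(arcsine[k], 4))
```

At k/n = 0.5 the exact cdf is 0.5303 against the limiting value 0.5000, already a close match at n = 20.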

The Law of the Iterated Logarithm

We now drop the fair-coin restriction and consider the slightly more general case where the i.i.d. random variables of the sum S_n = X_1 + ... + X_n are the Bernoulli random variables X_i = +1 with probability p and X_i = 0 with probability q = 1-p. Note that now the mean is \mu = p and the variance is \sigma^2 = p*q. Again we consider the shifted and rescaled sum that we considered in the Central Limit Theorem:

S^{*}_n = (S_n - n*p)/sqrt(n*p*q)
Theorem: With probability 1 we have:
limsup_{n \to \infinity} S^{*}_n/sqrt(2 log log n) = 1
This means that for \lambda > 1, with probability 1, only finitely many of the events:
S_n > n*p + \lambda*sqrt(2*n*p*q* log log n)
occur, while for \lambda < 1, with probability 1,
S_n > n*p + \lambda*sqrt(2*n*p*q* log log n)
occurs for infinitely many n.

For reasons of symmetry, for \lambda > 1, the inequality

S_n < n*p - \lambda*sqrt(2*n*p*q* log log n)

can only occur for finitely many n, while for \lambda < 1

S_n < n*p - \lambda*sqrt(2*n*p*q* log log n)

must occur for infinitely many n. That is

liminf_{n \to \infinity} S^{*}_n/sqrt(2 log log n) = -1

with probability 1. The law of the iterated logarithm for Bernoulli trials is a special case of a more general theorem first formulated by Kolmogorov in 1929. It is also possible to formulate even stronger theorems!
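
A simulation makes the theorem concrete; the following Python sketch (the seed, path length, and starting index are my own illustrative choices) tracks the ratio S*_n/sqrt(2 log log n) along one path with p = q = 1/2. The theorem says the limsup of this ratio over an infinite path is exactly 1; a single finite path typically stays well inside the band (-1, 1) most of the time:

```python
import math
import random

random.seed(1)
p = q = 0.5
N = 10_000

s = 0            # S_n, with X_i = 1 w.p. p and X_i = 0 w.p. q
stats = []       # S*_n / sqrt(2 log log n), recorded for n >= 100
for n in range(1, N + 1):
    s += 1 if random.random() < p else 0
    if n >= 100:
        s_star = (s - n * p) / math.sqrt(n * p * q)
        stats.append(s_star / math.sqrt(2 * math.log(math.log(n))))

# with probability 1, limsup of this ratio is +1 and liminf is -1
print(max(stats), min(stats))
```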

Illustration 1

2n = 20:

2k    p_{2k,2n}   cdf      k/n      (2/pi)*arcsin(sqrt(k/n))
 0    0.1762      0.1762   0.0000   0.0000
 2    0.0927      0.2689   0.1000   0.2048
 4    0.0736      0.3426   0.2000   0.2952
 6    0.0655      0.4080   0.3000   0.3690
 8    0.0617      0.4697   0.4000   0.4359
10    0.0606      0.5303   0.5000   0.5000
12    0.0617      0.5920   0.6000   0.5641
14    0.0655      0.6574   0.7000   0.6310
16    0.0736      0.7311   0.8000   0.7048
18    0.0927      0.8238   0.9000   0.7952
20    0.1762      1.0000   1.0000   1.0000

2n = 40:

2k    p_{2k,2n}   cdf      k/n      (2/pi)*arcsin(sqrt(k/n))
 0    0.1254      0.1254   0.0000   0.0000
 2    0.0643      0.1897   0.0500   0.1436
 4    0.0495      0.2392   0.1000   0.2048
 6    0.0424      0.2816   0.1500   0.2531
 8    0.0383      0.3199   0.2000   0.2952
10    0.0356      0.3555   0.2500   0.3333

Illustration 2

Illustration of the law of the iterated logarithm. Here are 2000 steps of a random walk with p = q = 1/2. In this figure, the random walk must return to cross the blue curves (\lambda = 0.9) infinitely many times, but may cross the red curve only finitely many times.

Reading Suggestion:

  1. W. Feller, An Introduction to Probability Theory and Its Applications, Volume I, J. Wiley and Sons, Chapters III and VIII.

Problems to Work for Understanding

Outside Readings/Links:
