Steven R. Dunbar
Department of Mathematics
203 Avery Hall
University of Nebraska-Lincoln
Lincoln, NE 68588-0130
http://www.math.unl.edu
Voice: 402-472-3731
Fax: 402-472-8466

Topics in
Probability Theory and Stochastic Processes
Steven R. Dunbar

__________________________________________________________________________

The de Moivre-Laplace Central Limit Theorem

_______________________________________________________________________

Note: To read these pages properly, you will need the latest version of the Mozilla Firefox browser, with the STIX fonts installed. In a few sections, you will also need the latest Java plug-in, and JavaScript must be enabled. If you use a browser other than Firefox, you should be able to access the pages and run the applets. However, mathematical expressions will probably not display correctly. Firefox is currently the only browser that supports all of the open standards.

_______________________________________________________________________________________________

Rating

Mathematicians Only: prolonged scenes of intense rigor.

_______________________________________________________________________________________________

Section Starter Question

What is the most important probability distribution? Why do you choose that distribution as most important?

_______________________________________________________________________________________________

Key Concepts

  1. The statement, proof and meaning of the de Moivre-Laplace Central Limit Theorem.

__________________________________________________________________________

Vocabulary

  1. The standard Gaussian density is
     \[ \phi(x) = \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}. \]
  2. The complementary cumulative distribution function \(\Phi_c(y)\) of the Gaussian density is
     \[ \Phi_c(y) = \frac{1}{\sqrt{2\pi}} \int_y^{\infty} e^{-x^2/2}\, dx. \]
     \(\Phi_c(y)\) measures the area under the upper tail of the Gaussian density.
  3. The complementary error function \(\operatorname{erfc}(x)\) is defined by
     \[ \operatorname{erfc}(x) = \frac{2}{\sqrt{\pi}} \int_x^{\infty} e^{-t^2}\, dt. \]
  4. The de Moivre-Laplace Central Limit Theorem is the statement that for \(a, b \in \mathbb{R} \cup \{\pm\infty\}\) with \(a < b\),
     \[ \lim_{n\to\infty} \mathbb{P}_n\!\left[ a \le \frac{S_n - np}{\sqrt{np(1-p)}} \le b \right] = \frac{1}{\sqrt{2\pi}} \int_a^b e^{-x^2/2}\, dx. \]

__________________________________________________________________________

Mathematical Ideas

History of the de Moivre-Laplace Central Limit Theorem

The first statement of what we now call the de Moivre-Laplace Central Limit Theorem appears in The Doctrine of Chances by Abraham de Moivre in 1738. He proved the result for p = 1/2. This finding was far ahead of its time and was nearly forgotten until the famous French mathematician Pierre-Simon Laplace rediscovered it. Laplace generalized the theorem to p ≠ 1/2 in Théorie Analytique des Probabilités, published in 1812. Gauss also contributed to the statement and proof of the general form of the theorem.

Laplace also discovered the more general form of the Central Limit Theorem, but his proof was not rigorous. As with de Moivre, Laplace’s finding received little attention in his own time. It was not until the end of the nineteenth century that the generality of the Central Limit Theorem was realized. The Russian mathematician Aleksandr Liapunov gave the first rigorous proof of the general Central Limit Theorem in 1901-1902. As a result, a general version of the Central Limit Theorem is occasionally referred to as Liapunov’s theorem. A theorem with weaker hypotheses but an equally strong conclusion is Lindeberg’s Theorem of 1922. It says that the random variables in the sequence need not be identically distributed; they only need zero means and individual variances small compared to their sum.

George Pólya first used the name “Central Limit Theorem” (in German: “zentraler Grenzwertsatz”) in 1920 in the title of a paper, [4].

Interpretation of the de Moivre-Laplace Central Limit Theorem

Theorem 1 (de Moivre-Laplace Central Limit Theorem). Let \(a, b \in \mathbb{R} \cup \{\pm\infty\}\) with a < b. Then

\[ \lim_{n\to\infty} \mathbb{P}_n\left[ a \le \frac{S_n - np}{\sqrt{np(1-p)}} \le b \right] = \frac{1}{\sqrt{2\pi}} \int_a^b e^{-x^2/2}\, dx \]

and the convergence is uniform in a and b.

If \(y \in \mathbb{R}\), \(n \in \mathbb{N}\) and \(0 < p < 1\), define

\[ k(y) = np + y\sqrt{np(1-p)}. \]

Then

\[ \lim_{n\to\infty} \sum_{0 \le j \le k(y)} \binom{n}{j} p^j (1-p)^{n-j} = \frac{1}{\sqrt{2\pi n p(1-p)}} \int_{-\infty}^{np + y\sqrt{np(1-p)}} e^{-(x-np)^2/(2np(1-p))}\, dx. \]

This is illustrated in Figure 1.



Figure 1: Comparison of the binomial distribution with n = 12, p = 4/10 with the normal distribution with mean np and variance np(1−p).
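The comparison in Figure 1 is easy to reproduce numerically. The following sketch (standard library only; the function names are mine, not from the text) computes the exact binomial point masses for n = 12, p = 0.4 and the matching normal density:

```python
import math

def binom_pmf(n, k, p):
    # Exact binomial point mass C(n, k) p^k (1 - p)^(n - k)
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def normal_pdf(x, mu, var):
    # Gaussian density with mean mu and variance var
    return math.exp(-(x - mu)**2 / (2 * var)) / math.sqrt(2 * math.pi * var)

n, p = 12, 0.4
mu, var = n * p, n * p * (1 - p)
# Largest gap between the histogram heights and the normal density values
gap = max(abs(binom_pmf(n, k, p) - normal_pdf(k, mu, var)) for k in range(n + 1))
```

Even for n as small as 12, the two curves agree to within about 0.01 at every integer k.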

Heuristic Proof of the de Moivre-Laplace Central Limit Theorem

We are interested in the natural random variation of \(S_n\) around its mean. From the Weak Law of Large Numbers, we know that \(\mathbb{P}_n\left[ |S_n/n - p| > \epsilon \right] \to 0\). From the Large Deviations result we also know that \(\mathbb{P}_n\left[ |S_n/n - p| > \epsilon \right] \le e^{-n h^{+}(\epsilon)} + e^{-n h^{-}(\epsilon)}\). Equivalently, \(S_n\) falls outside the range \(np(1 \pm \epsilon)\) with probability near 0. Finally, note that \(\mathbb{E}\left[ (S_n - np)^2 \right] = np(1-p)\). We ask, “How large a fluctuation or deviation of \(S_n\) from \(np\) should be surprising?” That is, we want a function \(\psi(n)\) with

\[ \lim_{n\to\infty} \mathbb{P}_n\left[ |S_n - np| > \psi(n) \right] = \alpha, \qquad 0 < \alpha < 1. \tag{1} \]

To measure the surprise of a fluctuation, we specify \(\alpha\) and then ask for the order of \(\psi(n)\) as a function of n. Small but fixed values of \(\alpha\) indicate large surprise, i.e. unlikely deviations, so we expect \(\psi(n)\) to grow, but more slowly than \(\epsilon n\).

Take p = 1/2 to simplify the calculations for the discovery-oriented proof in this subsection. We can make some useful guesses about \(\psi(n)\). Interpret the probability on the left in (1) as an area in the histogram of the binomial distribution of \(S_n\). From the expression of Wallis’ Formula for the central term in the binomial distribution, the maximum height of the histogram bars is of order \(1/\sqrt{n\pi}\); see Wallis’ Formula. That means that capturing a fixed area \(\alpha\) around that central term requires an interval of width at least a multiple of \(\sqrt{n}\). If we take \(\psi(n) = x_n\sqrt{n}/2\) (with the factor 1/2 put in to make variances cancel nicely), then we are looking for a sequence \(x_n\) which makes

\[ \lim_{n\to\infty} \mathbb{P}_n\left[ \left| S_n - \frac{n}{2} \right| > \frac{x_n\sqrt{n}}{2} \right] = \alpha \]

true for \(0 < \alpha < 1\). By Chebyshev’s Inequality, we can estimate this probability as

\[ \mathbb{P}_n\left[ \left| \frac{S_n}{n} - \frac{1}{2} \right| > \frac{x_n}{2\sqrt{n}} \right] \le \frac{1}{x_n^2}. \]

If \(\limsup_{n\to\infty} x_n = \infty\), the Chebyshev bound shows we could only obtain \(\alpha = 0\) along the corresponding subsequence, so \(x_n\) must be bounded above. If \(\liminf_{n\to\infty} x_n = 0\), then along some subsequence \(n_m\) the interval \(|S_{n_m} - n_m/2| \le x_{n_m}\sqrt{n_m}/2\) contains at most \(x_{n_m}\sqrt{n_m} + 1\) histogram bars, each of height at most of order \(1/\sqrt{n_m\pi}\), so

\[ \mathbb{P}_{n_m}\left[ \left| S_{n_m} - \frac{n_m}{2} \right| \le \frac{x_{n_m}\sqrt{n_m}}{2} \right] \le \left( x_{n_m}\sqrt{n_m} + 1 \right)\, O\!\left( \frac{1}{\sqrt{n_m}} \right) \to 0, \]

which contradicts the assumption \(\alpha < 1\). Hence \(x_n\) is also bounded below by a positive value. We guess that \(x_n = x\), a constant, so \(\psi(n) = x\sqrt{n}/2\) for all values of n.
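A quick numerical check supports the guess \(\psi(n) = x\sqrt{n}/2\). The sketch below (my own illustration, not from the text) computes the exact probability \(\mathbb{P}_n[|S_n - n/2| > x\sqrt{n}/2]\) for p = 1/2 at two values of n; the probability stabilizes near \(2\Phi_c(x)\) rather than tending to 0 or 1. Log-gamma is used for the binomial coefficients to avoid overflow:

```python
import math

def pmf_half(n, k):
    # Binomial point mass with p = 1/2, computed in logs to avoid overflow
    return math.exp(math.lgamma(n + 1) - math.lgamma(k + 1)
                    - math.lgamma(n - k + 1) - n * math.log(2))

def tail(n, x):
    # Exact P_n[ |S_n - n/2| > x sqrt(n)/2 ] for p = 1/2
    half_width = x * math.sqrt(n) / 2
    inside = sum(pmf_half(n, k) for k in range(n + 1)
                 if abs(k - n / 2) <= half_width)
    return 1 - inside

t1, t2 = tail(400, 1.0), tail(1600, 1.0)
# Both values hover near 2 * Phi_c(1), about 0.3, independent of n
```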

Breiman’s Proof of the de Moivre-Laplace Central Limit Theorem

To simplify the calculations, take the number of trials to be even, say 2n trials, and p = 1/2. Then the expression we want to evaluate and estimate is

\[ \mathbb{P}_{2n}\left[ |S_{2n} - n| < \frac{x\sqrt{2n}}{2} \right]. \]

This is evaluated as

\[ \sum_{|k - n| < x\sqrt{n/2}} 2^{-2n}\binom{2n}{k} = \sum_{|j| < x\sqrt{n/2}} 2^{-2n}\binom{2n}{n+j}. \]

Let \(P_n = 2^{-2n}\binom{2n}{n}\) be the central binomial term and write each binomial probability in terms of this central probability \(P_n\), specifically

\[ 2^{-2n}\binom{2n}{n+j} = P_n \cdot \frac{n(n-1)\cdots(n-j+1)}{(n+j)(n+j-1)\cdots(n+1)}. \]

Name the fractional factor above \(D_{j,n}\) and rewrite it as

\[ D_{j,n} = \frac{1}{(1 + j/n)(1 + j/(n-1))\cdots(1 + j/(n-j+1))} \]

and then

\[ \log(D_{j,n}) = -\sum_{k=0}^{j-1} \log\left( 1 + \frac{j}{n-k} \right). \]

Now use the common two-term asymptotic expansion for the logarithm function, \(\log(1+x) = x(1 + \epsilon_1(x))\). Note that

\[ \epsilon_1(x) = \frac{\log(1+x)}{x} - 1 = \sum_{k=2}^{\infty} \frac{(-1)^{k+1} x^{k-1}}{k}, \]

so \(-x/2 < \epsilon_1(x) < 0\) for \(x > 0\) and \(\lim_{x\to 0}\epsilon_1(x) = 0\). Then

\[ \log(D_{j,n}) = -\sum_{k=0}^{j-1} \frac{j}{n-k}\left( 1 + \epsilon_1\!\left( \frac{j}{n-k} \right) \right). \]

Let \(\epsilon_{1,j,n}\) be defined by

\[ \epsilon_{1,j,n} \sum_{k=0}^{j-1} \frac{j}{n-k} = \sum_{k=0}^{j-1} \frac{j}{n-k}\, \epsilon_1\!\left( \frac{j}{n-k} \right). \]

Then we can write

\[ \log(D_{j,n}) = -(1 + \epsilon_{1,j,n}) \sum_{k=0}^{j-1} \frac{j}{n-k}. \]

Note that j is restricted to the range \(|j| < x\sqrt{n/2}\), so

\[ \left| \frac{j}{n-k} \right| \le \frac{x\sqrt{n/2}}{n - x\sqrt{n/2}} = \frac{x}{\sqrt{2n} - x} \]

and then

\[ |\epsilon_{1,j,n}| \le \max_{|j| < x\sqrt{n/2},\ 0 \le k < |j|} \left| \epsilon_1\!\left( \frac{j}{n-k} \right) \right| \to 0 \quad \text{as } n \to \infty. \]

Write

\[ \frac{j}{n-k} = \frac{j}{n} \cdot \frac{1}{1 - k/n} \]

and then expand \(\frac{1}{1 - k/n} = 1 + \epsilon_2(k/n)\), where \(\epsilon_2(x) = \frac{1}{1-x} - 1 = \sum_{k=1}^{\infty} x^k\), so \(\epsilon_2(x) \to 0\) as \(x \to 0\). Once again k is restricted to the range \(|k| \le |j| < x\sqrt{n/2}\), so

\[ \left| \frac{k}{n} \right| < \frac{x\sqrt{n/2}}{n} = \frac{x}{\sqrt{2n}}, \]

so that

\[ \epsilon_{2,j,n} = \max_{|k| < x\sqrt{n/2}} \left| \epsilon_2\!\left( \frac{k}{n} \right) \right| \to 0 \quad \text{as } n \to \infty. \]

Then we can write

\[ \log(D_{j,n}) = -(1 + \epsilon_{1,j,n})(1 + \epsilon_{2,j,n}) \sum_{k=0}^{j-1} \frac{j}{n}. \]

Simplify this to

\[ \log(D_{j,n}) = -(1 + \epsilon_{3,j,n}) \sum_{k=0}^{j-1} \frac{j}{n} = -(1 + \epsilon_{3,j,n})\, \frac{j^2}{n}, \]

where \(\epsilon_{3,j,n} = \epsilon_{1,j,n} + \epsilon_{2,j,n} + \epsilon_{1,j,n}\epsilon_{2,j,n}\). Therefore \(\epsilon_{3,j,n} \to 0\) as \(n \to \infty\), uniformly in j. Exponentiating,

\[ D_{j,n} = e^{-j^2/n}(1 + \Delta_{j,n}), \]

where \(\Delta_{j,n} \to 0\) as \(n \to \infty\), uniformly in j.

Using Stirling’s Formula,

\[ P_n = 2^{-2n}\, \frac{(2n)!}{n!\, n!} = \frac{1}{\sqrt{n\pi}}(1 + \delta_n), \]

where \(\delta_n \to 0\) as \(n \to \infty\).

Summarizing,

\[ \begin{aligned} \mathbb{P}_{2n}\left[ |S_{2n} - n| < \frac{x\sqrt{2n}}{2} \right] &= \sum_{|j| < x\sqrt{n/2}} 2^{-2n}\binom{2n}{n+j} = \sum_{|j| < x\sqrt{n/2}} P_n\, D_{j,n} \\ &= \sum_{|j| < x\sqrt{n/2}} P_n\, e^{-j^2/n}(1 + \Delta_{j,n}) \\ &= (1 + \tilde{\delta}_n) \sum_{|j| < x\sqrt{n/2}} \frac{1}{\sqrt{2\pi}}\, e^{-j^2/n}\, \sqrt{\frac{2}{n}}, \end{aligned} \]

where \(\tilde{\delta}_n \to 0\) as \(n \to \infty\) collects the errors from \(\delta_n\) and the uniform \(\Delta_{j,n}\).

Make the change of variables \(t_j = j\sqrt{2/n}\), \(\Delta t = t_{j+1} - t_j = \sqrt{2/n}\), so the summation is over the range \(-x < t_j < x\). Then

\[ \mathbb{P}_{2n}\left[ |S_{2n} - n| < \frac{x\sqrt{2n}}{2} \right] = (1 + \tilde{\delta}_n) \sum_{-x < t_j < x} \frac{1}{\sqrt{2\pi}}\, e^{-t_j^2/2}\, \Delta t. \]

The sum on the right is an approximating sum for the integral of the standard normal density over the interval \([-x, x]\). Therefore,

\[ \lim_{n\to\infty} \mathbb{P}_{2n}\left[ |S_{2n} - n| < \frac{x\sqrt{2n}}{2} \right] = \frac{1}{\sqrt{2\pi}} \int_{-x}^{x} e^{-t^2/2}\, dt. \]
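The limit just derived can be checked directly. This sketch (an illustration of mine, not part of the text) sums the exact binomial probabilities for 2n = 2000 and compares them with \((1/\sqrt{2\pi})\int_{-x}^{x} e^{-t^2/2}\, dt = \operatorname{erf}(x/\sqrt{2})\):

```python
import math

def pmf_half(n, k):
    # Binomial point mass with p = 1/2, in logs to avoid overflow for large n
    return math.exp(math.lgamma(n + 1) - math.lgamma(k + 1)
                    - math.lgamma(n - k + 1) - n * math.log(2))

two_n, x = 2000, 1.5
n = two_n // 2
# Exact P_{2n}[ |S_{2n} - n| < x sqrt(2n)/2 ]
prob = sum(pmf_half(two_n, k) for k in range(two_n + 1)
           if abs(k - n) < x * math.sqrt(two_n) / 2)
# (1/sqrt(2*pi)) * integral_{-x}^{x} e^{-t^2/2} dt
limit = math.erf(x / math.sqrt(2))
```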

Formal Proof of the de Moivre-Laplace Theorem

Theorem 2 (de Moivre-Laplace Central Limit Theorem). Let \(a, b \in \mathbb{R} \cup \{\pm\infty\}\) with a < b. Then

\[ \lim_{n\to\infty} \mathbb{P}_n\left[ a \le \frac{S_n - np}{\sqrt{np(1-p)}} \le b \right] = \frac{1}{\sqrt{2\pi}} \int_a^b e^{-x^2/2}\, dx \]

and the convergence is uniform in a and b.

For the proof of the de Moivre-Laplace Central Limit Theorem, we need several lemmas.

Lemma 3.

\[ \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-x^2/2}\, dx = 1. \]

See the proofs in Gaussian Density.

Definition. Define the complementary cumulative distribution function (ccdf) as \(\Phi_c(y) = \frac{1}{\sqrt{2\pi}} \int_y^{\infty} e^{-x^2/2}\, dx\). The ccdf \(\Phi_c(y)\) measures the area under the upper tail of the standard Gaussian density, while \(\Phi(y)\) is the cumulative distribution function of the standard Gaussian density.

Definition. Some mathematicians and engineers use alternative accumulation functions for a scaled Gaussian density, the error function \(\operatorname{erf}(x)\) and the complementary error function \(\operatorname{erfc}(x)\). Define \(\operatorname{erfc}(y) := \frac{2}{\sqrt{\pi}} \int_y^{\infty} e^{-t^2}\, dt\) and \(\operatorname{erf}(y) := 1 - \operatorname{erfc}(y)\). Note that \(\Phi_c(y) = \frac{1}{2}\operatorname{erfc}(y/\sqrt{2})\).

Lemma 4.

\[ \frac{1}{\sqrt{2\pi}(y+1)}\left( e^{-y^2/2} - e^{-(y+1)^2/2} \right) \le \Phi_c(y) \le \frac{1}{\sqrt{2\pi}\, y}\, e^{-y^2/2}. \]

Proof. For the lower bound on \(\Phi_c(y)\):

\[ \frac{1}{\sqrt{2\pi}} \int_y^{\infty} e^{-t^2/2}\, dt \ge \frac{1}{\sqrt{2\pi}} \int_y^{y+1} e^{-t^2/2}\, dt = \frac{1}{\sqrt{2\pi}} \int_y^{y+1} \frac{t e^{-t^2/2}}{t}\, dt \ge \frac{1}{\sqrt{2\pi}} \int_y^{y+1} \frac{t e^{-t^2/2}}{y+1}\, dt = \frac{1}{\sqrt{2\pi}(y+1)} \left[ -e^{-t^2/2} \right]_y^{y+1} = \frac{1}{\sqrt{2\pi}(y+1)}\left( e^{-y^2/2} - e^{-(y+1)^2/2} \right). \]

For the upper bound on \(\Phi_c(y)\):

\[ \frac{1}{\sqrt{2\pi}} \int_y^{\infty} e^{-t^2/2}\, dt = \frac{1}{\sqrt{2\pi}} \int_y^{\infty} \frac{t e^{-t^2/2}}{t}\, dt \le \frac{1}{\sqrt{2\pi}} \int_y^{\infty} \frac{t e^{-t^2/2}}{y}\, dt = \frac{1}{\sqrt{2\pi}\, y} \left[ -e^{-t^2/2} \right]_y^{\infty} = \frac{1}{\sqrt{2\pi}\, y}\, e^{-y^2/2}. \quad \square \]
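These bounds are easy to verify numerically, using the identity \(\Phi_c(y) = \frac{1}{2}\operatorname{erfc}(y/\sqrt{2})\) from the definitions above (the check itself is my own, not from the text):

```python
import math

SQRT_2PI = math.sqrt(2 * math.pi)

def phi_c(y):
    # Upper tail of the standard Gaussian: Phi_c(y) = (1/2) erfc(y / sqrt(2))
    return 0.5 * math.erfc(y / math.sqrt(2))

def lower(y):
    # Lower bound of Lemma 4
    return (math.exp(-y**2 / 2) - math.exp(-(y + 1)**2 / 2)) / (SQRT_2PI * (y + 1))

def upper(y):
    # Upper bound of Lemma 4
    return math.exp(-y**2 / 2) / (SQRT_2PI * y)

checks = all(lower(y) <= phi_c(y) <= upper(y) for y in (0.5, 1, 2, 3, 5))
```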

Lemma 5. \(\Phi_c(y) \sim \frac{1}{\sqrt{2\pi}\, y}\, e^{-y^2/2}\) as \(y \to \infty\).

Proof. Using Lemma 4,

\[ \frac{y}{y+1}\left( 1 - \frac{e^{-(y+1)^2/2}}{e^{-y^2/2}} \right) \le \frac{\Phi_c(y)}{\frac{1}{\sqrt{2\pi}\, y}\, e^{-y^2/2}} \le 1. \]

Since

\[ \frac{e^{-(y+1)^2/2}}{e^{-y^2/2}} = e^{-(2y+1)/2} \to 0 \quad \text{as } y \to \infty, \]

the ratio is squeezed to 1, so

\[ \Phi_c(y) \sim \frac{1}{\sqrt{2\pi}\, y}\, e^{-y^2/2}. \quad \square \]

Alternative Proof.

\[ \begin{aligned} \Phi_c(y) &= \frac{1}{\sqrt{2\pi}} \int_y^{\infty} e^{-x^2/2}\, dx = \frac{1}{\sqrt{2\pi}} \int_y^{\infty} \frac{x e^{-x^2/2}}{x}\, dx \\ &= \frac{1}{\sqrt{2\pi}}\left[ \frac{e^{-y^2/2}}{y} - \int_y^{\infty} \frac{e^{-x^2/2}}{x^2}\, dx \right] && \text{integration by parts} \\ &= \frac{1}{\sqrt{2\pi}}\left[ \frac{e^{-y^2/2}}{y} - \int_y^{\infty} \frac{x e^{-x^2/2}}{x^3}\, dx \right] \\ &= \frac{1}{\sqrt{2\pi}}\left[ \frac{e^{-y^2/2}}{y} - \frac{e^{-y^2/2}}{y^3} + 3 \int_y^{\infty} \frac{e^{-x^2/2}}{x^4}\, dx \right] && \text{again integrating by parts.} \end{aligned} \]

Continuing in this way, we obtain the asymptotic series

\[ \Phi_c(y) = \frac{e^{-y^2/2}}{\sqrt{2\pi}\, y}\left( 1 - \frac{1}{y^2} + \frac{3}{y^4} - \frac{15}{y^6} + \frac{105}{y^8} - \cdots \right). \quad \square \]
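The first few terms of this series already give excellent accuracy in the far tail. A numerical check of my own, for y = 5:

```python
import math

def phi_c(y):
    # Upper tail of the standard Gaussian via the complementary error function
    return 0.5 * math.erfc(y / math.sqrt(2))

def series(y):
    # First five terms of the asymptotic series for Phi_c(y)
    factor = 1 - 1 / y**2 + 3 / y**4 - 15 / y**6 + 105 / y**8
    return math.exp(-y**2 / 2) / (math.sqrt(2 * math.pi) * y) * factor

y = 5.0
rel_err = abs(series(y) / phi_c(y) - 1)  # relative error of the truncated series
```

Since the series is alternating, the truncation error is roughly the first omitted term, here about \(945/y^{10} \approx 10^{-4}\) in relative terms.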

Lemma 6. Suppose that

  1. {f_n} is a sequence of monotone functions f_n : ℝ → [0, 1],
  2. {f_n} converges pointwise to f,
  3. f : ℝ → ℝ with lim_{x→−∞} f(x) = 0 and lim_{x→∞} f(x) = 1 (in the monotone increasing case; the decreasing case is symmetric), and
  4. f is continuous.

Then the convergence is uniform.

Proof.

  1. Let ϵ > 0 be given.
  2. Without loss of generality, each f_n is monotone increasing with lim_{x→−∞} f_n(x) = 0 and lim_{x→∞} f_n(x) = 1, so that lim_{x→−∞} f(x) = 0 and lim_{x→∞} f(x) = 1.
  3. There exists x so that for all z ≥ x, |f(z) − 1| < ϵ. Let N₁ be such that n ≥ N₁ implies |f_n(x) − f(x)| < ϵ, and so |f_n(x) − 1| < 2ϵ. Since f_n is monotone, for all z ≥ x, |f_n(z) − 1| ≤ |f_n(x) − 1| < 2ϵ. Thus, for all z ≥ x and n ≥ N₁,
     |f_n(z) − f(z)| ≤ |f_n(z) − 1| + |f(z) − 1| < 3ϵ.
  4. Similarly, there exist y and N₂ so that for all z ≤ y and all n ≥ N₂,
     |f_n(z) − f(z)| ≤ |f_n(z)| + |f(z)| < 3ϵ.
  5. Since [y, x] is compact, f is uniformly continuous on [y, x]; let δ > 0 be a modulus of continuity for ϵ on [y, x]. Choose R so large that (x − y)/R < δ. The points a_j = y + j(x − y)/R, j = 0, 1, …, R, partition [y, x] into subintervals [a_{j−1}, a_j], each of diameter less than δ.
  6. There exists m_j so that for all n ≥ m_j, |f_n(a_{j−1}) − f(a_{j−1})| < ϵ, and there exists m̂_j so that for all n ≥ m̂_j, |f_n(a_j) − f(a_j)| < ϵ.
  7. Choose n ≥ M_j := max{m_j, m̂_j} and pick z ∈ [a_{j−1}, a_j]. Note that
     f_n(a_{j−1}) ≤ f_n(z) ≤ f_n(a_j), which implies f(a_{j−1}) − ϵ ≤ f_n(z) ≤ f(a_j) + ϵ.
     Since |f(a_j) − f(a_{j−1})| < ϵ, it follows that f(a_j) − 2ϵ ≤ f_n(z) ≤ f(a_j) + ϵ. Thus,
     |f(z) − f_n(z)| ≤ |f(z) − f(a_j)| + |f_n(z) − f(a_j)| < ϵ + 2ϵ = 3ϵ.
  8. Choose N := max{N₁, N₂, M₁, …, M_R}. Then for n > N and for all z, |f(z) − f_n(z)| ≤ 3ϵ. □

Lemma 7. The convergence in the de Moivre-Laplace Central Limit Theorem is uniform in both a and b.

Proof. Let \(f_n(b) = \mathbb{P}_n\left[ \frac{S_n - np}{\sqrt{np(1-p)}} \le b \right]\) and \(f(b) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{b} e^{-x^2/2}\, dx\). Then \(f_n\) is monotone increasing and \(f_n : \mathbb{R} \to [0,1]\). By the de Moivre-Laplace Central Limit Theorem, to be proved below, \(\{f_n\}\) converges pointwise to f. By Lemma 3, \(\lim_{b\to-\infty} f(b) = 0\) and \(\lim_{b\to\infty} f(b) = 1\). Finally, f is continuous. Therefore, the convergence is uniform by Lemma 6. □

We need the following statement of Stirling’s Formula:

Lemma 8 (Stirling’s Formula). For each n > 0, define \(\epsilon_n\) by

\[ n! = \sqrt{2\pi}\, n^{n+1/2} e^{-n} (1 + \epsilon_n). \]

Then there exists a real constant A so that \(|\epsilon_n| < A/n\).

Proof. From Theorem 1 in Stirling’s Formula from the Sum of Average Differences, we know that

\[ n! = \sqrt{2\pi}\, n^{n+1/2} e^{-n} (1 + \epsilon_n) \]

with \(|\epsilon_n| < A/n\) for some real constant A.

From Theorem 13 in Stirling’s Formula Derived from the Gamma Function, we know that for n ≥ 2,

\[ n! \approx \sqrt{2\pi}\, n^{n+1/2} e^{-n} \left( 1 + \frac{1}{12n} + \frac{1}{288n^2} + \frac{1}{9940n^3} \right). \]

Therefore, we can take

\[ \epsilon_n < \frac{1}{n}\left( \frac{1}{12} + \frac{1}{288n} + \frac{1}{9940n^2} \right), \]

and A can be taken as an upper bound on \(\frac{1}{12} + \frac{1}{288n} + \frac{1}{9940n^2}\).

From the last conclusion cited there, we know that

\[ \sqrt{2\pi}\, n^{n+1/2} e^{-n} < n! < \sqrt{2\pi}\, n^{n+1/2} e^{-n + 1/(12n)}. \]

From Corollary 1 in Stirling’s Formula by Euler-Maclaurin Summation, we know that

\[ \sqrt{2\pi}\, n^{n+1/2} e^{-n} < n! < \sqrt{2\pi}\, n^{n+1/2} e^{-n + 1/(12(n - 1/2))}. \]

Any of these error estimates on Stirling’s Formula is sufficient to establish the conclusion of the Lemma. □
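The bound \(|\epsilon_n| < A/n\) is visible numerically: \(n\,\epsilon_n\) stays between 1/12 and about 0.085 and decreases toward 1/12. A small check of my own, using exact factorials:

```python
import math

def stirling_eps(n):
    # eps_n defined by n! = sqrt(2 pi) n^(n + 1/2) e^(-n) (1 + eps_n)
    approx = math.sqrt(2 * math.pi) * n**(n + 0.5) * math.exp(-n)
    return math.factorial(n) / approx - 1

# n * eps_n decreases toward 1/12, consistent with |eps_n| < A/n
scaled = [n * stirling_eps(n) for n in range(1, 31)]
```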

Definition. Let \((s_{n,k})_{n > 0,\, k \in I_n}\) and \((t_n)_{n > 0}\), with \(t_n > 0\), be two sets of real numbers. Then we say \(s_{n,k} = O_u(t_n)\) if there is a constant c such that \(|s_{n,k}| \le c\, t_n\) for all n and all \(k \in I_n\). Here, \(O_u\) means big-O uniformly (in k).

Lemma 9 (de Moivre-Laplace Binomial Point Mass Limit).

\[ \binom{n}{k} p^k (1-p)^{n-k} = \frac{1}{\sqrt{2\pi n p(1-p)}}\, e^{-(k-np)^2/(2np(1-p))} \left( 1 + \delta_n(k) \right), \]

where for every a > 0,

\[ \lim_{n\to\infty} \max_{|k - np| < a\sqrt{n}} |\delta_n(k)| = 0. \]

Proof.

  1. Set \(I_n := \{ k : np - a\sqrt{n} < k < np + a\sqrt{n} \}\). Then the max in the statement of the lemma is taken over \(I_n\).
  2. Using Stirling’s Formula,
     \[ \binom{n}{k} p^k (1-p)^{n-k} = \frac{n!}{k!\,(n-k)!}\, p^k (1-p)^{n-k} = \frac{1}{\sqrt{2\pi}} \sqrt{\frac{n}{k(n-k)}} \left( \frac{np}{k} \right)^{k} \left( \frac{n(1-p)}{n-k} \right)^{n-k} \frac{1 + \epsilon_n}{(1 + \epsilon_k)(1 + \epsilon_{n-k})}. \]

  3. For \(k \in I_n\), we have
     \[ \frac{n}{(np + a\sqrt{n})(n(1-p) + a\sqrt{n})} \le \frac{n}{k(n-k)} \le \frac{n}{(np - a\sqrt{n})(n(1-p) - a\sqrt{n})}. \tag{2} \]
     This inequality is established in an exercise at the end of the section.

  4. Then
     \[ \frac{1}{np(1-p)} \cdot \frac{1}{\left( 1 + \frac{a}{p}\frac{1}{\sqrt{n}} \right)\left( 1 + \frac{a}{1-p}\frac{1}{\sqrt{n}} \right)} \le \frac{n}{k(n-k)} \le \frac{1}{np(1-p)} \cdot \frac{1}{\left( 1 - \frac{a}{p}\frac{1}{\sqrt{n}} \right)\left( 1 - \frac{a}{1-p}\frac{1}{\sqrt{n}} \right)}, \]
     so
     \[ \frac{1}{np(1-p)}\left( 1 - \frac{a}{p}\frac{1}{\sqrt{n}} + O\!\left( \frac{1}{n} \right) \right)\left( 1 - \frac{a}{1-p}\frac{1}{\sqrt{n}} + O\!\left( \frac{1}{n} \right) \right) \le \frac{n}{k(n-k)} \le \frac{1}{np(1-p)}\left( 1 + \frac{a}{p}\frac{1}{\sqrt{n}} + O\!\left( \frac{1}{n} \right) \right)\left( 1 + \frac{a}{1-p}\frac{1}{\sqrt{n}} + O\!\left( \frac{1}{n} \right) \right). \]
     Thus, for some constant c,
     \[ \frac{1}{np(1-p)}\left( 1 - \frac{c}{\sqrt{n}} + O\!\left( \frac{1}{n} \right) \right) \le \frac{n}{k(n-k)} \le \frac{1}{np(1-p)}\left( 1 + \frac{c}{\sqrt{n}} + O\!\left( \frac{1}{n} \right) \right). \]
  5. Summarizing,
     \[ \sqrt{\frac{n}{k(n-k)}} = \frac{1}{\sqrt{np(1-p)}}\left( 1 + O_u\!\left( \frac{1}{\sqrt{n}} \right) \right), \]
     because \(\sqrt{1 + h} = 1 + \frac{1}{2}h + \cdots\).

  6. This establishes the square root factor in the de Moivre-Laplace Binomial Point Mass Limit. Next is the approximation of the \(p^k(1-p)^{n-k}\) factors as an exponential.
  7. Recall the series expansions and inequalities
     1. \(\ln(1+t) = t - \frac{t^2}{2} + O(t^3)\),
     2. for \(k \in I_n\),
        \[ -\frac{a\sqrt{n}}{np - a\sqrt{n}} \le \frac{k - np}{k} \le \frac{a\sqrt{n}}{np + a\sqrt{n}}, \]
     3. for \(k \in I_n\), \(\frac{k - np}{k} = O_u(n^{-1/2})\),
     4. for \(k \in I_n\), \(\frac{k - np}{n - k} = O_u(n^{-1/2})\).

     These expansions are established in the exercises at the end of the section.

  8. For \(k \in I_n\),
     \[ \begin{aligned} \ln\left[ \left( \frac{np}{k} \right)^{k} \left( \frac{n(1-p)}{n-k} \right)^{n-k} \right] &= k \ln\left( 1 - \frac{k - np}{k} \right) + (n-k) \ln\left( 1 + \frac{k - np}{n-k} \right) \\ &= -\frac{1}{2}(k - np)^2 \left( \frac{1}{k} + \frac{1}{n-k} \right) + k\, O_u(n^{-3/2}) + (n-k)\, O_u(n^{-3/2}) \\ &= -\frac{1}{2}(k - np)^2\, \frac{1}{np(1-p)} + O_u(n^{-1/2}). \end{aligned} \]
  9. Therefore,
     \[ \left( \frac{np}{k} \right)^{k} \left( \frac{n(1-p)}{n-k} \right)^{n-k} = \exp\left( -\frac{(k - np)^2}{2np(1-p)} \right)\left( 1 + O_u(n^{-1/2}) \right). \]

  10. Recall that for \(k \in I_n\):
      1. \(|\epsilon_n| < A/n\),
      2. \(|\epsilon_k| < A/k\),
      3. \(|\epsilon_{n-k}| < A/(n-k)\),
      4. \(1/k = O_u(1/n)\),
      5. \(1/(n-k) = O_u(1/n)\),

      so that

      \[ \frac{1 + \epsilon_n}{(1 + \epsilon_k)(1 + \epsilon_{n-k})} = 1 + O_u\!\left( \frac{1}{n} \right). \]

  11. Now combining the estimates of steps 5, 9, and 10, we get

      \[ \binom{n}{k} p^k (1-p)^{n-k} = \frac{1}{\sqrt{2\pi p(1-p) n}} \exp\left( -\frac{(k - np)^2}{2np(1-p)} \right)\left( 1 + O_u(n^{-1/2}) \right). \quad \square \]
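The lemma can be checked numerically for moderate n. The sketch below (mine, standard library only) compares the exact binomial point masses with the Gaussian approximation for n = 1000, p = 0.3, over the window |k − np| < √n, that is, a = 1:

```python
import math

def binom_pmf(n, k, p):
    # Binomial point mass computed in logs to avoid overflow for large n
    log_pmf = (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
               + k * math.log(p) + (n - k) * math.log(1 - p))
    return math.exp(log_pmf)

def local_limit(n, k, p):
    # Gaussian approximation from the Binomial Point Mass Limit
    var = n * p * (1 - p)
    return math.exp(-(k - n * p)**2 / (2 * var)) / math.sqrt(2 * math.pi * var)

n, p, a = 1000, 0.3, 1.0
window = [k for k in range(n + 1) if abs(k - n * p) < a * math.sqrt(n)]
# delta_n(k), the relative error of the approximation, over the window
max_delta = max(abs(binom_pmf(n, k, p) / local_limit(n, k, p) - 1) for k in window)
```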

Lemma 10. Let [a, b] be an interval of ℝ and let f be a function defined on ℝ that is zero outside of [a, b] and continuous on [a, b]. Then for any \(t \in \mathbb{R}\),

\[ \lim_{h \to 0,\; h > 0}\; h \sum_{k=-\infty}^{\infty} f(t + kh) = \int_a^b f(x)\, dx. \]

Remark. This lemma is a generalization and extension of the definition of Riemann integration.

Proof. The result follows from the uniform continuity of the function f on the interval [a, b]. Let ϵ > 0 be given, and then choose h small enough that |f(x) − f(y)| < ϵ whenever \(x, y \in [a, b]\) and |x − y| < h. Let

\[ \{ k \mid a \le t + kh \le b \} = \{ i, i+1, i+2, \ldots, j \} \]

and \(M = \sup_{a \le x \le b} |f(x)|\). Then with the Triangle Inequality we have that

\[ \left| h \sum_{k:\; t + kh \in [a,b]} f(t + kh) - \int_a^b f(x)\, dx \right| \le hM + \sum_{k=i}^{j-1} \left| h f(t + kh) - \int_{t+kh}^{t+(k+1)h} f(x)\, dx \right| + 2hM \le 3hM + (j - i)h\epsilon \le 3hM + (b - a)\epsilon. \]

The term hM at the beginning comes from the term \(h f(t + jh)\), which does not occur in the sum. The term 2hM at the end comes from the two leftover integral portions \(\int_a^{t+ih} f(x)\, dx\) and \(\int_{t+jh}^{b} f(x)\, dx\). □
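Lemma 10 is a shifted Riemann-sum statement, and it behaves as advertised numerically. In this small check of mine, f(x) = cos(x) on [0, π/2] and zero elsewhere, so the integral is exactly 1:

```python
import math

a, b = 0.0, math.pi / 2
f = lambda x: math.cos(x) if a <= x <= b else 0.0  # zero outside [a, b]

def shifted_sum(t, h):
    # h * sum over all k with t + k h in [a, b] (f vanishes elsewhere)
    k_lo = math.ceil((a - t) / h)
    k_hi = math.floor((b - t) / h)
    return h * sum(f(t + k * h) for k in range(k_lo, k_hi + 1))

approx = shifted_sum(t=0.1, h=0.001)   # close to the integral, which is 1
```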

Now we can complete the proof of the de Moivre-Laplace Central Limit Theorem.

Proof (Completion of the proof of the Central Limit Theorem). The proof shows that a sum of binomial point masses can be expressed in the form of Lemma 10.

Case 1: a and b are finite real numbers.

Let \(K_n\) be the interval

\[ \left[ a\sqrt{np(1-p)},\; b\sqrt{np(1-p)} \right]. \]

  1. We start with
     \[ \mathbb{P}_n\left[ S_n - np \in K_n \right] = \sum_{k=0}^{n} 1_{K_n}(k - np)\, \mathbb{P}_n\left[ S_n = k \right] = \frac{1}{\sqrt{2\pi n p(1-p)}} \sum_{k=0}^{n} 1_{K_n}(k - np) \exp\left( -\frac{(k - np)^2}{2np(1-p)} \right)(1 + \delta_n(k)), \]

     where \(\lim_{n\to\infty} \max_k |\delta_n(k)| = 0\) by the de Moivre-Laplace Binomial Point Mass Limit.

  2. Then
     \[ \mathbb{P}_n\left[ S_n - np \in K_n \right] = \frac{1 + \delta_n}{\sqrt{2\pi n p(1-p)}} \sum_{k=0}^{n} 1_{K_n}(k - np) \exp\left( -\frac{(k - np)^2}{2np(1-p)} \right), \]

     where \(\lim_{n\to\infty} \delta_n = 0\).

  3. When n is large enough, the sum \(\sum_{k=0}^{n}\) in the previous expression may be replaced by a sum \(\sum_{k \in \mathbb{Z}}\), so that it becomes
     \[ \frac{1}{\sqrt{2\pi}}\, \frac{1 + \delta_n}{\sqrt{np(1-p)}} \sum_{k} 1_{[a,b]}\!\left( \frac{k - np}{\sqrt{np(1-p)}} \right) \exp\left( -\frac{1}{2}\left( \frac{k - np}{\sqrt{np(1-p)}} \right)^{2} \right). \tag{3} \]

  4. Expanding over the numerator of the fraction \(\frac{1 + \delta_n}{\sqrt{np(1-p)}}\), there is a term
     \[ \frac{1}{\sqrt{2\pi}}\, \frac{1}{\sqrt{np(1-p)}} \sum_{k} 1_{[a,b]}\!\left( \frac{k - np}{\sqrt{np(1-p)}} \right) \exp\left( -\frac{1}{2}\left( \frac{k - np}{\sqrt{np(1-p)}} \right)^{2} \right), \]

     which converges to a finite value, as shown in the next step. Then the term

     \[ \frac{1}{\sqrt{2\pi}}\, \frac{\delta_n}{\sqrt{np(1-p)}} \sum_{k} 1_{[a,b]}\!\left( \frac{k - np}{\sqrt{np(1-p)}} \right) \exp\left( -\frac{1}{2}\left( \frac{k - np}{\sqrt{np(1-p)}} \right)^{2} \right) \]

     converges to 0 and can be dropped.

  5. Set \(h = 1/\sqrt{np(1-p)}\), \(t = -np/\sqrt{np(1-p)}\), and \(f(x) = \frac{1}{\sqrt{2\pi}} e^{-x^2/2}\, 1_{[a,b]}(x)\). Then the expression in equation (3) has the form of the limit in Lemma 10. Therefore, the expression in equation (3) approaches
     \[ \frac{1}{\sqrt{2\pi}} \int_a^b e^{-x^2/2}\, dx, \]

     proving the de Moivre-Laplace Central Limit Theorem for finite values of a and b.

Case 2: a = −∞ and b is a finite real number. The proof must show that

\[ \mathbb{P}_n\left[ S_n - np \le b\sqrt{np(1-p)} \right] \to \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{b} e^{-x^2/2}\, dx. \]

  1. Let \(b \in \mathbb{R}\) and ϵ > 0. Fix \(c > \max(0, -b)\) large enough that
     \[ \frac{1}{\sqrt{2\pi}} \int_{c}^{\infty} e^{-x^2/2}\, dx < \epsilon. \]

  2. Then by symmetry
     \[ \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{-c} e^{-x^2/2}\, dx < \epsilon \]

     and

     \[ \frac{1}{\sqrt{2\pi}} \int_{-c}^{c} e^{-x^2/2}\, dx > 1 - 2\epsilon. \]

  3. Write
     \[ \left| \mathbb{P}_n\left[ S_n - np \le b\sqrt{np(1-p)} \right] - \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{b} e^{-x^2/2}\, dx \right| \le A_n + B_n + C, \]

     where

     \[ A_n = \mathbb{P}_n\left[ S_n - np \le -c\sqrt{np(1-p)} \right], \]
     \[ B_n = \left| \mathbb{P}_n\left[ -c \le \frac{S_n - np}{\sqrt{np(1-p)}} \le b \right] - \frac{1}{\sqrt{2\pi}} \int_{-c}^{b} e^{-x^2/2}\, dx \right|, \]
     \[ C = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{-c} e^{-x^2/2}\, dx. \]
  4. We have that
     \[ 0 \le A_n \le 1 - \mathbb{P}_n\left[ -c \le \frac{S_n - np}{\sqrt{np(1-p)}} \le c \right]. \]

  5. As in Case 1,
     \[ \lim_{n\to\infty} \mathbb{P}_n\left[ -c \le \frac{S_n - np}{\sqrt{np(1-p)}} \le c \right] = \frac{1}{\sqrt{2\pi}} \int_{-c}^{c} e^{-x^2/2}\, dx > 1 - 2\epsilon. \]

  6. This shows that \(A_n \le 2\epsilon\) for large enough n. By Case 1, \(\lim_{n\to\infty} B_n = 0\), and \(C < \epsilon\). This finishes Case 2.

Case 3: a is a finite real number and b = ∞. This case is similar to Case 2. □
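The completed theorem can be tested for a general p by summing exact binomial probabilities. The sketch below (mine, not from the text) takes n = 2000, p = 0.3, a = −1, b = 2:

```python
import math

def binom_pmf(n, k, p):
    # Binomial point mass computed in logs to avoid overflow for large n
    log_pmf = (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
               + k * math.log(p) + (n - k) * math.log(1 - p))
    return math.exp(log_pmf)

def Phi(x):
    # Standard Gaussian cumulative distribution function
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

n, p, a, b = 2000, 0.3, -1.0, 2.0
sd = math.sqrt(n * p * (1 - p))
# Exact P_n[ a <= (S_n - n p)/sqrt(n p (1-p)) <= b ]
prob = sum(binom_pmf(n, k, p) for k in range(n + 1)
           if a <= (k - n * p) / sd <= b)
limit = Phi(b) - Phi(a)   # about 0.8186
```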

Practical Applications

Weak Law

Corollary 1. The Weak Law of Large Numbers is a direct consequence of the Central Limit Theorem. That is, we get directly that

\[ \lim_{n\to\infty} \mathbb{P}_n\left[ \left| \frac{S_n}{n} - p \right| \ge \epsilon \right] = 0. \]

Actually a stronger statement is possible:

Corollary 2. Let \(u_n\) be a sequence such that \(\lim_{n\to\infty} u_n/\sqrt{n} = \infty\). Then

\[ \lim_{n\to\infty} \mathbb{P}_n\left[ |S_n - np| \ge u_n \right] = 0. \]
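For example, \(u_n = n^{3/4}\) satisfies \(u_n/\sqrt{n} = n^{1/4} \to \infty\), and the exact binomial tails shrink rapidly. A check of my own:

```python
import math

def binom_pmf(n, k, p):
    # Binomial point mass computed in logs to avoid overflow
    log_pmf = (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
               + k * math.log(p) + (n - k) * math.log(1 - p))
    return math.exp(log_pmf)

def big_dev(n, p):
    # Exact P_n[ |S_n - n p| >= n^(3/4) ]
    u = n ** 0.75
    return sum(binom_pmf(n, k, p) for k in range(n + 1)
               if abs(k - n * p) >= u)

p = 0.3
p50, p100 = big_dev(50, p), big_dev(100, p)
# Already tiny, and shrinking as n grows
```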

Rules for Validity of the Approximation

Rules for deciding when to use this approximation (according to Feller, Volume I): use the normal approximation to the binomial when

\[ np(1-p) > 18. \]

Application

Example. Tony Gwynn’s batting average in 1995 was 197 hits out of 535 at-bats (about .368). His lifetime average was .338. The question is whether Tony Gwynn was a “lucky” .300 hitter in 1995. Assume he was, and that hits are independent random variables. We want to know

\[ \mathbb{P}_n\left[ S_n \ge 197 \right] = \mathbb{P}_n\left[ \frac{S_{535} - 535(.3)}{\sqrt{535(.3)(.7)}} \ge \frac{197 - 160.5}{\sqrt{112.35}} \right] \approx \Phi_c(3.44) \approx .0003. \]

That is, the probability of this many hits occurring “by chance” if Gwynn actually was a .300 hitter is small, about 0.03%, so at least under the stringent assumptions of the approximation, Gwynn actually improved in 1995.

Another question is whether his actual “ability” was .338. Here p = .338 and \(\mathbb{P}_n\left[ S_n \ge 197 \right] \approx \Phi_c(1.48) \approx .0694\). This is at least a believably large probability, so we admit that it may be possible.
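The two tail computations are easy to reproduce using \(\Phi_c(y) = \frac{1}{2}\operatorname{erfc}(y/\sqrt{2})\) (the function and variable names below are mine):

```python
import math

def phi_c(y):
    # Upper tail of the standard Gaussian: Phi_c(y) = (1/2) erfc(y / sqrt(2))
    return 0.5 * math.erfc(y / math.sqrt(2))

n, hits = 535, 197

# Scenario 1: was Gwynn a "lucky" .300 hitter?
p1 = 0.300
z1 = (hits - n * p1) / math.sqrt(n * p1 * (1 - p1))   # about 3.44
tail1 = phi_c(z1)                                     # about 0.0003

# Scenario 2: was his true ability his lifetime .338?
p2 = 0.338
z2 = (hits - n * p2) / math.sqrt(n * p2 * (1 - p2))   # about 1.48
tail2 = phi_c(z2)                                     # about 0.07
```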

Sources

This section is adapted from: Heads or Tails, by Emmanuel Lesigne, Student Mathematical Library Volume 28, American Mathematical Society, Providence, 2005, Chapter 7, [3]. See also the proofs in [1] and [2].

_______________________________________________________________________________________________

Problems to Work for Understanding

  1. For equation (2), show that for \(k \in I_n\) we have
     \[ \frac{n}{(np + a\sqrt{n})(n(1-p) + a\sqrt{n})} \le \frac{n}{k(n-k)} \le \frac{n}{(np - a\sqrt{n})(n(1-p) - a\sqrt{n})}. \]

  2. Show that \(\ln(1+t) = t - \frac{t^2}{2} + O(t^3)\).
  3. Show that \(\frac{k - np}{k} = O_u(n^{-1/2})\) for \(k \in I_n\).
  4. Show that \(\frac{k - np}{n - k} = O_u(n^{-1/2})\) for \(k \in I_n\).
  5. Show that for \(k \in I_n\),
     \[ -\frac{a\sqrt{n}}{np - a\sqrt{n}} \le \frac{k - np}{k} \le \frac{a\sqrt{n}}{np + a\sqrt{n}}. \]

  6. Write out in detail Case 3 in the completion of the proof of the de Moivre-Laplace Central Limit Theorem.

__________________________________________________________________________

Books

Reading Suggestion:

References

[1]   Leo Breiman. Probability. SIAM, 1992.

[2]   William Feller. An Introduction to Probability Theory and Its Applications, Volume I. John Wiley and Sons, third edition, 1973.

[3]   Emmanuel Lesigne. Heads or Tails: An Introduction to Limit Theorems in Probability, volume 28 of Student Mathematical Library. American Mathematical Society, 2005.

[4]   George Pólya. Über den zentralen Grenzwertsatz der Wahrscheinlichkeitsrechnung und das Momentenproblem. Mathematische Zeitschrift, 8:171–181, 1920.

__________________________________________________________________________

Links

Outside Readings and Links:

__________________________________________________________________________

I check all the information on each page for correctness and typographical errors. Nevertheless, some errors may occur and I would be grateful if you would alert me to such errors. I make every reasonable effort to present current and accurate information for public use, however I do not guarantee the accuracy or timeliness of information on this website. Your use of the information from this website is strictly voluntary and at your risk.

I have checked the links to external sites for usefulness. Links to external websites are provided as a convenience. I do not endorse, control, monitor, or guarantee the information contained in any external website. I don’t guarantee that the links are active at all times. Use the links here with the same caution as you would all information on the Internet. This website reflects the thoughts, interests and opinions of its author. They do not explicitly represent official positions or policies of my employer.

Information on this website is subject to change without notice.

Steve Dunbar’s Home Page, http://www.math.unl.edu/~sdunbar1

Email to Steve Dunbar, sdunbar1 at unl dot edu

Last modified: Processed from LATEX source on December 9, 2011