Steven R. Dunbar
Department of Mathematics
203 Avery Hall
Lincoln, NE 68588-0130
http://www.math.unl.edu
Voice: 402-472-3731
Fax: 402-472-8466

Topics in
Probability Theory and Stochastic Processes
Steven R. Dunbar

__________________________________________________________________________

An Analytic Model for Coin Tossing

_______________________________________________________________________

Note: These pages are prepared with MathJax. MathJax is an open source JavaScript display engine for mathematics that works in all browsers. See http://mathjax.org for details on supported browsers, accessibility, copy-and-paste, and other features.

_______________________________________________________________________________________________

### Rating

Mathematicians Only: prolonged scenes of intense rigor.

_______________________________________________________________________________________________

### Section Starter Question

What are the axioms for a probability space? What are the assumptions about coin-ﬂips or successive Bernoulli trials that make the process a probability space?

_______________________________________________________________________________________________

### Key Concepts

1. The sequence of products $cos\left(x∕2\right),\quad cos\left(x∕2\right)\cdot cos\left(x∕4\right),\quad cos\left(x∕2\right)\cdot cos\left(x∕4\right)\cdot cos\left(x∕8\right),\quad \dots$

converges uniformly on bounded intervals to

$\prod _{k=1}^{\infty }cos\left(\frac{x}{{2}^{k}}\right)=\frac{sinx}{x}$

2. An unusual formula involving $\pi$, due to Vieta, is
$\frac{2}{\pi }=\frac{\sqrt{2}}{2}\cdot \frac{\sqrt{2+\sqrt{2}}}{2}\cdot \frac{\sqrt{2+\sqrt{2+\sqrt{2}}}}{2}\cdots$

3. Given $t$ with $0\le t<1$ and its unique binary expansion,
$t=\frac{{𝜖}_{1}\left(t\right)}{2}+\frac{{𝜖}_{2}\left(t\right)}{{2}^{2}}+\frac{{𝜖}_{3}\left(t\right)}{{2}^{3}}+\dots$

the Rademacher functions are ${r}_{k}\left(t\right)=1-2{𝜖}_{k}\left(t\right)$.
4. The integral of the product of the Rademacher exponentials equals the product of the integrals:

$\underset{0}{\overset{1}{\int }}\prod _{k=1}^{\infty }exp\left(\mathrm{i}x\frac{{r}_{k}\left(t\right)}{{2}^{k}}\right)\phantom{\rule{0.3em}{0ex}}dt=\prod _{k=1}^{\infty }\underset{0}{\overset{1}{\int }}exp\left(\mathrm{i}x\frac{{r}_{k}\left(t\right)}{{2}^{k}}\right)\phantom{\rule{0.3em}{0ex}}dt$

5. The correspondence of $H$ with $+1$, $T$ with $-1$, the $k$th toss with ${r}_{k}\left(t\right)$, an event with the set of $t$’s in $\left(0,1\right)$, and the probability of an event with the measure of the corresponding set of $t$’s gives an analytic model for successive Bernoulli trials.
6. Almost every number $t$ has asymptotically the same number of $0$’s and $1$’s in its binary expansion. That is, we say that the binary expansion is normal.

__________________________________________________________________________

### Vocabulary

1. The function $sinx∕x$ is often called the sinc function $sinc\left(x\right)$; also called the “sampling function”, it arises frequently in signal processing and the theory of Fourier transforms.
2. Every $t$ with $0\le t<1$ has a unique binary expansion,
$t=\frac{{𝜖}_{1}\left(t\right)}{2}+\frac{{𝜖}_{2}\left(t\right)}{{2}^{2}}+\frac{{𝜖}_{3}\left(t\right)}{{2}^{3}}+\dots$

3. The Rademacher functions are ${r}_{k}\left(t\right)=1-2{𝜖}_{k}\left(t\right)$.
4. If $t$ has asymptotically the same number of $0$s and $1$s in its binary expansion, the binary expansion is normal.

__________________________________________________________________________

### Mathematical Ideas

#### Vieta’s Formula from Trigonometry

The following is due both to Vieta in 1593 and also to Euler.

Theorem 1. The sequence of products

$cos\left(x∕2\right),\quad cos\left(x∕2\right)\cdot cos\left(x∕4\right),\quad cos\left(x∕2\right)\cdot cos\left(x∕4\right)\cdot cos\left(x∕8\right),\quad \dots$

converges uniformly on bounded intervals to

$\prod _{k=1}^{\infty }cos\left(\frac{x}{{2}^{k}}\right)=\frac{sinx}{x}.$

Remark. The function $sinx∕x$ is often called the sinc function. The cosine product deﬁnition shows that $sinc\left(0\right)=1$ while $sinx∕x$ is undeﬁned at $x=0$. However, $\underset{x\to 0}{lim}sin\left(x\right)∕x=1$ so $sinc\left(x\right)$ is continuous at $0$. In the sequel the implicit assumption is that $sinx∕x$ is $1$ at $x=0$ even though technically the expression is undeﬁned. The sinc function $sinc\left(x\right)$, also called the “sampling function”, is a function that arises frequently in signal processing and the theory of Fourier transforms. The full name of the function is “sine cardinal”, but it is commonly referred to by its abbreviation, “sinc”.

Remark. The following proof is remarkably elementary for such an unusual identity. The proof combines repeated application of the half-angle identity for the cosine with an elementary limit of the sine function. The proof of uniform convergence uses the Cauchy criterion. The veriﬁcation of the criterion uses an elementary bound on the cosine function from the series expansion.

Proof.

1. Start with repeated application of the half-angle formula from trigonometry: $\begin{array}{llll}\hfill sin\left(x\right)& =2sin\left(\frac{x}{2}\right)cos\left(\frac{x}{2}\right)\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & ={2}^{2}sin\left(\frac{x}{4}\right)cos\left(\frac{x}{4}\right)cos\left(\frac{x}{2}\right)\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & ={2}^{3}sin\left(\frac{x}{8}\right)cos\left(\frac{x}{8}\right)cos\left(\frac{x}{4}\right)cos\left(\frac{x}{2}\right)\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & ⋮\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & ={2}^{n}sin\left(\frac{x}{{2}^{n}}\right)\prod _{k=1}^{n}cos\left(\frac{x}{{2}^{k}}\right).\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\end{array}$
2. A standard limit from elementary calculus shows that for $x\ne 0$
$1=\underset{n\to \infty }{lim}\frac{sin\left(x∕{2}^{n}\right)}{x∕{2}^{n}}=\frac{1}{x}\underset{n\to \infty }{lim}{2}^{n}sin\left(\frac{x}{{2}^{n}}\right),$

so

$x=\underset{n\to \infty }{lim}{2}^{n}sin\left(\frac{x}{{2}^{n}}\right).$

3. Alternatively, use the series expansion for $sinx$: $\begin{array}{ll}{2}^{n}sin\frac{x}{{2}^{n}}&={2}^{n}\sum _{k=1}^{\infty }{\left(-1\right)}^{k-1}\frac{{x}^{2k-1}∕{2}^{n\left(2k-1\right)}}{\left(2k-1\right)!}\\ &=x+\sum _{k=2}^{\infty }{\left(-1\right)}^{k-1}\frac{{x}^{2k-1}∕{2}^{n\left(2k-2\right)}}{\left(2k-1\right)!}\end{array}$

and then let $n\to \infty$. Details are left to the reader.

4. Putting the limits in step 1 and step 2 together
$\frac{sinx}{x}=\prod _{k=1}^{\infty }cos\left(\frac{x}{{2}^{k}}\right).$

5. For the uniform convergence, consider ﬁrst the bounded interval $\left[-\pi ,\pi \right]$. Then let ${f}_{n}\left(x\right)={\prod }_{k=1}^{n}cos\left(\frac{x}{{2}^{k}}\right)$ and note that ${f}_{n}\left(x\right)\ge 0$ on $\left[-\pi ,\pi \right]$ with ${f}_{n}\left(-\pi \right)={f}_{n}\left(\pi \right)=0$. See Figure 1.
6. Each ${f}_{n}\left(x\right)$ is continuous, the sequence converges to the continuous function $sin\left(x\right)∕x$ on the closed bounded (compact) interval $\left[-\pi ,\pi \right]$, and ${f}_{n}\left(x\right)\ge {f}_{n+1}\left(x\right)$ on $\left[-\pi ,\pi \right]$ since $cos\left(x∕{2}^{n+1}\right)\le 1$. By Dini’s Theorem, the monotone convergence of continuous functions to a continuous limit on a compact interval is uniform.
7. Note that ${f}_{n}\left(-k\pi \right)={f}_{n}\left(-\left(k-1\right)\pi \right)={f}_{n}\left(\left(k-1\right)\pi \right)={f}_{n}\left(k\pi \right)=0$ for $n\ge k$. Furthermore ${\left(-1\right)}^{k}{f}_{n}\left(x\right)\ge 0$ on $\left[k\pi ,\left(k+1\right)\pi \right]$ for $n\ge k$. See Figure 1 for an example with $k=1$ and $n=2$.
8. Then the previous step can be repeated on intervals of the form $\left[k\pi ,\left(k+1\right)\pi \right]$ or $\left[-\left(k+1\right)\pi ,-k\pi \right]$ to establish uniform convergence on those intervals. By symmetry, the threshold $K\left(k,𝜖\right)$ for uniform bound $𝜖$ on $\left[k\pi ,\left(k+1\right)\pi \right]$ and $\left[-\left(k+1\right)\pi ,-k\pi \right]$ will be the same.
9. For any bounded interval with $|x|\le M$, choose $k$ so large that $M\le k\pi$ and use the previous steps to find a uniform bound for uniform convergence on the finite collection of intervals $\left[-\left(j+1\right)\pi ,-j\pi \right]\cup \left[j\pi ,\left(j+1\right)\pi \right]$ for $j=1,\dots ,k$ together with $\left[-\pi ,\pi \right]$.

Figure 1: Convergence of the products to $sinx∕x$. The red curve is $cos\left(x∕2\right)$ and the blue curve is $cos\left(x∕2\right)\cdot cos\left(x∕4\right)$.
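The convergence in Theorem 1 is easy to observe numerically. The following Python sketch (the names `partial_product` and `sinc` are ours, not from the text) measures the sup-norm error of the partial products against $sinx∕x$ on a grid in $\left[-\pi ,\pi \right]$:

```python
import math

def partial_product(x, n):
    """f_n(x) = prod_{k=1}^n cos(x / 2^k), the n-th partial product."""
    p = 1.0
    for k in range(1, n + 1):
        p *= math.cos(x / 2 ** k)
    return p

def sinc(x):
    """sin(x)/x with the continuous extension sinc(0) = 1."""
    return 1.0 if x == 0 else math.sin(x) / x

# Sup-norm error of f_n against sin(x)/x, sampled on a grid over [-pi, pi]
grid = [-math.pi + 2 * math.pi * j / 400 for j in range(401)]
errors = {n: max(abs(partial_product(x, n) - sinc(x)) for x in grid)
          for n in (2, 5, 10, 20)}
print(errors)  # the errors shrink as n grows
```

Since ${f}_{n}\ge {f}_{n+1}\ge sinx∕x\ge 0$ on $\left[-\pi ,\pi \right]$, the sampled errors decrease monotonically, consistent with the uniform convergence in the proof.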

A special case is the basis for an unusual formula involving $\pi$ due to Vieta in a book published in 1593. First, a simple trigonometric lemma.

Lemma 2. For $n\ge 1$,

$cos\left(\frac{\pi }{{2}^{n+1}}\right)=\frac{1}{2}\underset{n\text{ radicals}}{\underbrace{\sqrt{2+\sqrt{2+\cdots +\sqrt{2}}}}}.$

Proof.

1. The proof is by mathematical induction.
2. The base case for $n=1$ is $cos\left(\frac{\pi }{4}\right)=\frac{\sqrt{2}}{2}$, verifying the base case.
3. Given that the equality holds for $n=k$, let

$\alpha =\underset{k\text{ radicals}}{\underbrace{\sqrt{2+\sqrt{2+\cdots +\sqrt{2}}}}}$

so that the assumption is that

$cos\left(\frac{\pi }{{2}^{k+1}}\right)=\frac{\alpha }{2}.$

4. Then for $n=k+1$, the half-angle formula gives

$cos\left(\frac{\pi }{{2}^{k+2}}\right)=\sqrt{\frac{1+cos\left(\pi ∕{2}^{k+1}\right)}{2}}=\sqrt{\frac{1+\alpha ∕2}{2}}=\frac{\sqrt{2+\alpha }}{2},$

and $\sqrt{2+\alpha }$ has $k+1$ nested radicals, establishing the induction.

Corollary 1.

$\frac{2}{\pi }=\frac{\sqrt{2}}{2}\cdot \frac{\sqrt{2+\sqrt{2}}}{2}\cdot \frac{\sqrt{2+\sqrt{2+\sqrt{2}}}}{2}\cdots$

Proof. Set $x=\pi ∕2$ and apply the lemma to the inﬁnite product formula for $sinx∕x$ from the theorem. □

A numerical evaluation of successive ﬁnite products is in Table 1, illustrating the convergence.

Table 1: Numerical example of Vieta’s formula, comparing the partial products to $\frac{2}{\pi }\approx 0.6366198$.

 $n$      1            2            3            4            5
 product  $0.7071068$  $0.6532815$  $0.6407289$  $0.6376436$  $0.6368755$
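The entries of Table 1 can be reproduced with a few lines of Python (the function name is ours); each step appends one more nested radical and multiplies the new factor in:

```python
import math

def vieta_partial(n):
    """Partial product of the first n factors in Vieta's formula:
    sqrt(2)/2 * sqrt(2+sqrt(2))/2 * ... (n nested-radical factors)."""
    radical = 0.0
    prod = 1.0
    for _ in range(n):
        radical = math.sqrt(2.0 + radical)  # sqrt(2), sqrt(2+sqrt(2)), ...
        prod *= radical / 2.0
    return prod

for n in range(1, 6):
    print(n, round(vieta_partial(n), 7))
print("2/pi =", round(2.0 / math.pi, 7))
```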

#### Binary Expansions and Rademacher Functions

Every $t$ with $0\le t<1$ has a unique binary expansion,

 $t=\frac{{𝜖}_{1}\left(t\right)}{2}+\frac{{𝜖}_{2}\left(t\right)}{{2}^{2}}+\frac{{𝜖}_{3}\left(t\right)}{{2}^{3}}+\dots$ (1)

using the convention that terminating expansions have all digits equal to $0$ from a certain point. For example, write

$\frac{3}{4}=\frac{1}{2}+\frac{1}{{2}^{2}}+\frac{0}{{2}^{3}}+\frac{0}{{2}^{4}}+\dots$

rather than

$\frac{3}{4}=\frac{1}{2}+\frac{0}{{2}^{2}}+\frac{1}{{2}^{3}}+\frac{1}{{2}^{4}}+\dots .$

With the convention about terminating expansions, the graphs of ${𝜖}_{1}\left(t\right)$, ${𝜖}_{2}\left(t\right)$, ${𝜖}_{3}\left(t\right)$, … are as in Figure 2.

Figure 2: The bit functions for binary expansions in $\left[0,1\right)$.

It is more convenient to use the Rademacher functions deﬁned by ${r}_{k}\left(t\right)=1-2{𝜖}_{k}\left(t\right)$, with graphs as in Figure 2. Note the similarity to the random variables ${X}_{k}=0,1$ and ${Y}_{k}=1-2{X}_{k}=±1$. In terms of the Rademacher functions, the binary expansion (1) becomes

$1-2t=\sum _{k=1}^{\infty }\frac{{r}_{k}\left(t\right)}{{2}^{k}}.$
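The digit functions and Rademacher functions are straightforward to compute, and the expansion of $1-2t$ can be checked numerically. A Python sketch (the function names are ours), using the terminating-expansion convention:

```python
def bits(t, n):
    """First n binary digits eps_k(t) of t in [0,1), terminating convention."""
    eps = []
    for _ in range(n):
        t *= 2
        b = int(t)        # the next binary digit, 0 or 1
        eps.append(b)
        t -= b
    return eps

def rademacher_seq(t, n):
    """r_k(t) = 1 - 2*eps_k(t) for k = 1, ..., n."""
    return [1 - 2 * b for b in bits(t, n)]

# The expansion 1 - 2t = sum_k r_k(t)/2^k, checked with 50 terms
t = 0.3
r = rademacher_seq(t, 50)
partial = sum(rk / 2 ** k for k, rk in enumerate(r, start=1))
print(partial, 1 - 2 * t)
```

For $t=3∕4$ the digits come out $1,1,0,0,\dots$, matching the terminating convention in the text.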

Remark. Note that an alternative description of the Rademacher functions is

${r}_{k}\left(t\right)=sgn\left(sin\left(2\pi {2}^{k-1}t\right)\right).$

In this sense, the Rademacher functions are a discretized version of the sine functions. The Rademacher function expansion follows directly from binary expansion and not from an orthogonal basis.

Theorem 3.

$\underset{0}{\overset{1}{\int }}\prod _{k=1}^{\infty }exp\left(\mathrm{i}x\frac{{r}_{k}\left(t\right)}{{2}^{k}}\right)\phantom{\rule{0.3em}{0ex}}dt=\prod _{k=1}^{\infty }\underset{0}{\overset{1}{\int }}exp\left(\mathrm{i}x\frac{{r}_{k}\left(t\right)}{{2}^{k}}\right)\phantom{\rule{0.3em}{0ex}}dt$

Remark. This is remarkable and unusual, since it says an integral of a product is a product of integrals. The proof combines the expansion of $1-2t$ in Rademacher functions, elementary integration, the expression of $sinx$ and $cosx$ in terms of complex exponentials, and Vieta’s formula.

Remark. On the other hand, from an advanced probability point of view, it may not be so remarkable after all. Consider ${r}_{k}\left(t\right)$ as a random variable over the probability space $\left[0,1\right)$. Although the left side is a product of exponentials, it could be written as the exponential of a sum $\sum _{k=1}^{\infty }{r}_{k}\left(t\right)$ of random variables. The left side is then the characteristic function (or re-scaled Fourier transform) of the sum of random variables. By a well-known correspondence, the characteristic function of a sum is equal to the product of individual characteristic functions. Characteristic functions transform questions about sums of random variables and convergence of random variables into analytic questions of products and pointwise convergence of functions. In this way, this theorem points the way to the analytic model of coin-ﬂipping below.

Proof.

1. Let $u=1-2t$, so $\phantom{\rule{0.3em}{0ex}}du=-2\phantom{\rule{0.3em}{0ex}}dt$ and $\begin{array}{ll}\underset{0}{\overset{1}{\int }}{\mathrm{e}}^{\mathrm{i}x\left(1-2t\right)}\phantom{\rule{0.3em}{0ex}}dt&=\underset{-1}{\overset{1}{\int }}{\mathrm{e}}^{\mathrm{i}xu}\frac{\phantom{\rule{0.3em}{0ex}}du}{2}\\ &=\frac{1}{2}\frac{{\mathrm{e}}^{\mathrm{i}x}-{\mathrm{e}}^{-\mathrm{i}x}}{\mathrm{i}x}\\ &=\frac{sinx}{x}.\end{array}$
2. On the other side $\begin{array}{llll}\hfill \underset{0}{\overset{1}{\int }}exp\left(\mathrm{i}x\frac{{r}_{k}\left(t\right)}{{2}^{k}}\right)\phantom{\rule{0.3em}{0ex}}dt& ={2}^{k-1}\cdot \frac{{\mathrm{e}}^{\mathrm{i}x∕{2}^{k}}}{{2}^{k}}+{2}^{k-1}\cdot \frac{{\mathrm{e}}^{-\mathrm{i}x∕{2}^{k}}}{{2}^{k}}\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & =cos\left(\frac{x}{{2}^{k}}\right).\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\end{array}$
3. Then Vieta’s formula
$\frac{sinx}{x}=\prod _{k=1}^{\infty }cos\left(\frac{x}{{2}^{k}}\right).$

becomes

$\underset{0}{\overset{1}{\int }}{\mathrm{e}}^{\mathrm{i}x\left(1-2t\right)}\phantom{\rule{0.3em}{0ex}}dt=\prod _{k=1}^{\infty }\underset{0}{\overset{1}{\int }}exp\left(\mathrm{i}x\frac{{r}_{k}\left(t\right)}{{2}^{k}}\right)\phantom{\rule{0.3em}{0ex}}dt$

4. Then substituting
$1-2t=\sum _{k=1}^{\infty }\frac{{r}_{k}\left(t\right)}{{2}^{k}}$

and expressing the sum in the exponent as a product of exponentials

$\underset{0}{\overset{1}{\int }}\prod _{k=1}^{\infty }exp\left(\mathrm{i}x\frac{{r}_{k}\left(t\right)}{{2}^{k}}\right)\phantom{\rule{0.3em}{0ex}}dt=\prod _{k=1}^{\infty }\underset{0}{\overset{1}{\int }}exp\left(\mathrm{i}x\frac{{r}_{k}\left(t\right)}{{2}^{k}}\right)\phantom{\rule{0.3em}{0ex}}dt.$
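Both sides of Theorem 3 can be approximated numerically: the left side by the midpoint rule applied to $\underset{0}{\overset{1}{\int }}{\mathrm{e}}^{\mathrm{i}x\left(1-2t\right)}\phantom{\rule{0.3em}{0ex}}dt$, the right side by a finite cosine product. A Python sketch (the names are ours; the truncation levels are arbitrary):

```python
import cmath
import math

def lhs(x, m=20000):
    """Midpoint-rule estimate of the integral of exp(i x (1-2t)) over [0,1]."""
    return sum(cmath.exp(1j * x * (1 - 2 * (j + 0.5) / m))
               for j in range(m)) / m

def rhs(x, n=40):
    """Product of the one-factor integrals: prod_{k=1}^n cos(x / 2^k)."""
    p = 1.0
    for k in range(1, n + 1):
        p *= math.cos(x / 2 ** k)
    return p

x = 2.7
print(lhs(x).real, rhs(x), math.sin(x) / x)  # all three agree closely
```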

#### Another proof of Vieta’s formula

Consider the function $\sum _{k=1}^{n}{c}_{k}{r}_{k}\left(t\right)$. It is a step function that is constant on the intervals $\left(\frac{j}{{2}^{n}},\frac{j+1}{{2}^{n}}\right)$ for $j=0,1,2,\dots ,{2}^{n}-1$. The values of the function are $\pm {c}_{1}\pm {c}_{2}\pm {c}_{3}\pm \cdots \pm {c}_{n}$. Every sequence of length $n$ of $+1$s and $-1$s corresponds to exactly one interval $\left(\frac{j}{{2}^{n}},\frac{j+1}{{2}^{n}}\right)$: there are ${2}^{n}$ such intervals, and the $m$th sign in the sequence is the constant value of ${r}_{m}\left(t\right)$ on that interval, recording which half of the containing dyadic interval of length ${2}^{-\left(m-1\right)}$ the interval lies in. Then

 $\underset{0}{\overset{1}{\int }}exp\left(\mathrm{i}\sum _{k=1}^{n}{c}_{k}{r}_{k}\left(t\right)\right)\phantom{\rule{0.3em}{0ex}}dt=\frac{1}{{2}^{n}}\sum _{{2}^{n}}exp\left(\mathrm{i}\sum _{1}^{n}±{c}_{k}\right),$ (2)

and the lower limit on the summation indicates that the summation is over each of the ${2}^{n}$ possible sequences of $\pm 1$. Writing the exponential of a sum as a product of exponentials,

 $\frac{1}{{2}^{n}}\sum exp\left(\mathrm{i}\sum _{k=1}^{n}±{c}_{k}\right)=\prod _{k=1}^{n}\left(\frac{{\mathrm{e}}^{\mathrm{i}{c}_{k}}+{\mathrm{e}}^{-\mathrm{i}{c}_{k}}}{2}\right)=\prod _{k=1}^{n}cos\left({c}_{k}\right).$ (3)

Putting together (2) and (3)

 $\underset{0}{\overset{1}{\int }}exp\left(\mathrm{i}\sum _{k=1}^{n}{c}_{k}{r}_{k}\left(t\right)\right)\phantom{\rule{0.3em}{0ex}}dt=\prod _{k=1}^{n}cos\left({c}_{k}\right)=\prod _{k=1}^{n}\underset{0}{\overset{1}{\int }}exp\left(\mathrm{i}{c}_{k}{r}_{k}\left(t\right)\right)\phantom{\rule{0.3em}{0ex}}dt.$ (4)

The proof of the last equality is the same as in step 2 of the proof of Theorem 3.
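Equation (4) can be checked numerically for arbitrary constants ${c}_{k}$ by brute-force enumeration of the sign sequences. The Python sketch below (the function names and the sample values of ${c}_{k}$ are ours) compares the average over sign patterns, as in equation (2), with the cosine product of equation (3):

```python
import cmath
import math
from itertools import product as sign_patterns

def average_over_signs(c):
    """(1/2^n) * sum over all 2^n choices of ±1 of exp(i sum ±c_k),
    the right side of equation (2)."""
    n = len(c)
    total = sum(cmath.exp(1j * sum(d * ck for d, ck in zip(delta, c)))
                for delta in sign_patterns((1, -1), repeat=n))
    return total / 2 ** n

def cos_product(c):
    """prod cos(c_k), the right side of equation (3)."""
    p = 1.0
    for ck in c:
        p *= math.cos(ck)
    return p

c = [0.7, 1.3, 0.2, 2.1]
print(average_over_signs(c), cos_product(c))
```

The imaginary parts cancel in $\pm$ pairs, leaving the real cosine product.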

Now set ${c}_{k}=x∕{2}^{k}$ to get

$\underset{0}{\overset{1}{\int }}exp\left(\sum _{k=1}^{n}\mathrm{i}x\frac{{r}_{k}\left(t\right)}{{2}^{k}}\right)\phantom{\rule{0.3em}{0ex}}dt=\prod _{k=1}^{n}cos\left(\frac{x}{{2}^{k}}\right).$

The series

$\sum _{k=1}^{\infty }\frac{{r}_{k}\left(t\right)}{{2}^{k}}$

converges uniformly on $\left(0,1\right)$ to $1-2t$ by the Weierstrass M-criterion. Therefore we can interchange limit and integral to see that

$\begin{array}{c}\frac{sin\left(x\right)}{x}=\underset{0}{\overset{1}{\int }}{\mathrm{e}}^{\mathrm{i}x\left(1-2t\right)}\phantom{\rule{0.3em}{0ex}}dt=\underset{n\to \infty }{lim}\underset{0}{\overset{1}{\int }}exp\left(\mathrm{i}x\sum _{k=1}^{n}\frac{{r}_{k}\left(t\right)}{{2}^{k}}\right)\phantom{\rule{0.3em}{0ex}}dt\\ =\underset{n\to \infty }{lim}\prod _{k=1}^{n}cos\left(\frac{x}{{2}^{k}}\right)=\prod _{k=1}^{\infty }cos\left(\frac{x}{{2}^{k}}\right)\end{array}$

This is a different proof of Vieta’s formula, connecting the formula to binary representations of real numbers in $\left[0,1\right)$.

Let $\mu \left[E\right]$ be the length, or more formally, the measure of the subset $E$ of $\left[0,1\right)$. If ${\delta }_{1},{\delta }_{2},\dots ,{\delta }_{n}$ is a sequence of $+1$’s and $-1$’s then

$\mu \left[{r}_{1}={\delta }_{1},{r}_{2}={\delta }_{2},\dots ,{r}_{n}={\delta }_{n}\right]=\mu \left[{r}_{1}={\delta }_{1}\right]\cdot \mu \left[{r}_{2}={\delta }_{2}\right]\cdots \mu \left[{r}_{n}={\delta }_{n}\right].$

Example. Let ${\delta }_{1}=+1$, ${\delta }_{2}=-1$, ${\delta }_{3}=-1$. Then $\left\{t\phantom{\rule{0.3em}{0ex}}:\phantom{\rule{0.3em}{0ex}}{r}_{1}\left(t\right)=+1\right\}=\left[0,1∕2\right)$, $\left\{t\phantom{\rule{0.3em}{0ex}}:\phantom{\rule{0.3em}{0ex}}{r}_{2}\left(t\right)=-1\right\}=\left[1∕4,1∕2\right)\cup \left[3∕4,1\right)$, and $\left\{t\phantom{\rule{0.3em}{0ex}}:\phantom{\rule{0.3em}{0ex}}{r}_{3}\left(t\right)=-1\right\}=\left[1∕8,2∕8\right)\cup \left[3∕8,4∕8\right)\cup \left[5∕8,6∕8\right)\cup \left[7∕8,8∕8\right)$. Consider the set $E=\left\{t\phantom{\rule{0.3em}{0ex}}:\phantom{\rule{0.3em}{0ex}}{r}_{1}\left(t\right)=+1,{r}_{2}\left(t\right)=-1,{r}_{3}\left(t\right)=-1\right\}=\left[3∕8,4∕8\right)$. Then

$\frac{1}{8}=\mu \left[{r}_{1}={\delta }_{1},{r}_{2}={\delta }_{2},{r}_{3}={\delta }_{3}\right]=\mu \left[{r}_{1}={\delta }_{1}\right]\cdot \mu \left[{r}_{2}={\delta }_{2}\right]\cdot \mu \left[{r}_{3}={\delta }_{3}\right]=\frac{1}{2}\cdot \frac{1}{2}\cdot \frac{1}{2}.$
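A small Python sketch (the function names are ours) verifies this example by enumerating the dyadic intervals of length $1∕8$ on which ${r}_{1}$, ${r}_{2}$, ${r}_{3}$ are constant; it is an illustration, not a proof:

```python
def rademacher(t, k):
    """r_k(t) computed from the k-th binary digit of t."""
    return 1 - 2 * (int(t * 2 ** k) % 2)

def measure(conditions, n):
    """Measure of {t : r_k(t) = delta_k for (k, delta_k) in conditions},
    summed over the dyadic intervals of length 2^-n, on each of which
    every r_k with k <= n is constant."""
    return sum(1 / 2 ** n for j in range(2 ** n)
               if all(rademacher((j + 0.5) / 2 ** n, k) == d
                      for k, d in conditions))

joint = measure([(1, 1), (2, -1), (3, -1)], 3)
product = measure([(1, 1)], 3) * measure([(2, -1)], 3) * measure([(3, -1)], 3)
print(joint, product)  # both are 1/8
```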

This gives another way to rewrite the proof of (4) that is basically the same as before:

$\begin{array}{l}\underset{0}{\overset{1}{\int }}exp\left(\mathrm{i}\sum _{k=1}^{n}{c}_{k}{r}_{k}\left(t\right)\right)\phantom{\rule{0.3em}{0ex}}dt\\ \quad =\sum _{{\delta }_{1},\dots ,{\delta }_{n}}exp\left(\mathrm{i}\sum _{k=1}^{n}{c}_{k}{\delta }_{k}\right)\cdot \mu \left[{r}_{1}={\delta }_{1}\right]\cdot \mu \left[{r}_{2}={\delta }_{2}\right]\cdots \mu \left[{r}_{n}={\delta }_{n}\right]\\ \quad =\sum _{{\delta }_{1},\dots ,{\delta }_{n}}\prod _{k=1}^{n}{\mathrm{e}}^{\mathrm{i}{c}_{k}{\delta }_{k}}\prod _{k=1}^{n}\mu \left[{r}_{k}={\delta }_{k}\right]\\ \quad =\sum _{{\delta }_{1},\dots ,{\delta }_{n}}\prod _{k=1}^{n}{\mathrm{e}}^{\mathrm{i}{c}_{k}{\delta }_{k}}\mu \left[{r}_{k}={\delta }_{k}\right]\\ \quad =\prod _{k=1}^{n}\sum _{{\delta }_{k}}{\mathrm{e}}^{\mathrm{i}{c}_{k}{\delta }_{k}}\mu \left[{r}_{k}={\delta }_{k}\right]\\ \quad =\prod _{k=1}^{n}\underset{0}{\overset{1}{\int }}exp\left(\mathrm{i}{c}_{k}{r}_{k}\left(t\right)\right)\phantom{\rule{0.3em}{0ex}}dt.\end{array}$

#### Analytic Model of Coin Tossing

The following correspondence gives an analytic model of successive Bernoulli trials. That is, consider the following probability scenario. A fair coin is successively tossed $n$ times, with independence of tosses, coming up Heads or Tails on each toss. Use Table 2 to interpret this physical probability experiment analytically.

 Coin tossing              Analytic Model
 Symbol $H$                $+1$
 Symbol $T$                $-1$
 $k$th toss                ${r}_{k}\left(t\right)$
 Event                     set of $t$’s in $\left(0,1\right)$
 Probability of an event   measure of the corresponding set of $t$’s

Table 2: Correspondence of terms in coin tossing and the analytic model.

Example. Consider an analytic model for the simplest probability problem for Bernoulli trials: Find the probability that in $n$ independent tosses of a fair coin, exactly $l$ will be heads. Using the table to translate the problem to analytic terms, the problem becomes: Find the measure of the set of $t$’s such that exactly $l$ of the $n$ numbers ${r}_{1}\left(t\right)$, ${r}_{2}\left(t\right)$, …, ${r}_{n}\left(t\right)$ are equal to $+1$.

To solve this problem start by noticing that having exactly $l$ of $n$ of the ${r}_{k}\left(t\right)$ being $+1$ means that $n-l$ are $-1$, so that

 ${r}_{1}\left(t\right)+{r}_{2}\left(t\right)+\cdots +{r}_{n}\left(t\right)=l-\left(n-l\right)=2l-n.$ (5)

Second, notice that

$\frac{1}{2\pi }\underset{0}{\overset{2\pi }{\int }}{\mathrm{e}}^{\mathrm{i}mx}\phantom{\rule{0.3em}{0ex}}dx=\frac{1}{2\pi }\left(\frac{{\mathrm{e}}^{\mathrm{i}m\cdot 2\pi }}{\mathrm{i}m}-\frac{{\mathrm{e}}^{0}}{\mathrm{i}m}\right)=0$

for $m\ne 0$ and so

$\frac{1}{2\pi }\underset{0}{\overset{2\pi }{\int }}{\mathrm{e}}^{\mathrm{i}mx}\phantom{\rule{0.3em}{0ex}}dx=\begin{cases}1&m=0\\ 0&m\ne 0.\end{cases}$

Therefore

$\varphi \left(t\right)=\frac{1}{2\pi }\underset{0}{\overset{2\pi }{\int }}{\mathrm{e}}^{\mathrm{i}x\left({r}_{1}\left(t\right)+{r}_{2}\left(t\right)+\cdots +{r}_{n}\left(t\right)-\left(2l-n\right)\right)}\phantom{\rule{0.3em}{0ex}}dx$

will equal $1$ on the set of $t$’s satisfying the condition (5), and is equal to $0$ otherwise. (Pay careful attention to the variable of integration.) Therefore

$\begin{array}{llll}\hfill & \mu \left[{r}_{1}\left(t\right)+{r}_{2}\left(t\right)+\cdots +{r}_{n}\left(t\right)=l-\left(n-l\right)=2l-n\right]\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & \phantom{\rule{2em}{0ex}}=\underset{0}{\overset{1}{\int }}\varphi \left(t\right)\phantom{\rule{0.3em}{0ex}}dt\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & \phantom{\rule{2em}{0ex}}=\underset{0}{\overset{1}{\int }}\frac{1}{2\pi }\underset{0}{\overset{2\pi }{\int }}{\mathrm{e}}^{\mathrm{i}x\left({r}_{1}\left(t\right)+{r}_{2}\left(t\right)+\cdots +{r}_{n}\left(t\right)-\left(2l-n\right)\right)}\phantom{\rule{0.3em}{0ex}}dx\phantom{\rule{0.3em}{0ex}}dt\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & \phantom{\rule{2em}{0ex}}=\frac{1}{2\pi }\underset{0}{\overset{2\pi }{\int }}{\mathrm{e}}^{-\mathrm{i}\left(2l-n\right)x}\left(\underset{0}{\overset{1}{\int }}{\mathrm{e}}^{\mathrm{i}x\left({r}_{1}\left(t\right)+{r}_{2}\left(t\right)+\cdots +{r}_{n}\left(t\right)\right)}\phantom{\rule{0.3em}{0ex}}dt\right)\phantom{\rule{0.3em}{0ex}}dx.\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\end{array}$

The interchange of order of integration is usually justiﬁed by Fubini’s Theorem, but that is not actually necessary here, since ${r}_{1}\left(t\right)+{r}_{2}\left(t\right)+\cdots +{r}_{n}\left(t\right)$ is a step function. Now use (3) on the inner integral with ${c}_{1}={c}_{2}=\cdots ={c}_{n}=x$ to obtain

$\mu \left[{r}_{1}\left(t\right)+{r}_{2}\left(t\right)+\cdots +{r}_{n}\left(t\right)=2l-n\right]=\frac{1}{2\pi }\underset{0}{\overset{2\pi }{\int }}{\mathrm{e}}^{-\mathrm{i}\left(2l-n\right)x}{cos}^{n}\left(x\right)\phantom{\rule{0.3em}{0ex}}dx.$

Evaluating this integral is left to the problems; the result of the evaluation gives

$\mu \left[{r}_{1}\left(t\right)+{r}_{2}\left(t\right)+\cdots +{r}_{n}\left(t\right)=2l-n\right]=\frac{1}{{2}^{n}}\left(\genfrac{}{}{0.0pt}{}{n}{l}\right).$
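The final formula can be confirmed numerically. In the Python sketch below (the function name is ours), the equally spaced midpoint rule with $m$ points on $\left[0,2\pi \right]$ integrates trigonometric polynomials of degree less than $m$ exactly, so the computed value matches $\left(\genfrac{}{}{0.0pt}{}{n}{l}\right)∕{2}^{n}$ to machine precision:

```python
import cmath
import math

def binomial_measure(n, l, m=4096):
    """Evaluate (1/2π) ∫_0^{2π} e^{-i(2l-n)x} cos(x)^n dx by the midpoint
    rule; the integrand is a trigonometric polynomial of degree <= 2n < m,
    so the rule is exact up to rounding."""
    s = sum(cmath.exp(-1j * (2 * l - n) * x) * math.cos(x) ** n
            for x in ((j + 0.5) * 2 * math.pi / m for j in range(m)))
    return (s / m).real

n, l = 10, 4
print(binomial_measure(n, l), math.comb(n, l) / 2 ** n)
```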

#### Discussion of Independence of Events

The natural mathematical modeling assumption for probability is that events that seem unrelated are probabilistically independent. That is, the joint probability of unrelated events is the product of the individual probabilities. The product rule is not a mathematical necessity. Rather it is a modeling rule based on experiment and practical experience, justifying the multiplication of probabilities. Thus, probabilistic independence is an intuitive notion with the sense that the multiplication rule is applicable and useful.

In a landmark 1909 paper, “Les probabilités dénombrables et leurs applications arithmétiques,” E. Borel showed that binary digits, or equivalently the Rademacher functions, are independent in the sense that

$\underset{0}{\overset{1}{\int }}\prod _{k=1}^{\infty }exp\left(\mathrm{i}x\frac{{r}_{k}\left(t\right)}{{2}^{k}}\right)\phantom{\rule{0.3em}{0ex}}dt=\prod _{k=1}^{\infty }\underset{0}{\overset{1}{\int }}exp\left(\mathrm{i}x\frac{{r}_{k}\left(t\right)}{{2}^{k}}\right)\phantom{\rule{0.3em}{0ex}}dt.$

This observation gives well-deﬁned mathematical objects to which the postulates of probability theory apply directly.

#### Weak and Strong Laws

The application of the postulates of probability to Rademacher functions eliminates casting probability in terms of coins and tosses. All of the previous proofs of theorems for coin-tossing have equivalents using Rademacher functions, or equivalently binary digits, with Table 2. The Weak Law of Large Numbers, the direct Borel-Cantelli Lemma and the Strong Law of Large Numbers serve as examples.

Lemma 4 (Orthogonality of Rademacher Functions). If ${k}_{1}<{k}_{2}<\cdots <{k}_{n}$,

$\underset{0}{\overset{1}{\int }}{r}_{{k}_{1}}\left(t\right){r}_{{k}_{2}}\left(t\right)\cdots {r}_{{k}_{n}}\left(t\right)\phantom{\rule{0.3em}{0ex}}dt=0.$

Remark. Consider the example in Figure 4.

Figure 4: Graph of the product of Rademacher functions.

Each of the integrals over the subintervals $\left[0,1∕4\right]$, $\left[1∕4,1∕2\right]$, $\left[1∕2,3∕4\right]$ and $\left[3∕4,1\right]$ is a rescaled version of $\pm\underset{0}{\overset{1}{\int }}{r}_{1}\left(t\right)\phantom{\rule{0.3em}{0ex}}dt=0$, so the entire integral is $0$. The proof expands that idea using induction on the number of factors in the product.

Proof.

1. The proof is by induction on the number of factors $n$ in the product.
2. The base case for $n=1$ is obvious by the symmetry and regularity of ${r}_{k}\left(t\right)$:
$\underset{0}{\overset{1}{\int }}{r}_{{k}_{1}}\left(t\right)\phantom{\rule{0.3em}{0ex}}dt=0.$

3. Suppose that the conclusion is true for $n-1$ factors,
$\underset{0}{\overset{1}{\int }}{r}_{{k}_{2}}\left(t\right){r}_{{k}_{3}}\left(t\right)\cdots {r}_{{k}_{n}}\left(t\right)\phantom{\rule{0.3em}{0ex}}dt=0$

and consider

$\underset{0}{\overset{1}{\int }}{r}_{{k}_{1}}\left(t\right){r}_{{k}_{2}}\left(t\right)\cdots {r}_{{k}_{n}}\left(t\right)\phantom{\rule{0.3em}{0ex}}dt.$

4. Break the integral into ${2}^{{k}_{1}}$ integrals over the subintervals $\left[\ell ∕{2}^{{k}_{1}},\left(\ell +1\right)∕{2}^{{k}_{1}}\right]$, $\ell =0,\dots ,{2}^{{k}_{1}}-1$.
$\underset{0}{\overset{1}{\int }}{r}_{{k}_{1}}\left(t\right){r}_{{k}_{2}}\left(t\right)\cdots {r}_{{k}_{n}}\left(t\right)\phantom{\rule{0.3em}{0ex}}dt=\sum _{\ell =0}^{{2}^{{k}_{1}}-1}{\left(-1\right)}^{\ell }\underset{\ell ∕{2}^{{k}_{1}}}{\overset{\left(\ell +1\right)∕{2}^{{k}_{1}}}{\int }}{r}_{{k}_{2}}\left(t\right){r}_{{k}_{3}}\left(t\right)\cdots {r}_{{k}_{n}}\left(t\right)\phantom{\rule{0.3em}{0ex}}dt.$

5. For each integral, change variables $t=\ell ∕{2}^{{k}_{1}}+{u}_{\ell }∕{2}^{{k}_{1}}$ for ${u}_{\ell }\in \left[0,1\right]$. Then the summation of integrals becomes
$\sum _{\ell =0}^{{2}^{{k}_{1}}-1}\frac{{\left(-1\right)}^{\ell }}{{2}^{{k}_{1}}}\underset{0}{\overset{1}{\int }}{r}_{{k}_{2}}\left(\frac{\ell }{{2}^{{k}_{1}}}+\frac{{u}_{\ell }}{{2}^{{k}_{1}}}\right){r}_{{k}_{3}}\left(\frac{\ell }{{2}^{{k}_{1}}}+\frac{{u}_{\ell }}{{2}^{{k}_{1}}}\right)\cdots {r}_{{k}_{n}}\left(\frac{\ell }{{2}^{{k}_{1}}}+\frac{{u}_{\ell }}{{2}^{{k}_{1}}}\right)\phantom{\rule{0.3em}{0ex}}d{u}_{\ell }.$

6. Using the recursive relation that
${r}_{{k}_{i}}\left(\frac{\ell }{{2}^{{k}_{1}}}+\frac{{u}_{\ell }}{{2}^{{k}_{1}}}\right)={r}_{{k}_{i}-{k}_{1}}\left({u}_{\ell }\right)$

the summation becomes

$\sum _{\ell =0}^{{2}^{{k}_{1}}-1}\frac{{\left(-1\right)}^{\ell }}{{2}^{{k}_{1}}}\underset{0}{\overset{1}{\int }}{r}_{{k}_{2}-{k}_{1}}\left({u}_{\ell }\right){r}_{{k}_{3}-{k}_{1}}\left({u}_{\ell }\right)\cdots {r}_{{k}_{n}-{k}_{1}}\left({u}_{\ell }\right)\phantom{\rule{0.3em}{0ex}}d{u}_{\ell }.$

7. Each summand is $0$ by the induction hypothesis, so the sum is $0$ and the induction step is complete.
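The orthogonality relation is easy to confirm by exact dyadic enumeration, since the integrand is a step function. A Python sketch (the function names are ours):

```python
import math

def rademacher(t, k):
    """r_k(t) from the k-th binary digit of t."""
    return 1 - 2 * (int(t * 2 ** k) % 2)

def integral_of_product(ks):
    """∫_0^1 of the product of r_k(t) over k in ks, computed exactly:
    the integrand is constant on each dyadic interval of length 2^-max(ks)."""
    m = max(ks)
    return sum(math.prod(rademacher((j + 0.5) / 2 ** m, k) for k in ks)
               for j in range(2 ** m)) / 2 ** m

print(integral_of_product([1, 3, 4]))  # distinct indices: integral is 0
print(integral_of_product([2, 2]))     # repeated index: r_2(t)^2 = 1
```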

Theorem 5 (Weak Law of Large Numbers). For every $𝜖>0$,

$\underset{n\to \infty }{lim}\mu \left[|{r}_{1}\left(t\right)+\cdots +{r}_{n}\left(t\right)|>𝜖n\right]=0$

Remark. The proof is essentially the standard one using Chebyshev’s inequality expressed directly in terms of the measure and integral of Rademacher functions.

Proof.

1. On the one hand, $\begin{array}{c}\underset{0}{\overset{1}{\int }}{\left({r}_{1}\left(t\right)+\cdots +{r}_{n}\left(t\right)\right)}^{2}\phantom{\rule{0.3em}{0ex}}dt\\ \ge \underset{|{r}_{1}\left(t\right)+\cdots +{r}_{n}\left(t\right)|>𝜖n}{\int }{\left({r}_{1}\left(t\right)+\cdots +{r}_{n}\left(t\right)\right)}^{2}\phantom{\rule{0.3em}{0ex}}dt\\ >{𝜖}^{2}{n}^{2}\mu \left[|{r}_{1}\left(t\right)+\cdots +{r}_{n}\left(t\right)|>𝜖n\right].\end{array}$

2. On the other hand, using the Lemma on Orthogonality of Rademacher Functions,
$\underset{0}{\overset{1}{\int }}{\left({r}_{1}\left(t\right)+\cdots +{r}_{n}\left(t\right)\right)}^{2}\phantom{\rule{0.3em}{0ex}}dt=n.$

3. Combining the two previous points,
$\mu \left[|{r}_{1}\left(t\right)+\cdots +{r}_{n}\left(t\right)|>𝜖n\right]<\frac{1}{{𝜖}^{2}n}$

so

$\underset{n\to \infty }{lim}\mu \left[|{r}_{1}\left(t\right)+\cdots +{r}_{n}\left(t\right)|>𝜖n\right]=0.$
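The Chebyshev bound in the proof can be compared with the exact measure, which by the correspondence of Table 2 is a binomial tail probability. A Python sketch (the function name is ours):

```python
import math

def deviation_measure(n, eps):
    """Exact measure of {t : |r_1(t)+...+r_n(t)| > eps*n}: group the 2^n
    sign patterns by the number l of +1's, for which the sum is 2l - n."""
    count = sum(math.comb(n, l) for l in range(n + 1)
                if abs(2 * l - n) > eps * n)
    return count / 2 ** n

for n in (10, 100, 1000):
    print(n, deviation_measure(n, 0.1), 1 / (0.1 ** 2 * n))
```

The exact measure decays much faster than the bound $1∕\left({𝜖}^{2}n\right)$, but the bound is all the Weak Law requires.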

Lemma 6. If ${f}_{n}\left(t\right)$ is a sequence of non-negative Lebesgue integrable functions, then convergence of $\sum _{n=1}^{\infty }\underset{0}{\overset{1}{\int }}{f}_{n}\left(t\right)\phantom{\rule{0.3em}{0ex}}dt$ implies

$\underset{n\to \infty }{lim}{f}_{n}\left(t\right)=0$

almost everywhere.

Remark (Borel-Cantelli Direct Half). The proof is a direct translation of the probabilistic proof of the direct Borel-Cantelli Lemma, using an analytic version of Markov’s inequality.

Proof.

1. Let an arbitrary $a>0$ be given.
2. Since ${f}_{n}\left(t\right)\ge 0$,
$\underset{0}{\overset{1}{\int }}{f}_{n}\left(t\right)\phantom{\rule{0.3em}{0ex}}dt=\underset{\left\{t\phantom{\rule{0.3em}{0ex}}:\phantom{\rule{0.3em}{0ex}}{f}_{n}\left(t\right)<a\right\}}{\int }{f}_{n}\left(t\right)\phantom{\rule{0.3em}{0ex}}dt+\underset{\left\{t\phantom{\rule{0.3em}{0ex}}:\phantom{\rule{0.3em}{0ex}}{f}_{n}\left(t\right)\ge a\right\}}{\int }{f}_{n}\left(t\right)\phantom{\rule{0.3em}{0ex}}dt\ge a\phantom{\rule{0.3em}{0ex}}\mu \left[{f}_{n}\left(t\right)\ge a\right].$
3. Therefore $\sum _{n=1}^{\infty }\underset{0}{\overset{1}{\int }}{f}_{n}\left(t\right)\phantom{\rule{0.3em}{0ex}}dt<\infty$ implies $\sum _{n=1}^{\infty }a\mu \left[{f}_{n}\left(t\right)\ge a\right]<\infty$.
4. Since the series $\sum_{n=1}^{\infty}\mu\left[f_n(t)\ge a\right]$ converges, its tails vanish: for any $N$, $\mu\left[f_n(t)\ge a \text{ for some } n\ge N\right]\le\sum_{n=N}^{\infty}\mu\left[f_n(t)\ge a\right]\to 0$ as $N\to\infty$. Hence the set of $t$ with $f_n(t)\ge a$ for infinitely many $n$ has measure $0$.
5. Since this is true for each $a$ in the countable sequence $1, 1/2, 1/3, \dots$, and a countable union of measure-$0$ sets has measure $0$,
$\lim_{n\to\infty} f_n(t)=0$

almost everywhere.

Theorem 7 (Strong Law of Large Numbers). For almost every $t$ (that is, for all $t$ except for a set of measure $0$)

$\underset{n\to \infty }{lim}\frac{{r}_{1}\left(t\right)+\cdots +{r}_{n}\left(t\right)}{n}=0.$

Remark. This proof resembles the third proof of the Strong Law in the section Strong Law of Large Numbers. It substitutes the analytic form of the direct half of the Borel-Cantelli Lemma to show the convergence almost everywhere.

Proof.

1. Set
${f}_{n}\left(t\right)={\left(\frac{{r}_{1}\left(t\right)+\cdots +{r}_{n}\left(t\right)}{n}\right)}^{4}$

and consider

$\underset{0}{\overset{1}{\int }}{f}_{n}\left(t\right)\phantom{\rule{0.3em}{0ex}}dt=\underset{0}{\overset{1}{\int }}{\left(\frac{{r}_{1}\left(t\right)+\cdots +{r}_{n}\left(t\right)}{n}\right)}^{4}\phantom{\rule{0.3em}{0ex}}dt.$

2. Using the Orthogonality of Rademacher Functions
$\underset{0}{\overset{1}{\int }}{\left(\frac{{r}_{1}\left(t\right)+\cdots +{r}_{n}\left(t\right)}{n}\right)}^{4}\phantom{\rule{0.3em}{0ex}}dt=\frac{n+\left(\genfrac{}{}{0.0pt}{}{4}{2}\right)\left(\genfrac{}{}{0.0pt}{}{n}{2}\right)}{{n}^{4}}\le \frac{C}{{n}^{2}}$

since the only terms that contribute positive values are the fourth powers and the pairs of squares.
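The counting in step 2 can be verified exactly for small $n$: each sign pattern $(\pm 1,\dots,\pm 1)$ corresponds to a dyadic interval of length $2^{-n}$ on which $r_1,\dots,r_n$ are constant, so the integral reduces to a finite average. A sketch of this check (an illustration added here, not part of the original text):

```python
from itertools import product
from math import comb

def fourth_moment(n):
    """Exact integral of (r_1(t)+...+r_n(t))**4 over [0,1]: the r_k are
    constant on each of the 2**n dyadic intervals of length 2**-n, one
    interval per sign pattern, so the integral is a plain average."""
    return sum(sum(signs) ** 4 for signs in product((1, -1), repeat=n)) / 2 ** n

for n in range(1, 9):
    assert fourth_moment(n) == n + comb(4, 2) * comb(n, 2)
print("fourth moment matches n + C(4,2)*C(n,2) for n = 1..8")
```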

3. Therefore
$\sum _{n=1}^{\infty }\underset{0}{\overset{1}{\int }}{f}_{n}\left(t\right)\phantom{\rule{0.3em}{0ex}}dt<\infty .$

4. By the preceding lemma, the analytic form of the direct half of the Borel-Cantelli Lemma,
$\underset{n\to \infty }{lim}{\left(\frac{{r}_{1}\left(t\right)+\cdots +{r}_{n}\left(t\right)}{n}\right)}^{4}=0$

almost everywhere, and so

$\underset{n\to \infty }{lim}\frac{{r}_{1}\left(t\right)+\cdots +{r}_{n}\left(t\right)}{n}=0$

almost everywhere.

Remark. Recall that ${r}_{k}\left(t\right)=1-2{𝜖}_{k}\left(t\right)$ and that ${𝜖}_{k}\left(t\right)$ is the $k$th bit in the binary expansion of $t$. Then the Strong Law of Large Numbers implies that for almost every $t$

$\underset{n\to \infty }{lim}\frac{{𝜖}_{1}\left(t\right)+\cdots +{𝜖}_{n}\left(t\right)}{n}=\frac{1}{2}.$

In other words, for almost every $t$ the binary expansion has asymptotically equal proportions of $0$'s and $1$'s; such a number is said to be (simply) normal in base $2$.
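This statement can be illustrated numerically. Drawing the binary digits $\epsilon_k(t)$ independently and uniformly is equivalent to drawing $t$ uniformly from $[0,1)$, so the following sketch (an illustration added here, not part of the original text) simulates a "typical" $t$ and tracks the running frequency of $1$'s.

```python
import random

# Drawing binary digits independently and uniformly is the same as
# drawing t uniformly from [0,1), so these digits represent a typical t.
rng = random.Random(42)
digits = [rng.randrange(2) for _ in range(100_000)]

for n in (100, 1_000, 10_000, 100_000):
    freq = sum(digits[:n]) / n  # fraction of 1's among the first n digits
    print(n, freq)
```

The running frequencies settle near $1/2$, as the Strong Law predicts for almost every $t$.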

#### Sources

This section is adapted from: Statistical Independence in Probability, Analysis, and Number Theory by Mark Kac, 1959, Mathematical Association of America, pages 1–35. Problems 2 and 3 are from the same source. The Cauchy criterion for uniform convergence is Theorem 7.8, page 134 in Principles of Mathematical Analysis second edition by W. Rudin, McGraw-Hill, 1964. The remarks about the sinc function are from Mathworld: Sinc function.

_______________________________________________________________________________________________ ### Algorithms, Scripts, Simulations

#### Scripts
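The scripts from the original web page are not reproduced in this text. As a stand-in, here is a short sketch (an assumption on my part, not the author's script) that evaluates the partial products of Vieta's formula $\frac{2}{\pi}=\frac{\sqrt{2}}{2}\cdot\frac{\sqrt{2+\sqrt{2}}}{2}\cdot\frac{\sqrt{2+\sqrt{2+\sqrt{2}}}}{2}\cdots$ from the Key Concepts.

```python
import math

def vieta_partial(terms):
    """Partial product of Vieta's formula for 2/pi: with s_0 = 0 and
    s_k = sqrt(2 + s_{k-1}), the k-th factor is s_k / 2."""
    prod = 1.0
    s = 0.0
    for _ in range(terms):
        s = math.sqrt(2 + s)
        prod *= s / 2
    return prod

for terms in (5, 10, 20):
    print(terms, vieta_partial(terms), abs(vieta_partial(terms) - 2 / math.pi))
```

The partial products converge to $2/\pi$ rapidly, with the error shrinking by roughly a factor of four per term.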

__________________________________________________________________________ ### Problems to Work for Understanding

1. Show by direct integration that
$\frac{1}{2\pi }\underset{0}{\overset{2\pi }{\int }}{\mathrm{e}}^{-\mathrm{i}\left(0\right)x}{cos}^{0}\left(x\right)\phantom{\rule{0.3em}{0ex}}dx=1$

and

$\frac{1}{2\pi }\underset{0}{\overset{2\pi }{\int }}{\mathrm{e}}^{-\mathrm{i}\left(2\cdot 0-1\right)x}cos\left(x\right)\phantom{\rule{0.3em}{0ex}}dx=1∕2.$

and

$\frac{1}{2\pi }\underset{0}{\overset{2\pi }{\int }}{\mathrm{e}}^{-\mathrm{i}\left(2\cdot 1-1\right)x}cos\left(x\right)\phantom{\rule{0.3em}{0ex}}dx=1∕2.$

2. For $n\ge 2$ and $0\le l\le n$, show that
$\frac{1}{2\pi }\underset{0}{\overset{2\pi }{\int }}{\mathrm{e}}^{-\mathrm{i}\left(2l-n\right)x}{cos}^{n}\left(x\right)\phantom{\rule{0.3em}{0ex}}dx=\frac{1}{{2}^{n}}\left(\genfrac{}{}{0.0pt}{}{n}{l}\right).$
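A numerical check of this identity (a check, not a proof; added here as an illustration): since the integrand is a trigonometric polynomial, an equally spaced Riemann sum over $[0,2\pi)$ reproduces its mean value essentially to machine precision.

```python
import cmath
import math

def mean_value(n, l, m=4096):
    """Equally spaced Riemann sum approximating
    (1/2pi) * integral_0^{2pi} exp(-i(2l-n)x) * cos(x)**n dx."""
    total = 0j
    for j in range(m):
        x = 2 * math.pi * j / m
        total += cmath.exp(-1j * (2 * l - n) * x) * math.cos(x) ** n
    return total / m

for n in range(2, 7):
    for l in range(n + 1):
        assert abs(mean_value(n, l) - math.comb(n, l) / 2 ** n) < 1e-9
print("identity verified numerically for n = 2..6")
```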

3. Every $t$ with $0\le t<1$ has a unique ternary expansion,
 $t=\frac{\eta_1(t)}{3}+\frac{\eta_2(t)}{3^2}+\frac{\eta_3(t)}{3^3}+\dots$ (6)

where ${\eta}_{k}\left(t\right)=0,1$ or $2$. Prove that ${\eta}_{i}\left(t\right)$ and ${\eta}_{j}\left(t\right)$ are independent for $i\ne j$.

4. Prove that
$\frac{\sin(x)}{x}=\prod_{k=1}^{\infty}\frac{1+2\cos\left(2x/3^{k}\right)}{3}$
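The partial products converge quickly, so the identity can be sanity-checked numerically (a check, not a proof; added here as an illustration):

```python
import math

def partial_product(x, terms):
    """prod_{k=1}^{terms} (1 + 2*cos(2*x/3**k)) / 3, which should
    converge to sin(x)/x as terms grows (for x != 0)."""
    prod = 1.0
    for k in range(1, terms + 1):
        prod *= (1 + 2 * math.cos(2 * x / 3 ** k)) / 3
    return prod

for x in (0.5, 1.0, 2.0, 3.0):
    print(x, partial_product(x, 30), math.sin(x) / x)
```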

5. For $0<p<1$ let
$T_p(t)=\begin{cases} t/p, & 0\le t\le p, \\ (t-p)/(1-p), & p<t\le 1, \end{cases}$

and let

$\epsilon_p(t)=\begin{cases} 1, & 0\le t\le p, \\ 0, & p<t\le 1. \end{cases}$

Plot the functions ${𝜖}_{1}^{\left(p\right)}\left(t\right)={𝜖}_{p}\left(t\right)$, ${𝜖}_{2}^{\left(p\right)}\left(t\right)={𝜖}_{p}\left({T}_{p}\left(t\right)\right)$, ${𝜖}_{3}^{\left(p\right)}\left(t\right)={𝜖}_{p}\left({T}_{p}\left({T}_{p}\left(t\right)\right)\right)$ and so on, and show that they are independent. This is the basis for an analytic model of the unfair coin. See the next problem.

6. Prove that the measure of the set on which
${𝜖}_{1}^{\left(p\right)}+{𝜖}_{2}^{\left(p\right)}+\cdots +{𝜖}_{n}^{\left(p\right)}=l$

where $0\le l\le n$ is

$\left(\genfrac{}{}{0.0pt}{}{n}{l}\right){p}^{l}{\left(1-p\right)}^{n-l}.$
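The binomial measure can be estimated by simulating the construction of Problem 5: draw $t$ uniformly, read off $\epsilon_1^{(p)},\dots,\epsilon_n^{(p)}$ by iterating $T_p$, and tally the sum. A sketch (an illustration added here, not part of the original text; function names are ad hoc):

```python
import math
import random

def T(p, t):
    """The map T_p from Problem 5."""
    return t / p if t <= p else (t - p) / (1 - p)

def eps(p, t):
    """The indicator eps_p(t): 1 on [0, p], 0 on (p, 1]."""
    return 1 if t <= p else 0

def sum_distribution(p, n, trials, rng):
    """Monte Carlo estimate of mu[eps_1 + ... + eps_n = l] for l = 0..n."""
    counts = [0] * (n + 1)
    for _ in range(trials):
        t = rng.random()
        s = 0
        for _ in range(n):
            s += eps(p, t)
            t = T(p, t)
        counts[s] += 1
    return [c / trials for c in counts]

p, n = 0.3, 4
estimate = sum_distribution(p, n, 50_000, random.Random(0))
for l in range(n + 1):
    exact = math.comb(n, l) * p ** l * (1 - p) ** (n - l)
    print(l, round(estimate[l], 3), round(exact, 3))
```

The estimated measures track the binomial probabilities $\binom{n}{l}p^l(1-p)^{n-l}$ claimed in the problem.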

__________________________________________________________________________ ### References

   Mark Kac. Statistical Independence in Probability, Analysis and Number Theory, volume 12 of The Carus Mathematical Monographs. Mathematical Association of America, 1959.

   Walter Rudin. Principles of Mathematical Analysis. McGraw-Hill, 1964.

__________________________________________________________________________

I check all the information on each page for correctness and typographical errors. Nevertheless, some errors may occur and I would be grateful if you would alert me to such errors. I make every reasonable effort to present current and accurate information for public use; however, I do not guarantee the accuracy or timeliness of information on this website. Your use of the information from this website is strictly voluntary and at your risk.

I have checked the links to external sites for usefulness. Links to external websites are provided as a convenience. I do not endorse, control, monitor, or guarantee the information contained in any external website. I don't guarantee that the links are active at all times. Use the links here with the same caution as you would all information on the Internet. This website reflects the thoughts, interests and opinions of its author. They do not explicitly represent official positions or policies of my employer.

Information on this website is subject to change without notice.