Steven R. Dunbar
Department of Mathematics
203 Avery Hall
Lincoln, NE 68588-0130
http://www.math.unl.edu
Voice: 402-472-3731
Fax: 402-472-8466

Topics in
Probability Theory and Stochastic Processes
Steven R. Dunbar

__________________________________________________________________________

Series with Random Signs

_______________________________________________________________________

Note: These pages are prepared with MathJax. MathJax is an open source JavaScript display engine for mathematics that works in all browsers. See http://mathjax.org for details on supported browsers, accessibility, copy-and-paste, and other features.

_______________________________________________________________________________________________

### Rating

Mathematicians Only: prolonged scenes of intense rigor.

_______________________________________________________________________________________________

### Section Starter Question

What is the difference between conditional convergence and absolute convergence of a series? Give an example of a conditionally convergent series and of an absolutely convergent series.

_______________________________________________________________________________________________

### Key Concepts

1. The harmonic series $\sum _{n=1}^{\infty }\frac{1}{n}$ diverges and the alternating harmonic series $\sum _{n=1}^{\infty }\frac{{\left(-1\right)}^{n+1}}{n}$ converges to $\ln 2$.
2. For independent random variables ${X}_{1},{X}_{2},\dots$ the sequence of partial sums $\sum _{j=1}^{n}{X}_{j}$ converges in probability if and only if it converges almost surely.
3. Let ${Y}_{k}$ be a random variable taking on the value $+1$ with probability $\frac{1}{2}$ or $-1$ with probability $\frac{1}{2}$. If $\sum _{k=1}^{\infty }{a}_{k}^{2}$ converges, then $\sum _{k=1}^{\infty }{Y}_{k}{a}_{k}$ converges almost surely.

__________________________________________________________________________

### Vocabulary

1. The harmonic series $\sum _{n=1}^{\infty }\frac{1}{n}$ diverges and the alternating harmonic series $\sum _{n=1}^{\infty }\frac{{\left(-1\right)}^{n+1}}{n}$ converges to $\ln 2$.
2. Let ${Y}_{k}$ be a random variable taking on the value $+1$ with probability $\frac{1}{2}$ or $-1$ with probability $\frac{1}{2}$. The random harmonic series is
$\sum _{n=1}^{\infty }\frac{{Y}_{n}}{n}$

__________________________________________________________________________

### Mathematical Ideas

#### Alternating Harmonic Series and Rearrangements

The harmonic series $\sum _{n=1}^{\infty }\frac{1}{n}$ diverges and the alternating harmonic series $\sum _{n=1}^{\infty }\frac{{\left(-1\right)}^{n+1}}{n}$ converges to $\ln 2$.
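Both statements are easy to check numerically. The following is a minimal Python sketch (the scripts in the Algorithms section below are in R; Python is used here only for illustration):

```python
import math

# Partial sums of the harmonic and alternating harmonic series.
N = 10**6
harmonic = sum(1.0 / n for n in range(1, N + 1))
alternating = sum((-1) ** (n + 1) / n for n in range(1, N + 1))

# The harmonic partial sum grows like ln(N) + gamma, about 14.39 here,
# while the alternating partial sum is within 1/(N+1) of ln(2).
print(harmonic)
print(alternating, math.log(2))
```

The harmonic partial sum keeps growing with $N$, while the alternating partial sum agrees with $\ln 2$ to within the first omitted term.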

As an aside, from a theorem due to Riemann: if $\alpha$ and $\beta$ are extended real numbers with $\alpha \le \beta$, then there exists a rearrangement $\sum _{n=1}^{\infty }{a}_{n}^{\prime }$ of the alternating harmonic series with partial sums ${s}_{n}^{\prime }$ such that

$\underset{n\to \infty }{liminf}\phantom{\rule{0.3em}{0ex}}{s}_{n}^{\prime }=\alpha ,\phantom{\rule{1em}{0ex}}\underset{n\to \infty }{limsup}\phantom{\rule{0.3em}{0ex}}{s}_{n}^{\prime }=\beta .$

For example, consider the rearrangement

$1+\frac{1}{3}-\frac{1}{2}+\frac{1}{5}+\frac{1}{7}-\frac{1}{4}+\frac{1}{9}+\frac{1}{11}-\frac{1}{6}+\dots$

in which two positive odd-denominator terms are always followed by one negative even-denominator term. Since

$\frac{1}{4k-3}+\frac{1}{4k-1}-\frac{1}{2k}=\frac{8k-3}{32{k}^{3}-32{k}^{2}+6k}>0$

then ${s}_{3}^{\prime }<{s}_{6}^{\prime }<{s}_{9}^{\prime }<\dots$ where ${s}_{n}^{\prime }$ is the $n$th partial sum of the rearranged series, so $\underset{n\to \infty }{limsup}\phantom{\rule{0.3em}{0ex}}{s}_{n}^{\prime }>{s}_{3}^{\prime }=\frac{5}{6}$. In fact, using symbolic algebra software,

$\sum _{k=1}^{\infty }\frac{8k-3}{32{k}^{3}-32{k}^{2}+6k}=\frac{3ln\left(2\right)}{2}\approx 1.03972.$
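The value $\frac{3\ln 2}{2}$ can be corroborated numerically by summing the rearrangement in groups of three terms. A minimal Python sketch:

```python
import math

# Sum the odd-odd-even rearrangement in groups of three terms:
# 1/(4k-3) + 1/(4k-1) - 1/(2k), the positive group quantity above.
K = 10**5
s = sum(1/(4*k - 3) + 1/(4*k - 1) - 1/(2*k) for k in range(1, K + 1))

print(s, 3 * math.log(2) / 2)  # both approximately 1.0397
```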

The dichotomy between the divergence of the harmonic series and the convergence of the alternating harmonic series, along with the possible finite sums of rearrangements of the alternating harmonic series, suggests considering a random harmonic series in which coin flips determine each sign.

#### Random Series

Let ${Y}_{k}$ be a random variable taking on the value $+1$ with probability $\frac{1}{2}$ or $-1$ with probability $\frac{1}{2}$. The random harmonic series is

$\sum _{n=1}^{\infty }\frac{{Y}_{n}}{n}.$

The convergence of the random harmonic series

$\sum _{n=1}^{\infty }\frac{{Y}_{n}}{n}$

is a tail event. Therefore, the probability of convergence is either $0$ or $1$. More generally, the question is to characterize series such that

$\sum _{n=1}^{\infty }{Y}_{n}{a}_{n}$

converges almost surely. According to M. Kac, H. Steinhaus and N. Wiener posed this question independently in 1922. However, H. Rademacher had already provided the answer in a 1922 paper. Even more generally, consider the sequence of partial sums ${S}_{n}=\sum _{k=1}^{n}{X}_{k}$ of independent random variables. The convergence is again a tail event. The basic question is to characterize series such that

$\sum _{n=1}^{\infty }{X}_{n}$

converges almost surely.

Lemma 1. Let ${S}_{1}$, ${S}_{2}$, …, ${S}_{N}$ be successive sums of independent random variables such that $\underset{j\le N}{sup}ℙ\left[|{S}_{N}-{S}_{j}|>\alpha \right]=c<1$. Then

 $ℙ\left[\underset{j\le N}{sup}|{S}_{j}|>2\alpha \right]\le \frac{1}{1-c}ℙ\left[|{S}_{N}|>\alpha \right]$ (1)

Remark. Breiman credits this lemma to Skorokhod. This inequality is like other inequalities in probability theory, for example, Kolmogorov’s Inequality for independent random variables such that $𝔼\left[{X}_{n}\right]=0$ and $𝔼\left[{X}_{n}^{2}\right]<\infty$

$ℙ\left[\underset{k\le n}{max}|{S}_{k}|>𝜖\right]\le \frac{1}{{𝜖}^{2}}\sum _{1}^{n}𝔼\left[{X}_{k}^{2}\right]$

see [2, Theorem 5.3.1, page 118]. It is also like submartingale inequalities, [2, Theorem 9.4.1, page 331]. If ${X}_{n}$ is a submartingale, then for each real $\lambda$

$\lambda ℙ\left[\underset{1\le k\le n}{max}{X}_{k}\ge \lambda \right]\le \underset{\underset{1\le k\le n}{max}{X}_{k}\ge \lambda }{\int }{X}_{n}\phantom{\rule{0.3em}{0ex}}\mathrm{d}ℙ\le 𝔼\left[{X}_{n}^{+}\right].$

Under the condition $𝔼\left[{X}_{n}\right]=0$, the sequence of sums is a martingale, so the submartingale inequality would apply but $ℙ\left[|{S}_{N}|>\alpha \right]$ is not immediately comparable to $\underset{\underset{1\le k\le n}{max}{X}_{k}\ge \lambda }{\int }{X}_{n}\phantom{\rule{0.3em}{0ex}}\mathrm{d}ℙ$, so the resemblance is about the term $ℙ\left[\underset{1\le k\le n}{max}{X}_{k}\ge \lambda \right]$ and the lemma is a diﬀerent inequality. Breiman [1, Proposition 5.13] has a slightly weaker submartingale inequality proved with the Optional Sampling Theorem: For $\lambda >0$

$\lambda ℙ\left[\underset{1\le k\le n}{max}{X}_{k}\ge \lambda \right]\le 𝔼\left[|{X}_{n}|\right].$

Proof.

1. For each sample point $\omega$ let ${j}^{\ast }\left(\omega \right)$ be the ﬁrst $j$ such that $|{S}_{j}|>2\alpha$. Then $\begin{array}{llll}\hfill ℙ\left[|{S}_{N}|>\alpha ,\underset{j\le N}{sup}|{S}_{j}|>2\alpha \right]& =\sum _{j=1}^{N}ℙ\left[|{S}_{N}|>\alpha ,{j}^{\ast }=j\right]\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & \ge \sum _{j=1}^{N}ℙ\left[|{S}_{N}-{S}_{j}|\le \alpha ,{j}^{\ast }=j\right].\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\end{array}$

(See the problems for a justiﬁcation of the last inequality.)

2. The set $\left\{{j}^{\ast }=j\right\}$ is in $\mathsc{ℱ}\left({X}_{1},\dots ,{X}_{j}\right)$ and ${S}_{N}-{S}_{j}$ is measurable with respect to $\mathsc{ℱ}\left({X}_{j+1},\dots ,{X}_{N}\right)$ so the last sum on the right above is
$\sum _{j=1}^{N}ℙ\left[|{S}_{N}-{S}_{j}|\le \alpha \right]\cdot ℙ\left[{j}^{\ast }=j\right].$

3. Use $ℙ\left[|{S}_{N}-{S}_{j}|\le \alpha \right]\ge 1-c$ to get $\begin{array}{llll}\hfill ℙ\left[|{S}_{N}|>\alpha ,\underset{j\le N}{sup}|{S}_{j}|>2\alpha \right]& \ge \left(1-c\right)\sum _{j=1}^{N}ℙ\left[{j}^{\ast }=j\right]\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & =\left(1-c\right)ℙ\left[\underset{j\le N}{sup}|{S}_{j}|>2\alpha \right].\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\end{array}$
4. Finally
$ℙ\left[|{S}_{N}|>\alpha \right]\ge ℙ\left[|{S}_{N}|>\alpha ,\underset{j\le N}{sup}|{S}_{j}|>2\alpha \right]$

so

$ℙ\left[|{S}_{N}|>\alpha \right]\ge \left(1-c\right)ℙ\left[\underset{j\le N}{sup}|{S}_{j}|>2\alpha \right].$

Dividing by $1-c$ completes the proof.
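Inequality (1) can be sanity-checked by simulation for a simple $\pm 1$ random walk. This Python sketch (with arbitrarily chosen $N$, $\alpha$, and trial count) estimates the left side, the right side, and the constant $c$ of the lemma:

```python
import random

random.seed(7)

N, alpha, trials = 16, 3.0, 100_000
lhs_count = 0                    # counts sup_j |S_j| > 2*alpha
rhs_count = 0                    # counts |S_N| > alpha
tail_counts = [0] * (N + 1)      # counts |S_N - S_j| > alpha for each j

for _ in range(trials):
    path, s = [], 0
    for _ in range(N):
        s += random.choice((-1, 1))
        path.append(s)
    if max(abs(x) for x in path) > 2 * alpha:
        lhs_count += 1
    if abs(path[-1]) > alpha:
        rhs_count += 1
    for j in range(1, N + 1):
        if abs(path[-1] - path[j - 1]) > alpha:
            tail_counts[j] += 1

c = max(t / trials for t in tail_counts[1:])   # estimate of c in the lemma
lhs = lhs_count / trials
rhs_bound = (rhs_count / trials) / (1 - c)
print(lhs, rhs_bound)   # the first should not exceed the second
```

With these parameters the estimated left side is well below the bound, as the lemma guarantees.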

Theorem 2. For independent random variables ${X}_{1},{X}_{2},\dots$ the sequence of partial sums $\sum _{j=1}^{n}{X}_{j}$ converges in probability if and only if it converges almost surely.

Remark. Chung credits this theorem to P. Lévy.

Proof.

$⇐$
Since almost sure convergence implies convergence in probability, the implication is automatic.
$⇒$
1. Assume the sequence of partial sums does not converge almost surely.
2. If a sequence ${s}_{n}$ of numbers does not converge, then there exists an $𝜖>0$ such that for every $m$
$\underset{n>m}{sup}|{s}_{n}-{s}_{m}|>𝜖$

so if $\sum _{k=1}^{n}{X}_{k}$ diverges with positive probability then there exists an $𝜖>0$ and a $\delta >0$ such that for every fixed $m$,

$ℙ\left[\underset{n>m}{sup}\left|\sum _{k=m+1}^{n}{X}_{k}\right|>𝜖\right]\ge \delta .$

3. Using (1) from Lemma 1
$ℙ\left[\underset{m<n\le N}{sup}\left|\sum _{k=m+1}^{n}{X}_{k}\right|>𝜖\right]\le \frac{1}{1-{C}_{m,N}}ℙ\left[\left|\sum _{k=m+1}^{N}{X}_{k}\right|>\frac{𝜖}{2}\right]$

where

${C}_{m,N}=\underset{m<n\le N}{sup}\phantom{\rule{0.3em}{0ex}}ℙ\left[\left|\sum _{k=n+1}^{N}{X}_{k}\right|>\frac{𝜖}{2}\right].$

4. If $\sum _{k=1}^{N}{X}_{k}$ is convergent in probability, then
$\sum _{k=1}^{N}{X}_{k}-\sum _{k=1}^{m}{X}_{k}=\sum _{k=m+1}^{N}{X}_{k}\stackrel{ℙ}{\to }0.$

That is, as $m,N\to \infty$,

$ℙ\left[\left|\sum _{k=m+1}^{N}{X}_{k}\right|>\frac{𝜖}{2}\right]\to 0.$

5. Therefore ${C}_{m,N}\to 0$ as $m,N\to \infty$. Letting $N\to \infty$ and then $m\to \infty$ in the inequality of step 3 gives
$\underset{m\to \infty }{lim}ℙ\left[\underset{n>m}{sup}\left|\sum _{k=m+1}^{n}{X}_{k}\right|>𝜖\right]=0,$
contradicting step 2.

The contradiction proves this direction of the implication.

Corollary 1. For independent random variables ${X}_{1},{X}_{2},\dots$ with $𝔼\left[{X}_{k}\right]=0$ for all $k$ and $\sum _{k=1}^{\infty }𝔼\left[{X}_{k}^{2}\right]<\infty$, the series $\sum _{k=1}^{\infty }{X}_{k}$ converges almost surely.

Remark. This corollary provides the convergence criterion for the general random signs for series problem stated at the start of this section.

Proof. Using the independence,

$𝔼\left[{\left(\sum _{k=m+1}^{n}{X}_{k}\right)}^{2}\right]=\sum _{k=m+1}^{n}𝔼\left[{X}_{k}^{2}\right]\to 0\phantom{\rule{1em}{0ex}}\text{as}\phantom{\rule{0.3em}{0ex}}m,n\to \infty .$

So the partial sums converge in second moment, and by [1, equation 2.46] convergence in second moment implies convergence in probability. By Theorem 2, convergence in probability of the partial sums of independent random variables implies almost sure convergence. □

Corollary 2. If $\sum _{k=1}^{\infty }{a}_{k}^{2}$ converges, then $\sum _{k=1}^{\infty }{Y}_{k}{a}_{k}$ converges almost surely.
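With ${a}_{k}=1∕k$ this is the random harmonic series; its summands are independent with mean $0$ and variance $1∕{k}^{2}$, so the limit $S$ has mean $0$ and variance $\sum 1∕{k}^{2}={\pi }^{2}∕6$. A Monte Carlo sketch in Python (truncation and trial count chosen arbitrarily):

```python
import math
import random

random.seed(11)

n_terms, trials = 1000, 5000
sums = []
for _ in range(trials):
    # one sample of the truncated random harmonic series
    s = sum(random.choice((-1, 1)) / k for k in range(1, n_terms + 1))
    sums.append(s)

mean = sum(sums) / trials
var = sum((x - mean) ** 2 for x in sums) / trials
print(mean, var, math.pi**2 / 6)  # mean near 0, variance near 1.645
```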

The following is a partial converse of Corollary 1:

Theorem 3. Let ${X}_{1},{X}_{2},\dots$ be independent random variables such that $𝔼\left[{X}_{k}\right]=0$ and $|{X}_{k}|\le \alpha <\infty$ for all $k$. Then the almost sure convergence of $\sum _{k=1}^{n}{X}_{k}$ implies $\sum _{k=1}^{\infty }𝔼\left[{X}_{k}^{2}\right]<\infty$.

Proof.

1. For any $\lambda >0$, define ${n}^{\left(\lambda \right)}\left(\omega \right)$ to be the first $n$ such that $\left|\sum _{k=1}^{n}{X}_{k}\left(\omega \right)\right|>\lambda$, with ${n}^{\left(\lambda \right)}\left(\omega \right)=\infty$ if no such $n$ exists, so that ${n}^{\left(\lambda \right)}$ is an extended integer random variable.

2. For any integer $N$ and $j\le N$, consider
$\underset{{n}^{\left(\lambda \right)}=j}{\int }{\left(\sum _{k=1}^{N}{X}_{k}\right)}^{2}\phantom{\rule{0.3em}{0ex}}\mathrm{d}ℙ.$

3. Since $\left\{{n}^{\left(\lambda \right)}=j\right\}\in \mathsc{ℱ}\left({X}_{1},\dots ,{X}_{j}\right)$, then by independence and since $𝔼\left[{X}_{k}\right]=0$ for all $k$
$\underset{{n}^{\left(\lambda \right)}=j}{\int }{\left(\sum _{k=1}^{N}{X}_{k}\right)}^{2}\phantom{\rule{0.3em}{0ex}}\mathrm{d}ℙ=\underset{{n}^{\left(\lambda \right)}=j}{\int }{\left(\sum _{k=1}^{j}{X}_{k}\right)}^{2}\phantom{\rule{0.3em}{0ex}}\mathrm{d}ℙ+\sum _{k=j+1}^{N}\underset{{n}^{\left(\lambda \right)}=j}{\int }{X}_{k}^{2}\phantom{\rule{0.3em}{0ex}}\mathrm{d}ℙ.$

4. On the set $\left\{{n}^{\left(\lambda \right)}=j\right\}$, $\left|\sum _{k=1}^{j-1}{X}_{k}\right|\le \lambda$. Since $|{X}_{j}|\le \alpha$, $\left|\sum _{k=1}^{j}{X}_{k}\right|\le \lambda +\alpha$.
5. By independence, for $k>j$
$\underset{{n}^{\left(\lambda \right)}=j}{\int }{X}_{k}^{2}\phantom{\rule{0.3em}{0ex}}\mathrm{d}ℙ=ℙ\left[{n}^{\left(\lambda \right)}=j\right]𝔼\left[{X}_{k}^{2}\right].$

6. Using these two estimates,
$\underset{{n}^{\left(\lambda \right)}=j}{\int }{\left(\sum _{k=1}^{N}{X}_{k}\right)}^{2}\phantom{\rule{0.3em}{0ex}}\mathrm{d}ℙ\le {\left(\lambda +\alpha \right)}^{2}ℙ\left[{n}^{\left(\lambda \right)}=j\right]+ℙ\left[{n}^{\left(\lambda \right)}=j\right]\sum _{k=1}^{N}𝔼\left[{X}_{k}^{2}\right].$

7. Sum this expression from $j=1$ to $j=N$ to get
$\underset{{n}^{\left(\lambda \right)}\le N}{\int }{\left(\sum _{k=1}^{N}{X}_{k}\right)}^{2}\phantom{\rule{0.3em}{0ex}}\mathrm{d}ℙ\le {\left(\lambda +\alpha \right)}^{2}ℙ\left[{n}^{\left(\lambda \right)}\le N\right]+ℙ\left[{n}^{\left(\lambda \right)}\le N\right]\sum _{k=1}^{N}𝔼\left[{X}_{k}^{2}\right].$

8. Also $\begin{array}{llll}\hfill \underset{{n}^{\left(\lambda \right)}>N}{\int }{\left(\sum _{k=1}^{N}{X}_{k}\right)}^{2}\phantom{\rule{0.3em}{0ex}}\mathrm{d}ℙ& \le {\lambda }^{2}ℙ\left[{n}^{\left(\lambda \right)}>N\right]\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & \le {\left(\lambda +\alpha \right)}^{2}ℙ\left[{n}^{\left(\lambda \right)}>N\right]\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & \phantom{\rule{2em}{0ex}}& \hfill \end{array}$
9. Add this to the previous inequality to get
$\sum _{k=1}^{N}𝔼\left[{X}_{k}^{2}\right]\le {\left(\lambda +\alpha \right)}^{2}+ℙ\left[{n}^{\left(\lambda \right)}\le N\right]\sum _{k=1}^{N}𝔼\left[{X}_{k}^{2}\right]$

or

$ℙ\left[{n}^{\left(\lambda \right)}>N\right]\sum _{k=1}^{N}𝔼\left[{X}_{k}^{2}\right]\le {\left(\lambda +\alpha \right)}^{2}.$

10. Letting $N\to \infty$,
$ℙ\left[{n}^{\left(\lambda \right)}=\infty \right]\sum _{k=1}^{\infty }𝔼\left[{X}_{k}^{2}\right]\le {\left(\lambda +\alpha \right)}^{2}.$

11. But since $\sum _{k=1}^{\infty }{X}_{k}$ converges almost surely there must exist a $\lambda$ such that
$ℙ\left[\underset{n}{sup}\left|\sum _{k=1}^{n}{X}_{k}\right|\le \lambda \right]>0$

implying $ℙ\left[{n}^{\left(\lambda \right)}=\infty \right]>0$ and therefore $\sum _{k=1}^{\infty }𝔼\left[{X}_{k}^{2}\right]<\infty$.

#### Analytic Proof

This subsection uses the analytic framework of the section Analytic Model, making essential use of Rademacher functions.

Theorem 4. If $\sum _{k=1}^{\infty }{a}_{k}^{2}$ converges, then $\sum _{k=1}^{\infty }{Y}_{k}{a}_{k}$ converges with probability $1$.

Remark. The probability of the convergence of

$\sum _{n=1}^{\infty }{Y}_{n}{a}_{n}$

is equivalent to ﬁnding the measure on $\left[0,1\right)$ of the set of $t$’s for which

$\sum _{n=1}^{\infty }{a}_{n}{r}_{n}\left(t\right)$

converges. The proof below adapts the proof by Kac, which in turn follows Paley and Zygmund. The proof depends on two well-known theorems from analysis and function theory.

Theorem 5 (Riesz-Fischer Theorem). Assume:

1. $\sum _{k=1}^{\infty }{a}_{k}^{2}$ converges.
2. The functions ${\varphi }_{1}\left(t\right),{\varphi }_{2}\left(t\right),\dots$ form an orthonormal sequence on $\left[0,1\right)$, that is,
$\underset{0}{\overset{1}{\int }}{\varphi }_{j}\left(t\right)\cdot {\varphi }_{k}\left(t\right)\phantom{\rule{0.3em}{0ex}}dt={\delta }_{jk}.$

Then: There exists a function $f\left(t\right)\in {L}^{2}\left[0,1\right]$ such that

$\underset{n\to \infty }{lim}\underset{0}{\overset{1}{\int }}{\left(f\left(t\right)-\sum _{k=1}^{n}{a}_{k}{\varphi }_{k}\left(t\right)\right)}^{2}\phantom{\rule{0.3em}{0ex}}dt=0.$

Remark. Since this is a purely analytic theorem independent of probability, this important theorem is just stated without proof. Advanced analysis texts will have a statement and proof, for example [8, page 89].

Theorem 6. Assume:

1. $\underset{0}{\overset{1}{\int }}|f\left(t\right)|\phantom{\rule{0.3em}{0ex}}dt<\infty ,$

2. Sequences ${\alpha }_{m}$ and ${\beta }_{m}$ satisfy ${\alpha }_{m}<{t}_{0}<{\beta }_{m}$ for all $m$ with
$\underset{m\to \infty }{lim}{\alpha }_{m}={t}_{0}=\underset{m\to \infty }{lim}{\beta }_{m}.$

Then: For almost every ${t}_{0}\in \left[0,1\right)$

$\underset{m\to \infty }{lim}\frac{1}{{\beta }_{m}-{\alpha }_{m}}\underset{{\alpha }_{m}}{\overset{{\beta }_{m}}{\int }}f\left(t\right)\phantom{\rule{0.3em}{0ex}}dt=f\left({t}_{0}\right).$

Remark. This theorem is an alternate statement of the fact that an indefinite integral is absolutely continuous. It is an advanced form of the Fundamental Theorem of Calculus. Most advanced analysis texts will have a statement and proof, for example [8, Theorem 8.17, page 176] or [6, Proposition 4.13, page 85; Theorem 5.13, page 106].

Proof of Theorem 4.

1. By the Lemma on Orthogonality of Rademacher Functions in the section Analytic Model, for $j\ne k$
$\underset{0}{\overset{1}{\int }}{r}_{j}\left(t\right){r}_{k}\left(t\right)\phantom{\rule{0.3em}{0ex}}dt=0.$

2. Therefore by the Riesz-Fischer Theorem, there exists a function $f\left(t\right)\in {L}^{2}\left[0,1\right]$ such that
$\underset{n\to \infty }{lim}\underset{0}{\overset{1}{\int }}{\left(f\left(t\right)-\sum _{k=1}^{n}{a}_{k}{r}_{k}\left(t\right)\right)}^{2}\phantom{\rule{0.3em}{0ex}}dt=0.$

3. Note ﬁrst that $f\left(t\right)\in {L}^{2}\left[0,1\right]$ implies that $\underset{0}{\overset{1}{\int }}|f\left(t\right)|\phantom{\rule{0.3em}{0ex}}dt<\infty$.
4. Now except for the possibility that ${t}_{0}$ is a dyadic rational (a countable set, therefore a set of measure $0$), deﬁne ${k}_{m}$ and consequently deﬁne ${\alpha }_{m}$ and ${\beta }_{m}$ by
${\alpha }_{m}=\frac{{k}_{m}}{{2}^{m}}<{t}_{0}<\frac{{k}_{m}+1}{{2}^{m}}={\beta }_{m}.$

5. Then with the Cauchy-Schwarz Inequality $\begin{array}{c}\left|\underset{{\alpha }_{m}}{\overset{{\beta }_{m}}{\int }}\left(f\left(t\right)-\sum _{k=1}^{n}{a}_{k}{r}_{k}\left(t\right)\right)\phantom{\rule{0.3em}{0ex}}dt\right|\\ \le {\left({\beta }_{m}-{\alpha }_{m}\right)}^{1∕2}{\left(\underset{0}{\overset{1}{\int }}{\left(f\left(t\right)-\sum _{k=1}^{n}{a}_{k}{r}_{k}\left(t\right)\right)}^{2}\phantom{\rule{0.3em}{0ex}}dt\right)}^{1∕2}.\end{array}$

6. Therefore,
$\underset{{\alpha }_{m}}{\overset{{\beta }_{m}}{\int }}f\left(t\right)\phantom{\rule{0.3em}{0ex}}dt=\sum _{k=1}^{\infty }{a}_{k}\underset{{\alpha }_{m}}{\overset{{\beta }_{m}}{\int }}{r}_{k}\left(t\right)\phantom{\rule{0.3em}{0ex}}dt.$

7. Note that
$\underset{{\alpha }_{m}}{\overset{{\beta }_{m}}{\int }}{r}_{k}\left(t\right)\phantom{\rule{0.3em}{0ex}}dt=0,\phantom{\rule{2em}{0ex}}k>m$

and

$\underset{{\alpha }_{m}}{\overset{{\beta }_{m}}{\int }}{r}_{k}\left(t\right)\phantom{\rule{0.3em}{0ex}}dt=\left({\beta }_{m}-{\alpha }_{m}\right){r}_{k}\left({t}_{0}\right),\phantom{\rule{2em}{0ex}}k\le m.$

so

$\frac{1}{{\beta }_{m}-{\alpha }_{m}}\underset{{\alpha }_{m}}{\overset{{\beta }_{m}}{\int }}f\left(t\right)\phantom{\rule{0.3em}{0ex}}dt=\sum _{k=1}^{m}{a}_{k}{r}_{k}\left({t}_{0}\right).$

By Theorem 6, as $m\to \infty$ the left side converges to $f\left({t}_{0}\right)$ for almost every ${t}_{0}$. Then $\sum _{k=1}^{m}{a}_{k}{r}_{k}\left({t}_{0}\right)$ converges to $f\left({t}_{0}\right)$ almost everywhere.
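The Rademacher functions in the proof can be realized through binary digits: ${r}_{k}\left(t\right)=1-2{b}_{k}$ where ${b}_{k}$ is the $k$th binary digit of $t$. The orthonormality used in step 1 can then be checked numerically; this Python sketch is exact because the midpoint rule on a fine dyadic grid integrates these step functions without error:

```python
# Rademacher function r_k(t) from the k-th binary digit of t in [0,1).
def rademacher(k, t):
    return 1 - 2 * (int(t * 2**k) % 2)

# Midpoint rule on a dyadic grid of 2**m cells; exact for these step
# functions since the integrand is constant on each cell.
def inner(j, k, m=12):
    n = 2**m
    return sum(rademacher(j, (i + 0.5) / n) * rademacher(k, (i + 0.5) / n)
               for i in range(n)) / n

print(inner(1, 1), inner(1, 2), inner(2, 3))  # 1.0 0.0 0.0
```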

Theorem 7 (Converse). If $\sum _{k=1}^{\infty }{a}_{k}^{2}$ diverges, then $\sum _{k=1}^{\infty }{Y}_{k}{a}_{k}$ diverges with probability $1$.

Remark. Again the probability that $\sum _{k=1}^{\infty }{Y}_{k}{a}_{k}$ converges is the measure of the set of $t$’s such that

 $\sum _{k=1}^{\infty }{a}_{k}{r}_{k}\left(t\right)$ (2)

converges.

Remark. Extend the deﬁnition of ${r}_{k}\left(t\right)$ to be periodic with period $1$, so ${r}_{k}\left(t+1\right)={r}_{k}\left(t\right)$. Then ${r}_{k}\left(t\right)={r}_{1}\left({2}^{k-1}t\right)$. The ﬁrst claim is that if $t$ is a point of convergence, then so is $t+\frac{1}{{2}^{l}}$ for $l=0,1,2,\dots$. The claim follows by noting that replacing $t$ by $t+\frac{1}{{2}^{l}}$ only changes a ﬁnite number of terms of the series. Another way to say this is to note that the indicator function of the convergence set has arbitrarily small periods. A lemma of analysis below says that a function with arbitrarily small periods must be constant almost everywhere. In this case the constant is either $0$ or $1$. Note that the characteristic function of the set of convergence of (2) is measurable since the set of convergence of a series of measurable functions is measurable.

Lemma 8. If a bounded measurable function $\varphi \left(t\right)$ is periodic with arbitrarily small periods, then $\varphi \left(t\right)$ is constant almost everywhere.

Proof.

1. Suppose $\varphi \left(t\right)$ has period $1∕{2}^{\ell }$. The integrals of $\varphi$ over the ${2}^{\ell }$ subintervals of length $1∕{2}^{\ell }$ are then all equal, so $\begin{array}{lll}\hfill I=\underset{0}{\overset{1}{\int }}\varphi \left(t\right)\phantom{\rule{0.3em}{0ex}}dt=\sum _{k=0}^{{2}^{\ell }-1}\underset{k∕{2}^{\ell }}{\overset{\left(k+1\right)∕{2}^{\ell }}{\int }}\varphi \left(t\right)\phantom{\rule{0.3em}{0ex}}dt={2}^{\ell }\underset{k∕{2}^{\ell }}{\overset{\left(k+1\right)∕{2}^{\ell }}{\int }}\varphi \left(t\right)\phantom{\rule{0.3em}{0ex}}dt.& \phantom{\rule{2em}{0ex}}& \hfill \end{array}$
2. Let ${t}_{0}$ be such that ${k}_{\ell }∕{2}^{\ell }<{t}_{0}<\left({k}_{\ell }+1\right)∕{2}^{\ell }$. From Theorem 6, for almost every ${t}_{0}$, as $\ell \to \infty$
${2}^{\ell }\underset{{k}_{\ell }∕{2}^{\ell }}{\overset{\left({k}_{\ell }+1\right)∕{2}^{\ell }}{\int }}\varphi \left(t\right)\phantom{\rule{0.3em}{0ex}}dt\to \varphi \left({t}_{0}\right).$

3. Thus $\varphi \left({t}_{0}\right)=I$ for almost every ${t}_{0}$. If $\varphi \left(t\right)$ is not assumed to be bounded, then apply the argument to ${\mathrm{e}}^{i\varphi \left(t\right)}$.

Proof of Converse.

1. To prove that $\sum _{k=1}^{\infty }{Y}_{k}{a}_{k}$ diverges, assume without loss of generality that $\underset{k\to \infty }{lim}{a}_{k}=0$, since otherwise the terms do not tend to $0$ and the series diverges for every choice of signs. Also, for purposes of proof by contradiction, assume that the measure of the set of convergence is positive.
2. Then there is a measurable function $g\left(t\right)$ such that
$\underset{n\to \infty }{lim}\sum _{k=1}^{n}{a}_{k}{r}_{k}\left(t\right)=g\left(t\right)$

almost everywhere. Then equivalently

$\underset{n\to \infty }{lim}exp\left(i\xi \sum _{k=1}^{n}{a}_{k}{r}_{k}\left(t\right)\right)=exp\left(i\xi g\left(t\right)\right)$

almost everywhere.

3. By Lebesgue’s Theorem on Dominated Convergence,
$\underset{n\to \infty }{lim}\underset{0}{\overset{1}{\int }}exp\left(i\xi \sum _{k=1}^{n}{a}_{k}{r}_{k}\left(t\right)\right)\phantom{\rule{0.3em}{0ex}}dt=\underset{0}{\overset{1}{\int }}exp\left(i\xi g\left(t\right)\right)\phantom{\rule{0.3em}{0ex}}dt.$

4. But
$\underset{0}{\overset{1}{\int }}exp\left(i\xi \sum _{k=1}^{n}{a}_{k}{r}_{k}\left(t\right)\right)\phantom{\rule{0.3em}{0ex}}dt=\prod _{k=1}^{n}cos\left(\xi {a}_{k}\right).$

5. The assumptions of $\sum _{k=1}^{\infty }{a}_{k}^{2}=\infty$ and ${a}_{k}\to 0$ imply (see the problems for a proof)
$\underset{n\to \infty }{lim}\prod _{k=1}^{n}cos\left(\xi {a}_{k}\right)=0.$

6. Therefore
$\underset{0}{\overset{1}{\int }}exp\left(i\xi g\left(t\right)\right)\phantom{\rule{0.3em}{0ex}}dt=0$

for every real $\xi \ne 0$.

7. Now take a sequence ${\xi }_{n}\to 0$ with ${\xi }_{n}\ne 0$ for all $n$ (for example, ${\xi }_{n}=\frac{1}{n}$). Then $\underset{n\to \infty }{lim}{\xi }_{n}g\left(t\right)=0$ for almost every $t$, and so $\underset{n\to \infty }{lim}{\mathrm{e}}^{\mathrm{i}{\xi }_{n}g\left(t\right)}=1$ for almost every $t$.
8. Again by Lebesgue’s Theorem on Dominated Convergence,
$\underset{n\to \infty }{lim}\underset{0}{\overset{1}{\int }}{\mathrm{e}}^{\mathrm{i}{\xi }_{n}g\left(t\right)}\phantom{\rule{0.3em}{0ex}}dt=1$

so $0=1$, the desired contradiction.

9. That is, $\sum _{k=1}^{\infty }{Y}_{k}{a}_{k}$ diverges almost surely.
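Step 4's evaluation $\underset{0}{\overset{1}{\int }}exp\left(i\xi \sum _{k=1}^{n}{a}_{k}{r}_{k}\left(t\right)\right)\phantom{\rule{0.3em}{0ex}}dt=\prod _{k=1}^{n}cos\left(\xi {a}_{k}\right)$ rests on the independence of the Rademacher functions, and can be verified numerically. A Python sketch with arbitrarily chosen coefficients, using the binary-digit realization of ${r}_{k}$ (the midpoint rule on a dyadic grid is exact here):

```python
import cmath
import math

# Rademacher function r_k(t) from the k-th binary digit of t in [0,1).
def rademacher(k, t):
    return 1 - 2 * (int(t * 2**k) % 2)

a = [1.0, 0.5, 0.25]   # arbitrary illustrative coefficients
xi = 1.7

# Left side: integral over [0,1) by the midpoint rule on a dyadic grid;
# exact since the integrand is constant on each dyadic cell of level 3.
n_grid = 2**10
lhs = sum(
    cmath.exp(1j * xi * sum(a[k] * rademacher(k + 1, (i + 0.5) / n_grid)
                            for k in range(len(a))))
    for i in range(n_grid)
) / n_grid

# Right side: the product of cosines.
rhs = math.prod(math.cos(xi * ak) for ak in a)
print(lhs.real, rhs)   # equal up to rounding; lhs.imag is ~0 by symmetry
```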

#### Distribution of the Random Harmonic Sum

Let

${S}_{n}=\sum _{k=1}^{n}\frac{{Y}_{k}}{k}$

be the partial sums of the random harmonic series, which converges almost surely to a random variable $S$ by Rademacher's theorem. The probability distribution of the $k$th summand is

${\mu }_{k}=\frac{1}{2}\left({\delta }_{1∕k}+{\delta }_{-1∕k}\right)$

where ${\delta }_{x}$ is the Dirac delta or point mass distribution. Using the fact that the distribution of a sum of independent random variables is the convolution of the distributions of the summands, the distribution of ${S}_{n}$ is

$\star \prod _{k=1}^{n}\frac{1}{2}\left({\delta }_{1∕k}+{\delta }_{-1∕k}\right)$

where the preceding star indicates repeated convolution. Now take the characteristic function (or Fourier transform) of the distribution of ${S}_{n}$. Using the fact that the characteristic function of a convolution of two distributions is the product of the characteristic functions

$\begin{array}{lll}\hfill \mathsc{ℱ}\left[{S}_{n}\right]=\prod _{k=1}^{n}\mathsc{ℱ}\left[\frac{1}{2}\left({\delta }_{1∕k}+{\delta }_{-1∕k}\right)\right]=\prod _{k=1}^{n}cos\left(\frac{x}{k}\right).& \phantom{\rule{2em}{0ex}}& \hfill \end{array}$

This product converges uniformly on bounded subsets as $n\to \infty$, and using a theorem on characteristic functions (for example, [1, Theorem 8.28, page 171] or [2, Theorem 6.3.2, page 161]) the sequence of probability measures converges in distribution to the probability measure $\mu$ of the random variable $S$. That is, the density of $S$ is

$\begin{array}{llll}\hfill f\left(t\right)& ={\mathsc{ℱ}}^{-1}\left[\prod _{k=1}^{\infty }cos\left(\frac{x}{k}\right)\right]\left(t\right)\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & =\frac{1}{2\pi }\underset{-\infty }{\overset{\infty }{\int }}{\mathrm{e}}^{-\mathrm{i}tx}\prod _{k=1}^{\infty }cos\left(\frac{x}{k}\right)\phantom{\rule{0.3em}{0ex}}dx\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & =\frac{1}{2\pi }\underset{-\infty }{\overset{\infty }{\int }}\left(cos\left(tx\right)-\mathrm{i}sin\left(tx\right)\right)\prod _{k=1}^{\infty }cos\left(\frac{x}{k}\right)\phantom{\rule{0.3em}{0ex}}dx\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & =\frac{1}{\pi }\underset{0}{\overset{\infty }{\int }}cos\left(tx\right)\prod _{k=1}^{\infty }cos\left(\frac{x}{k}\right)\phantom{\rule{0.3em}{0ex}}dx.\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\end{array}$

There does not seem to be a closed form of this integral representation for $f\left(t\right)$, so numerical integration evaluates an approximation of the function; see Figure 1. See the Algorithms and Scripts section for details.

Note that the distribution of sums is symmetric about $0$ as expected. The distribution is ﬂat near the maximum and numerically close to $1∕4$. This suggests that

$\underset{0}{\overset{\infty }{\int }}\prod _{k=1}^{\infty }cos\left(\frac{x}{k}\right)\phantom{\rule{0.3em}{0ex}}dx=\frac{\pi }{4}$

although there seems to be no proof of this evaluation.
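The flat top of the density can also be probed by simulation: estimate the density of $S$ near $0$ by the proportion of sampled sums falling in a small window. A rough Python sketch (truncation, trial count, and window width chosen arbitrarily):

```python
import random

random.seed(3)

n_terms, trials, half_width = 500, 10_000, 0.5
hits = 0
for _ in range(trials):
    # one sample of the truncated random harmonic series
    s = sum(random.choice((-1, 1)) / k for k in range(1, n_terms + 1))
    if abs(s) <= half_width:
        hits += 1

# crude density estimate at 0: P(|S| <= h) / (2h)
density_at_0 = hits / trials / (2 * half_width)
print(density_at_0)   # close to 1/4
```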

#### Simulations of the Distribution of the Random Harmonic Sum

Simulation of the distribution of the random sums is easy. Figure 2 is a probability histogram of $150,000$ sums of $\sum _{k=1}^{1000}{Y}_{k}∕k$. See the Algorithms and Scripts section for details. The values of ${Y}_{k}$ are $±1$ with equal probabilities $1∕2$. The probability histogram resembles the numerically computed distribution.

#### Sources

The aside about rearrangements is adapted from Principles of Mathematical Analysis by W. Rudin, McGraw-Hill, 1964, pages 66-67. The specific example of the odd-odd-even rearrangement is from the article “A Class of Simple Rearrangements of the Alternating Harmonic Series” by Maxim Gilula in The American Mathematical Monthly. The purely probabilistic lemmas and theorems are adapted from Probability by Leo Breiman, Addison-Wesley, 1968, pages 45-48. The analytic proof is from Statistical Independence in Probability, Analysis and Number Theory by M. Kac. The numerical example of the distribution of sums and the simulation of the sums is adapted from the article “Cosine Sums, Fourier Transforms, and Random Series” by Kent Morrison in The American Mathematical Monthly.

_______________________________________________________________________________________________

### Algorithms, Scripts, Simulations

#### Scripts

R

```r
library(pracma)

n <- 500
xmax <- 4
npts <- 201

# partial product of cosines: prod_{k=1}^{n} cos(x/k)
prodcos <- Vectorize(function(x) { cumprod(cos(x/(1:n)))[n] })

xaxis <- seq(-xmax, xmax, length = npts)

dist <- c()

for (omega in xaxis) {
    invcosker <- function(x) { cos(omega * x) }
    integrand <- function(x) { invcosker(x) * prodcos(x) }
    dist <- c(dist, (1/pi) * quadinf(integrand, 0, Inf)$Q)
}

plot(xaxis, dist, type = "l")
```
R

```r
k <- 150000 # the number of trials, set to be columns
n <- 1000   # the number of terms in the series, set as rows
p <- 0.5    # the probability of heads

# each column is one trial: n random signs times the harmonic terms 1/k
flips <- 2 * matrix((runif(n * k) <= p), n, k) - 1
harser <- 1/seq(n)
ranseq <- harser * flips
ranser <- apply(ranseq, 2, sum)

f <- hist(ranser, breaks = "FD", plot = TRUE, probability = TRUE)

plot(f, freq = FALSE, xlab = "", main = "")
```

__________________________________________________________________________

### Problems to Work for Understanding

1. In step 1 of Lemma 1, show that
$ℙ\left[|{S}_{N}|>\alpha ,{j}^{\ast }=j\right]\ge ℙ\left[|{S}_{N}-{S}_{j}|\le \alpha ,{j}^{\ast }=j\right].$

2. Give a careful and detailed proof of the corollary with the main conclusion about random series: For independent random variables ${X}_{1},{X}_{2},\dots$ with $𝔼\left[{X}_{k}\right]=0$ for all $k$ and $\sum _{k=1}^{\infty }𝔼\left[{X}_{k}^{2}\right]<\infty$, the series $\sum _{k=1}^{\infty }{X}_{k}$ converges almost surely.
3. Show that $\sum _{k=1}^{\infty }{a}_{k}^{2}=\infty$ and ${a}_{k}\to 0$ imply
$\underset{n\to \infty }{lim}\prod _{k=1}^{n}cos\left(\xi {a}_{k}\right)=0.$
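For intuition on problem 3: since ${a}_{k}\to 0$, eventually $cos\left(\xi {a}_{k}\right)\approx 1-{\xi }^{2}{a}_{k}^{2}∕2$, so the logarithm of the product behaves like $-\frac{{\xi }^{2}}{2}\sum {a}_{k}^{2}\to -\infty$. A Python sketch with the illustrative choices ${a}_{k}=1∕\sqrt{k}$ (so $\sum {a}_{k}^{2}$ diverges while ${a}_{k}\to 0$) and $\xi =1$:

```python
import math

xi = 1.0
K = 50_000

# log of the partial product prod cos(xi / sqrt(k)); each factor is
# positive here since xi/sqrt(k) <= 1 < pi/2 for all k >= 1
log_prod = sum(math.log(math.cos(xi / math.sqrt(k))) for k in range(1, K + 1))

print(log_prod, math.exp(log_prod))  # large negative log; product near 0
```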

__________________________________________________________________________

### References

[1]   Leo Breiman. Probability. Addison Wesley, 1968.

[2]   Kai Lai Chung. A Course in Probability Theory. Academic Press, 1974.

[3]   Maxim Gilula. A class of simple rearrangements of the alternating harmonic series. The American Mathematical Monthly, 125(3):245–256, 2018.

[4]   Mark Kac. Statistical Independence in Probability, Analysis and Number Theory, volume 12 of The Carus Mathematical Monographs. Mathematical Association of America, 1959.

[5]   Kent Morrison. Cosine sums, Fourier transforms, and random series. The American Mathematical Monthly, pages 716–724, October 1995.

[6]   H. L. Royden. Real Analysis. Macmillan, 1970.

[7]   Walter Rudin. Principles of Mathematical Analysis. McGraw-Hill, 1964.

[8]   Walter Rudin. Real and Complex Analysis. McGraw-Hill, 1974.

__________________________________________________________________________

__________________________________________________________________________

I check all the information on each page for correctness and typographical errors. Nevertheless, some errors may occur and I would be grateful if you would alert me to such errors. I make every reasonable eﬀort to present current and accurate information for public use, however I do not guarantee the accuracy or timeliness of information on this website. Your use of the information from this website is strictly voluntary and at your risk.

I have checked the links to external sites for usefulness. Links to external websites are provided as a convenience. I do not endorse, control, monitor, or guarantee the information contained in any external website. I don’t guarantee that the links are active at all times. Use the links here with the same caution as you would all information on the Internet. This website reﬂects the thoughts, interests and opinions of its author. They do not explicitly represent oﬃcial positions or policies of my employer.

Information on this website is subject to change without notice.