Steven R. Dunbar
Department of Mathematics
203 Avery Hall
Lincoln, NE 68588-0130
http://www.math.unl.edu
Voice: 402-472-3731
Fax: 402-472-8466

Topics in
Probability Theory and Stochastic Processes
Steven R. Dunbar

__________________________________________________________________________

Recurrence in Higher Dimensions

_______________________________________________________________________

Note: These pages are prepared with MathJax. MathJax is an open source JavaScript display engine for mathematics that works in all browsers. See http://mathjax.org for details on supported browsers, accessibility, copy-and-paste, and other features.

_______________________________________________________________________________________________

### Rating

Mathematicians Only: prolonged scenes of intense rigor.

_______________________________________________________________________________________________

### Section Starter Question

__________________________________________________________________________

### Key Concepts

1. Every centered nearest neighbor random walk on ${ℤ}^{2}$ is recurrent.
2. Every centered nearest neighbor random walk on ${ℤ}^{N}$ for $N\ge 3$ is transient.

__________________________________________________________________________

### Vocabulary

1. A random walk on the integer lattice ${ℤ}^{N}$ is the sequence of random variables
${\mathbf{T}}_{n}=\underset{k=1}{\overset{n}{\sum }}{\mathbf{Y}}_{k}.$

where ${\mathbf{Y}}_{k}\left(\omega \right) \in \left\{{\mathbf{e}}_{1},\dots ,{\mathbf{e}}_{m}\right\}\subseteq {ℤ}^{N}$ with a corresponding ﬁnite probability distribution $\left({p}_{1},\dots ,{p}_{m}\right)$ where ${p}_{1}+ \cdots +{p}_{m}= 1$.

2. The dimension of the random walk is the dimension of the subspace of ${ℝ}^{N}$ generated by the set of directions $\left({\mathbf{e}}_{1},\dots ,{\mathbf{e}}_{m}\right)$ that are the possible steps.
3. Let ${\mathbf{b}}_{1},\dots ,{\mathbf{b}}_{N}$ be the standard unit basis vectors for ${ℝ}^{N}$. The $2N$ possible steps $\left({\mathbf{e}}_{1},\dots ,{\mathbf{e}}_{2N}\right) = \left({\mathbf{b}}_{1},-{\mathbf{b}}_{1},\dots ,{\mathbf{b}}_{N},-{\mathbf{b}}_{N}\right)$ with a corresponding ﬁnite probability distribution $\left({p}_{1},\dots ,{p}_{2N}\right)$ deﬁnes the nearest neighbor random walk on ${ℤ}^{N}$.
4. In the special case that ${p}_{i}=\frac{1}{2N}$ for $i = 1,\dots , 2N$, the nearest neighbor random walk is the simple random walk in ${ℤ}^{N}$.
5. For each positive integer $n$ the Rodrigues formula for the (non-normalized) Legendre polynomial of degree $n$ is
${L}_{n}\left(x\right) =\frac{{\mathrm{d}}^{n}}{\mathrm{d}{x}^{n}}{\left({x}^{2}- 1\right)}^{n}.$

__________________________________________________________________________

### Mathematical Ideas

#### Random Walks on Integer Lattices

Let $\left\{{\mathbf{e}}_{1},\dots ,{\mathbf{e}}_{m}\right\}$ be a set of vectors in ${ℝ}^{N}$ with a corresponding ﬁnite probability distribution $\left({p}_{1},\dots ,{p}_{m}\right)$ where ${p}_{1}+ \cdots +{p}_{m}= 1$. The probability space associated to an elementary trial is ${\Omega }_{1}=\left\{1,\dots ,m\right\}$. The sample space for a sequence of independent trials of the experiment is ${\Omega }_{n}={\Omega }_{1}^{n}$ with the point probability

${ℙ}_{n}\left[{\omega }^{\left(n\right)}\right]=\underset{k=1}{\overset{n}{\prod }}{p}_{{\omega }_{k}}=\underset{i=1}{\overset{m}{\prod }}{p}_{i}^{|\left\{k:{\omega }_{k}=i,1\le k\le n\right\}|}$

where ${\omega }^{\left(n\right)}= \left({\omega }_{1},\dots ,{\omega }_{n}\right) \in {\Omega }_{n}$. The step random variables are ${\mathbf{Y}}_{k}\left(\omega \right) ={\mathbf{e}}_{{\omega }_{k}}$ for each $\omega \in {\Omega }_{n}$, and a random walk on the integer lattice is the sequence of random variables ${\mathbf{T}}_{n}$ deﬁned by

${\mathbf{T}}_{n}=\underset{k=1}{\overset{n}{\sum }}{\mathbf{Y}}_{k}.$
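The construction above translates directly into a short sketch: draw step indices from the distribution $\left({p}_{1},\dots ,{p}_{m}\right)$, map each index to its direction vector, and accumulate the partial sums. This is an illustration only (the function name and the choice of a two-dimensional direction set are mine, not from the source).

```python
import random

def random_walk(directions, probs, n, rng):
    """Generate T_1, ..., T_n for a random walk on the integer lattice.

    directions: list of step vectors e_1, ..., e_m (tuples in Z^N)
    probs:      the distribution (p_1, ..., p_m), summing to 1
    """
    position = [0] * len(directions[0])
    path = []
    for _ in range(n):
        step = rng.choices(directions, weights=probs)[0]  # Y_k = e_{omega_k}
        position = [x + dx for x, dx in zip(position, step)]
        path.append(tuple(position))                      # T_k = Y_1 + ... + Y_k
    return path

# Nearest neighbor directions in Z^2 with the uniform distribution p_i = 1/4.
dirs = [(1, 0), (-1, 0), (0, 1), (0, -1)]
walk = random_walk(dirs, [0.25] * 4, 1000, random.Random(1))
```

Each entry of `walk` differs from its predecessor by exactly one allowed step, so the path is a realization of the nearest neighbor walk.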

A random walk is recurrent if the walk almost surely returns to the origin inﬁnitely many times. That is, the walk is recurrent if the event $\left[{\mathbf{T}}_{n}=\mathbf{0}\text{ for inﬁnitely many }n\right]$ is an almost sure event. A random walk is transient if this event is negligible, alternatively, if the walk almost surely returns to the origin only ﬁnitely many times.

A random walk is centered if $𝔼\left[{\mathbf{Y}}_{i}\right]= 0$.

The dimension of the random walk is the dimension of the subspace of ${ℝ}^{N}$ generated by the set of directions $\left({\mathbf{e}}_{1},\dots ,{\mathbf{e}}_{m}\right)$ that are the possible steps.

Deﬁnition. Let $S$ be the semigroup generated under addition by the set $E =\left\{{\mathbf{e}}_{1},\dots ,{\mathbf{e}}_{m}\right\}$ of allowed steps in the random walk, where all probabilities in the associated distribution $\left({p}_{1},\dots ,{p}_{m}\right)$ are positive. That is, $S$ is the set of all elements in ${ℝ}^{N}$ that are a ﬁnite sum of elements of $E$, equivalently a sum of nonnegative integer multiples of elements of $E$. Alternatively, $S$ is the set of points attainable in the random walk.

The lemmas and theorems in the previous section about recurrence and transience carry over to higher dimensions with only minimal changes. This is because the proofs do not depend on the dimensionality of the state space. The proofs do depend on the probabilities of return to a speciﬁc value.

Deﬁnition. For $m,s,t \in ℕ$ with $s \le t$, let ${A}_{s,t}^{m}$ be the event consisting of the random walks that return to the starting point at least $m$ times between steps $s$ and $t$.

Lemma 1. For every $m$ and $t$

$ℙ\left[{A}_{1,t}^{m}\right]\le {\left(ℙ\left[{A}_{1,t}^{1}\right]\right)}^{m}.$

Lemma 2. For every $m$ and $t$

${\left(ℙ\left[{A}_{1,t}^{1}\right]\right)}^{m}\le ℙ\left[{A}_{1,mt}^{m}\right].$

Theorem 3. The walk is recurrent, that is, the walk almost surely returns to the origin inﬁnitely many times, if and only if the walk almost surely returns to the origin at least once.

Theorem 4 (Recurrence Criteria).

1. If
$\underset{n=1}{\overset{\infty }{\sum }}ℙ\left[{\mathbf{T}}_{n}=\mathbf{0}\right]< +\infty$

then the walk almost surely returns to the origin only ﬁnitely many times, that is, the walk is transient.

2. If
$\underset{n=1}{\overset{\infty }{\sum }}ℙ\left[{\mathbf{T}}_{n}=\mathbf{0}\right]= +\infty$

then the walk almost surely returns to the origin inﬁnitely many times, that is, the walk is recurrent.

Corollary 1. Every random walk is either recurrent or transient.

Deﬁnition. For $\mathbf{x}\in {ℤ}^{N}$, the norm in the following theorem can be any norm, but for convenience, the proof uses the ${L}_{\infty }$-norm, $\parallel \mathbf{x}\parallel = max\left\{|{x}_{1}|,\dots ,|{x}_{N}|\right\}$.

Lemma 5.

1. The random walk ${\mathbf{T}}_{n}$ on ${ℤ}^{N}$ satisﬁes
$\underset{n=1}{\overset{\infty }{\sum }}ℙ\left[{\mathbf{T}}_{n}=\mathbf{x}\right]\le 1 +\underset{n=1}{\overset{\infty }{\sum }}ℙ\left[{\mathbf{T}}_{n}=\mathbf{0}\right]$

for every $\mathbf{x}\in {ℤ}^{N}$

2. $\underset{n=1}{\overset{\infty }{\sum }}ℙ\left[\parallel {\mathbf{T}}_{n}\parallel \le K\right]\le {\left(2K + 1\right)}^{N}\left(1 +\underset{n=1}{\overset{\infty }{\sum }}ℙ\left[{\mathbf{T}}_{n}=\mathbf{0}\right]\right)$

for every $K > 0$.

Theorem 6.

1. If the random walk is transient, then every point of $S$ is almost surely reached only ﬁnitely many times.
2. If the random walk is transient, then almost surely $\underset{n\to \infty }{lim}\parallel {\mathbf{T}}_{n}\parallel = +\infty$.
3. If the random walk is recurrent, then $S$ is a group.
4. If the random walk is recurrent, then the random walk almost surely reaches every point in $S$ inﬁnitely many times.

#### Random Walks in Higher Dimensions

Let ${\mathbf{Y}}_{n}= \left({Y}_{n}^{1},{Y}_{n}^{2},\dots ,{Y}_{n}^{N}\right)$ be the sequence of ordered $N$-tuples of the steps of the random walk, and ${\mathbf{T}}_{n}= \left({T}_{n}^{1},{T}_{n}^{2},\dots ,{T}_{n}^{N}\right)$ be the sequence of ordered $N$-tuples of the random walk itself.

If the random walk is not centered, then there exists at least one index $i$ with $1 \le i \le N$ such that $𝔼\left[{Y}_{n}^{i}\right]\ne 0$. Then the Strong Law of Large Numbers implies that $\underset{n\to \infty }{lim}|{T}_{n}^{i}| = \infty$ almost surely. Thus the full walk in ${ℝ}^{N}$ is transient.

Whether a centered random walk is recurrent depends on its dimension. The general result is that every centered walk in dimension at most $2$ is recurrent. On the other hand, every random walk in dimension $3$ or greater is transient. The proof of these results in full generality requires Fourier analysis, so the results here are only for nearest neighbor random walks.

In the integer lattice ${ℤ}^{N}$ each point has $2N$ nearest neighbors. Let ${\mathbf{b}}_{1},\dots ,{\mathbf{b}}_{N}$ be the standard unit basis vectors for ${ℝ}^{N}$. The $2N$ possible steps $\left({\mathbf{e}}_{1},\dots ,{\mathbf{e}}_{2N}\right) = \left({\mathbf{b}}_{1},-{\mathbf{b}}_{1},\dots ,{\mathbf{b}}_{N},-{\mathbf{b}}_{N}\right)$ deﬁne the nearest neighbor random walk on ${ℤ}^{N}$. The random walk deﬁned by these possible steps is centered if and only if ${p}_{2i-1}={p}_{2i}$ for $i = 1,\dots ,N$. Without loss of generality, assume that ${p}_{2i-1}$ and ${p}_{2i}$ are non-zero for each $i$, so that the dimension of the random walk is $N$. In the special case that ${p}_{i}=\frac{1}{2N}$ we say that this walk is the simple random walk in ${ℤ}^{N}$.

Proposition 7. Suppose ${\left({\mathbf{T}}_{n}\right)}_{n\ge 1}$ is a nearest neighbor random walk in ${ℤ}^{N}$.

1. $ℙ\left[{\mathbf{T}}_{2n-1}=\mathbf{0}\right]= 0$

2. $ℙ\left[{\mathbf{T}}_{2n}=\mathbf{0}\right]=\underset{{k}_{1}+{k}_{2}+\cdots +{k}_{N}=n}{\sum }\frac{\left(2n\right)!}{{\left({k}_{1}!\phantom{\rule{0.3em}{0ex}}{k}_{2}!\phantom{\rule{0.3em}{0ex}}\dots {k}_{N}!\right)}^{2}}\underset{i=1}{\overset{N}{\prod }}{\left({p}_{2i-1}{p}_{2i}\right)}^{{k}_{i}}$

where each ${k}_{i}$ is a nonnegative integer.
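The formula in part 2 can be checked with exact rational arithmetic. The sketch below (illustrative, with helper names of my own) evaluates the sum with Python's `fractions` module; for the simple walk in ${ℤ}^{2}$ the values agree with the closed form ${g}_{n}^{2}$ derived later.

```python
from fractions import Fraction
from math import factorial
from itertools import product

def return_probability(n, p):
    """P[T_{2n} = 0] for a nearest neighbor walk, via Proposition 7.

    p = (p_1, ..., p_{2N}), paired so that p_{2i-1}, p_{2i} belong to the
    opposite directions +b_i, -b_i.
    """
    N = len(p) // 2
    total = Fraction(0)
    # Sum over all (k_1, ..., k_N) of nonnegative integers with sum n.
    for ks in product(range(n + 1), repeat=N):
        if sum(ks) != n:
            continue
        denom = 1
        for k in ks:
            denom *= factorial(k)
        term = Fraction(factorial(2 * n), denom * denom)
        for i, k in enumerate(ks):
            term *= (p[2 * i] * p[2 * i + 1]) ** k
        total += term
    return total

quarter = Fraction(1, 4)
# Simple walk in Z^2: P[T_2 = 0] = 1/4 and P[T_4 = 0] = 9/64.
p2 = return_probability(1, [quarter] * 4)
p4 = return_probability(2, [quarter] * 4)
```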

Proof.

1. If $\omega$ is an element of $\Omega$ that satisﬁes ${\mathbf{T}}_{m}\left(\omega \right) =\mathbf{0}$, then there exist integers ${k}_{1},{k}_{2},\dots ,{k}_{N}$ such that the ﬁnite sequence of steps $\left({\mathbf{Y}}_{1}\left(\omega \right),{\mathbf{Y}}_{2}\left(\omega \right),\dots ,{\mathbf{Y}}_{m}\left(\omega \right)\right)$ takes the value ${\mathbf{b}}_{i}$ exactly ${k}_{i}$ times and the value $-{\mathbf{b}}_{i}$ exactly ${k}_{i}$ times for $1 \le i \le N$.
2. For such an $\omega$ to exist, $m$ must be even. Setting $m = 2n$, then ${k}_{1}+{k}_{2}+ \cdots +{k}_{N}= n$. In particular no walk of odd length returns to the origin, which proves part 1.
3. Let the set of sets $\left\{{E}_{1},{F}_{1},{E}_{2},{F}_{2},\dots ,{E}_{N},{F}_{N}\right\}$ be a partition of the set $\left\{1,\dots , 2n\right\}$ such that $|{E}_{i}| = |{F}_{i}| ={k}_{i}$ for $1 \le i \le N$.
4. The event that the indices $j$ in ${E}_{i}$ are those satisfying ${\mathbf{Y}}_{j}={\mathbf{b}}_{i}$ and the indices $j$ in ${F}_{i}$ are those satisfying ${\mathbf{Y}}_{j}=-{\mathbf{b}}_{i}$ has probability ${\prod }_{i=1}^{N}{\left({p}_{2i-1}{p}_{2i}\right)}^{{k}_{i}}$, by independence.
5. To be absolutely explicit, for ﬁxed $\left({k}_{1},{k}_{2},\dots ,{k}_{N}\right)$ there are exactly $\begin{array}{c}\left(\genfrac{}{}{0.0pt}{}{2n}{{k}_{1}}\right)\cdot \left(\genfrac{}{}{0.0pt}{}{2n -{k}_{1}}{{k}_{1}}\right)\cdot \left(\genfrac{}{}{0.0pt}{}{2n - 2{k}_{1}}{{k}_{2}}\right)\cdot \left(\genfrac{}{}{0.0pt}{}{2n - 2{k}_{1}-{k}_{2}}{{k}_{2}}\right)\cdots \\ \cdot \left(\genfrac{}{}{0.0pt}{}{2n - 2{k}_{1}- 2{k}_{2}-\cdots - 2{k}_{N-1}}{{k}_{N}}\right)\end{array}$

ways of choosing the partition $\left\{{E}_{1},{F}_{1},{E}_{2},{F}_{2},\dots ,{E}_{N},{F}_{N}\right\}$. This product is the multinomial coeﬃcient

$\frac{\left(2n\right)!}{{\left({k}_{1}!\phantom{\rule{0.3em}{0ex}}{k}_{2}!\phantom{\rule{0.3em}{0ex}}\dots {k}_{N}!\right)}^{2}}$

6. Then summing over all possible disjoint partitions, it follows that
$ℙ\left[{\mathbf{T}}_{2n}=\mathbf{0}\right]=\underset{{k}_{1}+{k}_{2}+\cdots +{k}_{N}=n}{\sum }\frac{\left(2n\right)!}{{\left({k}_{1}!\phantom{\rule{0.3em}{0ex}}{k}_{2}!\phantom{\rule{0.3em}{0ex}}\dots {k}_{N}!\right)}^{2}}\underset{i=1}{\overset{N}{\prod }}{\left({p}_{2i-1}{p}_{2i}\right)}^{{k}_{i}}$

The following proofs use classical properties of Legendre polynomials. For each positive integer $n$ the Rodrigues formula for the (non-normalized) Legendre polynomial of degree $n$ is

${L}_{n}\left(x\right) =\frac{{\mathrm{d}}^{n}}{\mathrm{d}{x}^{n}}{\left({x}^{2}- 1\right)}^{n}.$

The ﬁrst few of the (non-normalized) Legendre polynomials are ${L}_{0}\left(x\right) = 1$, ${L}_{1}\left(x\right) = 2x$, ${L}_{2}\left(x\right) = 12{x}^{2}- 4$, ${L}_{3}\left(x\right) = 120{x}^{3}- 72x$. The standard normalized deﬁnition of the Legendre polynomials is

${P}_{n}\left(x\right) =\frac{1}{{2}^{n}n!}\frac{{\mathrm{d}}^{n}}{\mathrm{d}{x}^{n}}{\left({x}^{2}- 1\right)}^{n}.$

The two normalizations are related by ${P}_{n}\left(x\right) ={L}_{n}\left(x\right)∕\left({2}^{n}n!\right)$.
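The Rodrigues formula is easy to verify mechanically. The sketch below (names are mine, not from the source) represents a polynomial by its list of coefficients, expands ${\left({x}^{2}-1\right)}^{n}$, differentiates $n$ times, and recovers the values ${L}_{0}= 1$, ${L}_{1}= 2x$, ${L}_{2}= 12{x}^{2}- 4$, ${L}_{3}= 120{x}^{3}- 72x$ quoted above.

```python
def poly_mul(a, b):
    """Multiply coefficient lists (a[i] is the coefficient of x^i)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def poly_diff(a):
    """Differentiate a coefficient list once."""
    return [i * a[i] for i in range(1, len(a))]

def legendre_rodrigues(n):
    """Coefficients of L_n(x) = d^n/dx^n (x^2 - 1)^n."""
    p = [1]
    for _ in range(n):
        p = poly_mul(p, [-1, 0, 1])  # multiply by (x^2 - 1)
    for _ in range(n):
        p = poly_diff(p)
    return p
```

For example `legendre_rodrigues(2)` returns `[-4, 0, 12]`, the coefficient list of $12{x}^{2}- 4$.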

For each positive integer $n$ deﬁne

${g}_{n}=\frac{\left(2n\right)!}{{\left({2}^{n}\phantom{\rule{0.3em}{0ex}}n!\right)}^{2}}.$

Note that ${g}_{n}$ is the probability of exactly an equal number of heads and tails in $2n$ ﬂips of a fair coin. This fact is used later in the proofs, but here it shows that ${g}_{n}$ is a familiar quantity.

Lemma 8. For every positive integer $n$ and every nonzero real number $y$,

${L}_{n}\left(\frac{1}{2}\left(y +\frac{1}{y}\right)\right)={2}^{n}n!\underset{k=0}{\overset{n}{\sum }}{g}_{k}{g}_{n-k}{y}^{2k-n}.$

Proof.

1. The binomial expansion is
${\left({x}^{2}- 1\right)}^{n}=\underset{k=0}{\overset{n}{\sum }}{\left(-1\right)}^{k}\left(\genfrac{}{}{0.0pt}{}{n}{k}\right){x}^{2\left(n-k\right)}.$

2. Diﬀerentiating $n$ times term by term gives
${L}_{n}\left(x\right) = n!\underset{k=0}{\overset{⌊n∕2⌋}{\sum }}{\left(-1\right)}^{k}\frac{\left(2\left(n - k\right)\right)!}{k!\phantom{\rule{0.3em}{0ex}}\left(n - k\right)!\phantom{\rule{0.3em}{0ex}}\left(n - 2k\right)!}{x}^{n-2k}.$

3. Use the binomial theorem for the series expansion of ${\left(1 - u\right)}^{-1∕2}$
${\left(1 - u\right)}^{-1∕2}= 1 +\frac{1}{2}u +\frac{1}{2!}\cdot \frac{1}{2}\cdot \frac{3}{2}{u}^{2}+\frac{1}{3!}\cdot \frac{1}{2}\cdot \frac{3}{2}\cdot \frac{5}{2}{u}^{3}+ \dots .$

With mathematical induction on the coeﬃcients this is a way to show that the Taylor series for ${f}_{1}\left(u\right) ={\left(1 - u\right)}^{-1∕2}$ about $0$ is

$\frac{1}{\sqrt{1 - u}}=\underset{n=0}{\overset{\infty }{\sum }}{g}_{n}{u}^{n}.$

4. Set $u = 2tx -{t}^{2}$ to see that $\begin{array}{c}{\left(1 - 2xt +{t}^{2}\right)}^{-1∕2}= 1 +\frac{1}{2}\left(2xt -{t}^{2}\right) +\frac{1}{2!}\cdot \frac{1}{2}\cdot \frac{3}{2}{\left(2xt -{t}^{2}\right)}^{2}+\\ \frac{1}{3!}\cdot \frac{1}{2}\cdot \frac{3}{2}\cdot \frac{5}{2}{\left(2xt -{t}^{2}\right)}^{3}+ \dots .\end{array}$

5. Expand powers and collect in powers of $t$ (terms through ${t}^{3}$ shown; the coeﬃcients of ${t}^{4}$ and higher also draw on powers of $u$ beyond ${u}^{3}$)
${\left(1 - 2xt +{t}^{2}\right)}^{-1∕2}= 1 + xt +\frac{1}{2}\left(3{x}^{2}- 1\right){t}^{2}+\frac{1}{2}\left(5{x}^{3}- 3x\right){t}^{3}+ \cdots \phantom{\rule{0.3em}{0ex}}.$

Since ${L}_{0}\left(x\right) = 1$, ${L}_{1}\left(x\right) = 2x$, ${L}_{2}\left(x\right) = 12{x}^{2}- 4$ and ${L}_{3}\left(x\right) = 120{x}^{3}- 72x$, it becomes apparent that

$\frac{1}{\sqrt{1 - 2xt +{t}^{2}}}=\underset{n=0}{\overset{\infty }{\sum }}\frac{1}{{2}^{n}n!}{L}_{n}\left(x\right){t}^{n}.$

6. Let $y\ne 0$ and $x =\frac{1}{2}\left(y +\frac{1}{y}\right)$. Then
$1 - 2xt +{t}^{2}= \left(1 - yt\right)\left(1 -\frac{1}{y}t\right)$

so

$\frac{1}{\sqrt{1 - 2xt +{t}^{2}}}=\frac{1}{\sqrt{1 - yt}}\frac{1}{\sqrt{1 -\frac{1}{y}t}}=\left(\underset{n=0}{\overset{\infty }{\sum }}{g}_{n}{y}^{n}{t}^{n}\right)\left(\underset{n=0}{\overset{\infty }{\sum }}{g}_{n}{y}^{-n}{t}^{n}\right).$

7. Then multiplying the series, gathering powers of $t$, and comparing coeﬃcients
$\frac{{L}_{n}\left(x\right)}{{2}^{n}n!}=\underset{k=0}{\overset{n}{\sum }}{g}_{k}{g}_{n-k}{y}^{2k-n}.$
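Lemma 8 can be spot-checked with exact rational arithmetic, comparing the expanded Rodrigues formula from step 2 against the right-hand sum. The helper names below are ad hoc; this is a verification sketch only.

```python
from fractions import Fraction
from math import comb, factorial

def g(n):
    """g_n = (2n)! / (2^n n!)^2 = C(2n, n) / 4^n."""
    return Fraction(comb(2 * n, n), 4 ** n)

def legendre(n, x):
    """L_n(x) via the expanded Rodrigues formula (step 2 of the proof)."""
    total = Fraction(0)
    for k in range(n // 2 + 1):
        total += (Fraction((-1) ** k * factorial(2 * (n - k)),
                           factorial(k) * factorial(n - k) * factorial(n - 2 * k))
                  * x ** (n - 2 * k))
    return factorial(n) * total

def lemma8_rhs(n, y):
    """2^n n! * sum_{k=0}^{n} g_k g_{n-k} y^{2k-n}."""
    return (2 ** n * factorial(n)
            * sum(g(k) * g(n - k) * y ** (2 * k - n) for k in range(n + 1)))
```

With $y = 3∕2$, say, `legendre(n, (y + 1/y)/2)` and `lemma8_rhs(n, y)` agree exactly for small $n$.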

Remark.

Here is another view of the generating function for Legendre polynomials in step 5. The origin of the Legendre polynomials is in the treatment of potential problems from gravity and electrostatics. For example, the potential at $\mathbf{r}$ due to a point charge ${q}^{\prime }$ located at the point ${\mathbf{r}}^{\prime }$, assuming $r >{r}^{\prime }$, is given by

$\begin{array}{llll}\hfill \Phi \left(\mathbf{r}\right)& =\frac{1}{4\pi {𝜖}_{0}}\left(\frac{{q}^{\prime }}{|\mathbf{r}-\mathbf{{r}^{\prime }}|}\right)\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & =\frac{1}{4\pi {𝜖}_{0}}\left[\frac{{q}^{\prime }}{{\left({r}^{2}- 2\mathbf{r}\cdot \mathbf{{r}^{\prime }}+{\left({r}^{\prime }\right)}^{2}\right)}^{1∕2}}\right]\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & =\frac{{q}^{\prime }}{4\pi {𝜖}_{0}r}\left[\frac{1}{{\left(1 - 2\left({r}^{\prime }∕r\right) cos𝜃 +{\left({r}^{\prime }∕r\right)}^{2}\right)}^{1∕2}}\right]\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\end{array}$

Let $cos𝜃 = x$ and ${r}^{\prime }∕r = t$ to express the potential as

$\Phi \left(\mathbf{r}\right) =\frac{{q}^{\prime }}{4\pi {𝜖}_{0}r}{\left(1 - 2xt +{t}^{2}\right)}^{-1∕2}.$

The expansion of the electrostatic potential in spherical harmonics written with Legendre polynomials is

$\Phi \left(\mathbf{r}\right) =\frac{{q}^{\prime }}{4\pi {𝜖}_{0}r}\underset{n=0}{\overset{\infty }{\sum }}{P}_{n}\left(x\right){t}^{n}.$

In physics, the spherical harmonics are also called the monopole moment, dipole moment, quadrupole moment, and so on, leading to the multipole expansion. Comparing these expressions for the potential,

$\frac{1}{\sqrt{1 - 2xt +{t}^{2}}}=\underset{n=0}{\overset{\infty }{\sum }}\frac{1}{{2}^{n}n!}{L}_{n}\left(x\right){t}^{n}.$

This is the generating function in step 5 in the proof of Lemma 8.

Lemma 9. The two-dimensional centered random walk ${\left({\mathbf{T}}_{n}\right)}_{n=1}^{\infty }$ satisﬁes

$ℙ\left[{\mathbf{T}}_{2n}=\mathbf{0}\right]={g}_{n}\underset{k=0}{\overset{n}{\sum }}{g}_{k}{g}_{n-k}{\left(4p - 1\right)}^{2k}$

for every $n$.

Remark. Note that if $p = 1∕4$ and $q = 1∕4$, the probability of return to the origin in $n$ steps is just the ﬁrst term of the sum. Speciﬁcally $ℙ\left[{\mathbf{T}}_{2n}=\mathbf{0}\right]={g}_{n}\left({g}_{0}{g}_{n}\right) ={g}_{n}^{2}$. For small values of $n$, this is easy to conﬁrm. For example, let $L = \left(-1, 0\right)$, $R = \left(1, 0\right)$, $U = \left(0, 1\right)$, and $D = \left(0,-1\right)$. Return to the origin in $2$ steps can occur in $4$ ways, $RL$, $LR$, $UD$ and $DU$, each with probability $1∕16$. The total probability of return to the origin in $2$ steps is $4∕16 = 1∕4$. On the other hand, ${g}_{1}= 1∕2$ so ${g}_{1}^{2}= 1∕4$.

Return to the origin in $4$ steps can occur in several ways. The walk can take steps $L$, $R$, $L$ and $R$ in $\frac{4!}{2!\phantom{\rule{0.3em}{0ex}}2!}= 6$ ways. The walk can take steps $U$, $D$, $U$ and $D$ in $6$ ways. The walk can take steps $R$, $L$, $U$, $D$ in $4! = 24$ ways. This makes a total of $36$ paths with return to the origin, each with probability ${\left(1∕4\right)}^{4}= 1∕256$ for a total probability $9∕64$. On the other hand, ${g}_{2}= 3∕8$ so ${g}_{2}^{2}= 9∕64$.
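The counts in the remark can be confirmed by brute force, enumerating all ${4}^{4}= 256$ four-step paths of the simple walk (an illustrative check, not from the source):

```python
from itertools import product

# Steps of the simple walk in Z^2, each taken with probability 1/4.
steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]

# Count the four-step paths that end back at the origin.
returning = 0
for path in product(steps, repeat=4):
    if sum(s[0] for s in path) == 0 and sum(s[1] for s in path) == 0:
        returning += 1
```

The count is $36$, and $36∕256 = 9∕64$ as claimed.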

The proof below uses polynomial expansions to keep track of the possible combinations in longer collections of steps returning to the origin. Then the Rodrigues deﬁnition of the Legendre polynomials arises naturally to express the probability of return to the origin.

Proof.

1. Since the two-dimensional random walk is centered, ${p}_{1}={p}_{2}$ is the common probability of each of the two horizontal steps and ${p}_{3}={p}_{4}=\frac{1}{2}-{p}_{1}$ is the common probability of each of the two vertical steps. Set ${p}_{1}= p$ and ${p}_{3}= q$. (Note that this is not the frequent use of $p$ and $q$ for the complementary probabilities of heads and tails in a coin ﬂip.)
2. Proposition 7 with $N = 2$, ${k}_{1}= k$ and ${k}_{2}= n - k$ shows that $\begin{array}{llll}\hfill ℙ\left[{\mathbf{T}}_{2n}=\mathbf{0}\right]& =\underset{k=0}{\overset{n}{\sum }}\frac{\left(2n\right)!}{{\left(k!\phantom{\rule{0.3em}{0ex}}\left(n - k\right)!\right)}^{2}}{p}^{2k}{q}^{2\left(n-k\right)}\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & ={2}^{2n}{g}_{n}\underset{k=0}{\overset{n}{\sum }}{\left(\left(\genfrac{}{}{0.0pt}{}{n}{k}\right)\right)}^{2}{p}^{2k}{q}^{2\left(n-k\right)}.\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\end{array}$
3. From binomial expansions
${\left({p}^{2}+ x\right)}^{n}=\underset{k=0}{\overset{n}{\sum }}\left(\genfrac{}{}{0.0pt}{}{n}{k}\right){p}^{2k}{x}^{n-k}$

and

${\left(x +{q}^{2}\right)}^{n}=\underset{k=0}{\overset{n}{\sum }}\left(\genfrac{}{}{0.0pt}{}{n}{k}\right){x}^{k}{q}^{2\left(n-k\right)}.$

4. Comparing these expansions, see that $ℙ\left[{\mathbf{T}}_{2n}=\mathbf{0}\right]$ is the coeﬃcient of ${x}^{n}$ in the polynomial
$Q\left(x\right) ={2}^{2n}{g}_{n}{\left({p}^{2}+ x\right)}^{n}{\left(x +{q}^{2}\right)}^{n}.$

5. In the case that $p = q =\frac{1}{4}$
$ℙ\left[{\mathbf{T}}_{2n}=\mathbf{0}\right]={2}^{2n}{g}_{n}\left(\genfrac{}{}{0.0pt}{}{2n}{n}\right){\left(\frac{1}{4}\right)}^{2n}={g}_{n}^{2}$

using the binomial identity

$\underset{k=0}{\overset{n}{\sum }}{\left(\left(\genfrac{}{}{0.0pt}{}{n}{k}\right)\right)}^{2}=\left(\genfrac{}{}{0.0pt}{}{2n}{n}\right).$

6. In the case that $p\ne q$, expand $Q\left(x\right) ={2}^{2n}{g}_{n}{\left({p}^{2}{q}^{2}+ \left({p}^{2}+{q}^{2}\right)x +{x}^{2}\right)}^{n}$. By step 4, $ℙ\left[{\mathbf{T}}_{2n}=\mathbf{0}\right]$ is the coeﬃcient of ${x}^{n}$ in the polynomial $Q\left(x\right)$, so

$ℙ\left[{\mathbf{T}}_{2n}=\mathbf{0}\right]=\frac{1}{n!}{Q}^{\left(n\right)}\left(0\right).$

7. Completing the square on $\left({p}^{2}{q}^{2}+ \left({p}^{2}+{q}^{2}\right)x +{x}^{2}\right)$, rewrite $Q\left(x\right)$ as
$Q\left(x\right) ={2}^{2n}{g}_{n}{\left(\frac{{p}^{2}-{q}^{2}}{2}\right)}^{2n}{\left({\left(\frac{2x +{p}^{2}+{q}^{2}}{{p}^{2}-{q}^{2}}\right)}^{2}- 1\right)}^{n}.$

8. Using the variable ${x}^{\prime }=\frac{2x+{p}^{2}+{q}^{2}}{{p}^{2}-{q}^{2}}$, so that $\frac{\mathrm{d}{x}^{\prime }}{\mathrm{d}x}=\frac{2}{{p}^{2}-{q}^{2}}$, and the Rodrigues formula deﬁnition of Legendre polynomials, then
${Q}^{\left(n\right)}\left(0\right) ={2}^{2n}{g}_{n}{\left(\frac{{p}^{2}-{q}^{2}}{2}\right)}^{n}{L}_{n}\left(\frac{{p}^{2}+{q}^{2}}{{p}^{2}-{q}^{2}}\right).$

9. Since $q =\frac{1}{2}- p$, write this as
${Q}^{\left(n\right)}\left(0\right) ={2}^{2n}{g}_{n}{\left(\frac{4p - 1}{8}\right)}^{n}{L}_{n}\left(\frac{1}{2}\left(4p - 1 +\frac{1}{4p - 1}\right)\right).$

10. Finally, using Lemma 8
${Q}^{\left(n\right)}\left(0\right) = n!{g}_{n}\underset{k=0}{\overset{n}{\sum }}{g}_{k}{g}_{n-k}{\left(4p - 1\right)}^{2k}.$

Thus

$ℙ\left[{\mathbf{T}}_{2n}=\mathbf{0}\right]={g}_{n}\underset{k=0}{\overset{n}{\sum }}{g}_{k}{g}_{n-k}{\left(4p - 1\right)}^{2k}$

for every $n$.
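Lemma 9 can be verified exactly for small $n$ by comparing it with the sum from Proposition 7 specialized to $N = 2$. The helper names below are mine; `Fraction` keeps the arithmetic exact.

```python
from fractions import Fraction
from math import comb, factorial

def g(n):
    """g_n = C(2n, n) / 4^n."""
    return Fraction(comb(2 * n, n), 4 ** n)

def prob_prop7(n, p):
    """P[T_{2n} = 0] from Proposition 7 with N = 2 and q = 1/2 - p."""
    q = Fraction(1, 2) - p
    return sum(Fraction(factorial(2 * n), (factorial(k) * factorial(n - k)) ** 2)
               * p ** (2 * k) * q ** (2 * (n - k)) for k in range(n + 1))

def prob_lemma9(n, p):
    """P[T_{2n} = 0] from Lemma 9."""
    return g(n) * sum(g(k) * g(n - k) * (4 * p - 1) ** (2 * k)
                      for k in range(n + 1))
```

For instance both functions give $13∕50$ at $n = 1$, $p = 1∕5$, and at $p = 1∕4$ the sum collapses to ${g}_{n}^{2}$ as in the remark.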

Theorem 10. Every centered nearest neighbor random walk on ${ℤ}^{2}$ is recurrent.

Remark. The strategy of the proof is to use the formula for the probability of the uniform centered random walk with $p = 1∕4$ returning to the origin to show that the uniform centered random walk is recurrent. Since the probability of return when $p\ne 1∕4$ is at least as large as when $p = 1∕4$ (the $p = 1∕4$ value ${g}_{n}^{2}$ is the $k = 0$ term of the sum in Lemma 9), the divergence for $p = 1∕4$ implies that all centered nearest neighbor random walks in ${ℤ}^{2}$ are recurrent.

Proof.

1. Take $p = 1∕4$. By Lemma 9
$ℙ\left[{\mathbf{T}}_{2n}=\mathbf{0}\right]={g}_{n}^{2}=\frac{{\left(\left(2n\right)!\right)}^{2}}{{\left({2}^{n}n!\right)}^{4}}.$

2. Note that ${g}_{n}$ is the probability that a simple one-dimensional random walk returns to the origin after $2n$ steps.
3. Estimate ${g}_{n}$ with Stirling’s Formula to obtain
${g}_{n}\sim \frac{1}{\sqrt{n\pi }}.$

4. Thus
$\underset{n=1}{\overset{\infty }{\sum }}ℙ\left[{\mathbf{T}}_{2n}=\mathbf{0}\right]= \infty$

and by part 2 of Theorem 4 the walk is recurrent.
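The Stirling estimate in step 3 is easy to check numerically (a sketch of my own, not in the source): ${g}_{n}\sqrt{n\pi }$ approaches $1$, and the partial sums of ${g}_{n}^{2}$ grow without bound, like $\frac{1}{\pi }log\phantom{\rule{0.3em}{0ex}}n$.

```python
from math import pi, sqrt

# g_n satisfies g_0 = 1 and the recurrence g_n = g_{n-1} * (2n - 1) / (2n),
# which avoids the huge factorials in the closed form.
def g_values(n_max):
    gs = [1.0]
    for n in range(1, n_max + 1):
        gs.append(gs[-1] * (2 * n - 1) / (2 * n))
    return gs

gs = g_values(5000)

# Stirling: g_n * sqrt(n * pi) -> 1.
ratio = gs[200] * sqrt(200 * pi)

# Partial sums of P[T_{2n} = 0] = g_n^2 grow like (1/pi) * log n.
partial = sum(gs[n] ** 2 for n in range(1, 5001))
```

At $n = 200$ the ratio is already within a tenth of a percent of $1$, and the partial sum through $n = 5000$ is near $\left(log 5000 + \gamma \right)∕\pi$.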

Theorem 11. Every centered nearest neighbor random walk on ${ℤ}^{N}$ for $N \ge 3$ is transient.

Remark. The overall strategy is to show that the centered random walk in $N$ dimensions, where $N \ge 3$, has

$ℙ\left[{\mathbf{T}}_{2n}=\mathbf{0}\right]= O\left({n}^{-N∕2}\right).$

This implies that $\underset{n}{\sum }ℙ\left[{\mathbf{T}}_{n}=\mathbf{0}\right]$ converges so by part 1 of Theorem 4 the walk is transient. To be speciﬁc, this proof is just for $N = 3$. Since higher dimensional lattices contain ${ℤ}^{3}$, random walks in ${ℤ}^{N}$ are also transient. The steps in the overall strategy are to ﬁrst prove some inequalities about binomial coeﬃcients and factorials. Then break the sum expressing the probability of return to the origin into $4$ parts. The ﬁrst $3$ parts express the contributions where the number of steps in each of the three coordinate directions is “far” from $np$, $nq$, and $nr$ respectively. Using the Large Deviations Estimate, these three parts are each exponentially small. The fourth sum is the probability that the number of steps in each of the three coordinate directions is “approximately” $np$, $nq$, and $nr$. Using the binomial estimates, this probability is of order ${n}^{-3∕2}$. Then the sum of probabilities of returns to the origin is a convergent series and the walk is transient.

Proof.

1. Begin with the observation that $\frac{1}{{\left(n!\right)}^{2}}\le \frac{{2}^{2n}}{\left(2n\right)!}$. This follows from the binomial expansion estimate ${\left(1 + 1\right)}^{2n}\ge \left(\genfrac{}{}{0.0pt}{}{2n}{n}\right)$.
2. Also use the estimate that there is a $c > 0$ such that
 $\frac{1}{{\left(n!\right)}^{2}}\le c\frac{{2}^{2n}}{\sqrt{n}\left(2n\right)!}$ (1)

for every $n \ge 1$. This follows from ${g}_{n}\sim \frac{1}{\sqrt{n\pi }}$, already used above in step 3 in the proof of recurrence for the random walk in ${ℤ}^{2}$.

3. Let the step probabilities in the centered random walk in ${ℤ}^{3}$ be $p ={p}_{1}={p}_{2}$, $q ={p}_{3}={p}_{4}$, $r ={p}_{5}={p}_{6}$, with $p,q,r > 0$ and $p + q + r = 1∕2$. Using Proposition 7
$ℙ\left[{\mathbf{T}}_{2n}=\mathbf{0}\right]=\underset{i+j+k=n}{\sum }\frac{\left(2n\right)!}{{\left(i!\phantom{\rule{0.3em}{0ex}}j!\phantom{\rule{0.3em}{0ex}}k!\right)}^{2}}{p}^{2i}{q}^{2j}{r}^{2k}.$

4. Make the following deﬁnitions:
1. $I\left(p,n\right) =\left\{i\phantom{\rule{0.3em}{0ex}}:\phantom{\rule{0.3em}{0ex}}0 \le i \le n,\left|p -\frac{i}{2n}\right|<\frac{p}{2}\right\}.$

An equivalent deﬁnition is

 $I\left(p,n\right) =\left\{i\phantom{\rule{0.3em}{0ex}}:\phantom{\rule{0.3em}{0ex}}0 \le i \le n,pn < i < 3pn\right\}.$ (2)

See Figure 2 for an illustration of the ﬁrst deﬁnition.

2. ${A}_{p}=\underset{\begin{array}{c}i+j+k=n,\\ i\notin I\left(p,n\right)\end{array}}{\sum }\frac{\left(2n\right)!}{{\left(i!\phantom{\rule{0.3em}{0ex}}j!\phantom{\rule{0.3em}{0ex}}k!\right)}^{2}}{p}^{2i}{q}^{2j}{r}^{2k}$

3.  $B =\underset{\begin{array}{c}i+j+k=n,\\ i\in I\left(p,n\right),j\in I\left(q,n\right),k\in I\left(r,n\right)\end{array}}{\sum }\frac{\left(2n\right)!}{{\left(i!\phantom{\rule{0.3em}{0ex}}j!\phantom{\rule{0.3em}{0ex}}k!\right)}^{2}}{p}^{2i}{q}^{2j}{r}^{2k}$ (3)
5. By step 1 $\begin{array}{lll}\hfill {A}_{p}\le \underset{\begin{array}{c}i+j+k=n\\ i\notin I\left(p,n\right)\end{array}}{\sum }\frac{\left(2n\right)!\phantom{\rule{0.3em}{0ex}}{2}^{2n}}{\left(2i\right)!\phantom{\rule{0.3em}{0ex}}\left(2j\right)!\phantom{\rule{0.3em}{0ex}}\left(2k\right)!}{p}^{2i}{q}^{2j}{r}^{2k}.& \phantom{\rule{2em}{0ex}}& \hfill \end{array}$
6. Rewriting the sum in the inequality
${A}_{p}\le \underset{\begin{array}{c}0\le i\le n\\ i\notin I\left(p,n\right)\end{array}}{\sum }\frac{\left(2n\right)!\phantom{\rule{0.3em}{0ex}}{2}^{2n}}{\left(2i\right)!}{p}^{2i}\underset{j=0}{\overset{n-i}{\sum }}\frac{1}{\left(2j\right)!\phantom{\rule{0.3em}{0ex}}\left(\left(2\left(n - i - j\right)\right)!\right)}{q}^{2j}{r}^{2\left(n-i-j\right)}.$

7. By the binomial expansion (the expansion of ${\left(q + r\right)}^{2\left(n-i\right)}$ contains the even-power terms below along with the odd-power terms, all nonnegative, so the even-power sum is bounded above),
$\underset{j=0}{\overset{n-i}{\sum }}\frac{1}{\left(2j\right)!\phantom{\rule{0.3em}{0ex}}\left(\left(2\left(n - i - j\right)\right)!\right)}{q}^{2j}{r}^{2\left(n-i-j\right)}\le \frac{1}{\left(2\left(n - i\right)\right)!}{\left(q + r\right)}^{2\left(n-i\right)}.$

8. Combine the last two expressions, express the factorials as a binomial coeﬃcient, and factor the powers of $2$ through to rewrite the inequality as
 ${A}_{p}\le \underset{\begin{array}{c}0\le i\le n\\ i\notin I\left(p,n\right)\end{array}}{\sum }\left(\genfrac{}{}{0.0pt}{}{2n}{2i}\right){\left(2p\right)}^{2i}{\left(2\left(q+r\right)\right)}^{2\left(n-i\right)}.$ (4)
9. When $i\notin I\left(p,n\right)$ we have $\left|\frac{2i}{2n}- 2p\right|\ge p$. Therefore, the upper bound for ${A}_{p}$ is a probability controlled by the large deviations estimate: Let ${S}_{n}^{\prime }$ be a binomial random variable with parameters $2n$ and $2p$. Then (4) becomes
${A}_{p}\le ℙ\left[\left|2p -\frac{{S}_{n}^{\prime }}{2n}\right|\ge p\right].$

10. By the Large Deviations Estimate,
${A}_{p}\le {\mathrm{e}}^{-nd}$

for a constant $d > 0$. Likewise the sums ${A}_{q}$ and ${A}_{r}$ of

$\frac{\left(2n\right)!}{{\left(i!\phantom{\rule{0.3em}{0ex}}j!\phantom{\rule{0.3em}{0ex}}k!\right)}^{2}}{p}^{2i}{q}^{2j}{r}^{2k}$

for $j\notin I\left(q,n\right)$ and $k\notin I\left(r,n\right)$ are also bounded above by exponentially small quantities.

11. Combining the exponential inequalities for $i\notin I\left(p,n\right)$, $j\notin I\left(q,n\right)$ and $k\notin I\left(r,n\right)$ leaves only estimating the quantity $B$, where by (2) and its analogues for $q$ and $r$ the indices $i$, $j$, $k$ satisfy $i > np$, $j > nq$, and $k > nr$ respectively. By (1),
$B \le \underset{i+j+k=n}{\sum }\left(2n\right)!\frac{c{2}^{2i}}{\sqrt{i}\left(2i\right)!}\frac{c{2}^{2j}}{\sqrt{j}\left(2j\right)!}\frac{c{2}^{2k}}{\sqrt{k}\left(2k\right)!}{p}^{2i}{q}^{2j}{r}^{2k}$

so

$B \le \frac{{c}^{3}}{\sqrt{pqr}{n}^{3∕2}}\underset{\begin{array}{c}i+j+k=n\\ i\in I\left(p,n\right),j\in I\left(q,n\right),k\in I\left(r,n\right)\end{array}}{\sum }\frac{\left(2n\right)!}{\left(2i\right)!\phantom{\rule{0.3em}{0ex}}\left(2j\right)!\phantom{\rule{0.3em}{0ex}}\left(2k\right)!}{\left(2p\right)}^{2i}{\left(2q\right)}^{2j}{\left(2r\right)}^{2k}.$

12. By the multinomial expansion
$\underset{{i}^{\prime }+{j}^{\prime }+{k}^{\prime }=2n}{\sum }\frac{\left(2n\right)!}{\left({i}^{\prime }\right)!\phantom{\rule{0.3em}{0ex}}\left({j}^{\prime }\right)!\phantom{\rule{0.3em}{0ex}}\left({k}^{\prime }\right)!}{\left(2p\right)}^{{i}^{\prime }}{\left(2q\right)}^{{j}^{\prime }}{\left(2r\right)}^{{k}^{\prime }}={\left(2p + 2q + 2r\right)}^{2n}= 1,$

and the sum in step 11 consists of the terms with ${i}^{\prime }= 2i$, ${j}^{\prime }= 2j$, ${k}^{\prime }= 2k$, so it is at most $1$.

13. Hence
$B \le \frac{{c}^{3}}{\sqrt{pqr}{n}^{3∕2}}$

which is enough to complete the proof of transience using the convergence criterion, part 1 of the Recurrence Criteria, Theorem 4.
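The constant $c$ in inequality (1) can be pinned down numerically. Rearranged, (1) says ${g}_{n}\sqrt{n}\le c$ for $n \ge 1$, and since ${g}_{n}\sqrt{n\pi }$ increases toward $1$, any $c \ge 1∕\sqrt{\pi }\approx 0.564$ works. A quick check (illustrative only):

```python
from math import pi, sqrt

# g_n via the recurrence g_n = g_{n-1} * (2n - 1) / (2n).
g, worst = 1.0, 0.0
for n in range(1, 2001):
    g *= (2 * n - 1) / (2 * n)
    # Inequality (1) rearranges to g_n * sqrt(n) <= c.
    worst = max(worst, g * sqrt(n))
# worst stays below the supremum 1 / sqrt(pi) and approaches it from below.
```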

#### Application

Consider $N$ simultaneous fair coin-tossing games carried on independently of each other. In terms of random variables, the games are

$\begin{array}{llll}\hfill & {Y}_{1}^{\left(1\right)},{Y}_{2}^{\left(1\right)},\dots \phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & {Y}_{1}^{\left(2\right)},{Y}_{2}^{\left(2\right)},\dots \phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & ⋮\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & {Y}_{1}^{\left(N\right)},{Y}_{2}^{\left(N\right)},\dots \phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\end{array}$

where each ${Y}_{i}^{\left(j\right)}= ±1$ is the $i$th Heads or Tails outcome in the $j$th game. Let ${T}_{n}^{\left(j\right)}=\sum _{i=1}^{n}{Y}_{i}^{\left(j\right)}$ be the cumulative fortune in the $j$th game. Consider the $N$-tuple ${\mathbf{T}}_{n}= \left({T}_{n}^{\left(1\right)},\dots ,{T}_{n}^{\left(N\right)}\right)$ of fortunes at time $n$. Then ${\mathbf{T}}_{n}=\mathbf{0}$ if and only if equalization takes place in all $N$ games simultaneously. By independence $ℙ\left[{\mathbf{T}}_{2n}=\mathbf{0}\right]={g}_{n}^{N}\sim {\left(n\pi \right)}^{-N∕2}$, so by the Recurrence Criteria, Theorem 4, simultaneous equalization almost surely occurs inﬁnitely often when $N \le 2$ and only ﬁnitely often when $N \ge 3$.
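Since the $N$ games are independent and each equalizes at time $2n$ with probability ${g}_{n}$, $ℙ\left[{\mathbf{T}}_{2n}=\mathbf{0}\right]={g}_{n}^{N}$. A numeric sketch of the dichotomy (my own illustration): the partial sums of these probabilities keep growing for $N = 2$ but stay bounded for $N = 3$.

```python
# g_n = P[equalization at step 2n in one game], via g_n = g_{n-1}(2n-1)/(2n).
g = 1.0
sum2 = sum3 = 0.0
for n in range(1, 100001):
    g *= (2 * n - 1) / (2 * n)
    sum2 += g ** 2  # two simultaneous games: terms ~ 1/(pi n), divergent
    sum3 += g ** 3  # three simultaneous games: terms ~ (pi n)^(-3/2), convergent
```

By the Recurrence Criteria, the divergent series for $N = 2$ gives recurrence and the convergent series for $N = 3$ gives transience.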

#### Sources

This section is adapted from: E. Lesigne, Heads or Tails: An Introduction to Limit Theorems in Probability, Chapter 13, American Mathematical Society, Student Mathematical Library, Volume 28, 2005. The remark on the connection of the generating function for the Legendre polynomials to the potential for a charge is adapted from The Legendre Polynomials. The application is from L. Breiman, Probability, Section 3.7, Problem 18, page 58.

_______________________________________________________________________________________________

### Algorithms, Scripts, Simulations

#### Scripts
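The source lists no scripts for this section. Below is a minimal Monte Carlo sketch (script and function names are mine) that contrasts recurrence in ${ℤ}^{2}$ with transience in ${ℤ}^{3}$ by estimating the fraction of simple random walks that revisit the origin within a fixed horizon.

```python
import random

def fraction_returning(dim, steps, trials, rng):
    """Estimate P[simple walk in Z^dim revisits 0 within `steps` steps]."""
    hits = 0
    for _ in range(trials):
        pos = [0] * dim
        for _ in range(steps):
            axis = rng.randrange(dim)        # pick a coordinate direction
            pos[axis] += rng.choice((-1, 1)) # step +b_i or -b_i
            if all(x == 0 for x in pos):
                hits += 1
                break
    return hits / trials

rng = random.Random(7)
frac2 = fraction_returning(2, 4000, 400, rng)
frac3 = fraction_returning(3, 4000, 400, rng)
```

Both numbers are finite-horizon estimates only, but with this horizon the two-dimensional fraction is reliably well above the three-dimensional one, consistent with Theorems 10 and 11 (and with Pólya's return probability of about $0.34$ in three dimensions).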

__________________________________________________________________________

### Problems to Work for Understanding

1. Prove Lemma 1 in ${ℝ}^{n}$.
2. Prove Lemma 2 in ${ℝ}^{n}$.
3. Prove Theorem 3 in ${ℝ}^{n}$.
4. Prove part 1 of Lemma 5 in ${ℝ}^{n}$.
5. Prove part 2 of Lemma 5 in ${ℝ}^{n}$.
6. Prove part 1 of Theorem 6 in ${ℝ}^{n}$.
7. Prove part 2 of Theorem 6 in ${ℝ}^{n}$.
8. Prove part 3 of Theorem 6 in ${ℝ}^{n}$.
9. Prove part 4 of Theorem 6 in ${ℝ}^{n}$.
10. Show by examples for $n = 3$ and $4$ that diﬀerentiating the Rodrigues deﬁnition of Legendre polynomials $n$ times term by term gives
${L}_{n}\left(x\right) = n!\underset{k=0}{\overset{⌊n∕2⌋}{\sum }}{\left(-1\right)}^{k}\frac{\left(2\left(n - k\right)\right)!}{k!\phantom{\rule{0.3em}{0ex}}\left(n - k\right)!\phantom{\rule{0.3em}{0ex}}\left(n - 2k\right)!}{x}^{n-2k}.$

11. Show that the leading terms agree in both representations of the Legendre polynomials:
${L}_{n}\left(x\right) =\frac{{\mathrm{d}}^{n}}{\mathrm{d}{x}^{n}}{\left({x}^{2}- 1\right)}^{n}$

and

${L}_{n}\left(x\right) = n!\underset{k=0}{\overset{⌊n∕2⌋}{\sum }}{\left(-1\right)}^{k}\frac{\left(2\left(n - k\right)\right)!}{k!\phantom{\rule{0.3em}{0ex}}\left(n - k\right)!\phantom{\rule{0.3em}{0ex}}\left(n - 2k\right)!}{x}^{n-2k}.$

12. For $n$ even, show that the ﬁnal constant terms agree in both representations of the Legendre polynomials:
${L}_{n}\left(x\right) =\frac{{\mathrm{d}}^{n}}{\mathrm{d}{x}^{n}}{\left({x}^{2}- 1\right)}^{n}$

and

${L}_{n}\left(x\right) = n!\underset{k=0}{\overset{⌊n∕2⌋}{\sum }}{\left(-1\right)}^{k}\frac{\left(2\left(n - k\right)\right)!}{k!\phantom{\rule{0.3em}{0ex}}\left(n - k\right)!\phantom{\rule{0.3em}{0ex}}\left(n - 2k\right)!}{x}^{n-2k}.$

13. Show that the Taylor series for ${f}_{1}\left(u\right) ={\left(1 - u\right)}^{-1∕2}$ about $0$ is
$\frac{1}{\sqrt{1 - u}}=\underset{n=0}{\overset{\infty }{\sum }}{g}_{n}{u}^{n}.$

14. Prove the binomial identity
$\underset{k=0}{\overset{n}{\sum }}{\left(\left(\genfrac{}{}{0.0pt}{}{n}{k}\right)\right)}^{2}=\left(\genfrac{}{}{0.0pt}{}{2n}{n}\right).$

15. Show that completing the square on $\left({p}^{2}{q}^{2}+ \left({p}^{2}+{q}^{2}\right)x +{x}^{2}\right)$, allows rewriting $Q\left(x\right)$ as
$Q\left(x\right) ={2}^{2n}{g}_{n}{\left(\frac{{p}^{2}-{q}^{2}}{2}\right)}^{2n}{\left({\left(\frac{2x +{p}^{2}+{q}^{2}}{{p}^{2}-{q}^{2}}\right)}^{2}- 1\right)}^{n}.$

16. Estimate ${g}_{n}$ with Stirling’s Formula to obtain
${g}_{n}\sim \frac{1}{\sqrt{n\pi }}.$

__________________________________________________________________________

### References

[1]   Emmanuel Lesigne. Heads or Tails: An Introduction to Limit Theorems in Probability, volume 28 of Student Mathematical Library. American Mathematical Society, 2005.

__________________________________________________________________________

__________________________________________________________________________
