Steven R. Dunbar
Department of Mathematics
203 Avery Hall
Lincoln, NE 68588-0130
http://www.math.unl.edu
Voice: 402-472-3731
Fax: 402-472-8466

Stochastic Processes and

__________________________________________________________________________

Quadratic Variation of the Wiener Process

_______________________________________________________________________


### Rating

Mathematically Mature: may contain mathematics beyond calculus with proofs.

_______________________________________________________________________________________________

### Section Starter Question

What is an example of a function that varies a lot? What is an example of a function that does not vary a lot? How would you measure the variation of a function?

_______________________________________________________________________________________________

### Key Concepts

1. The total quadratic variation of the Wiener Process on $\left[0,t\right]$ is $t$.
2. This fact has profound consequences for dealing with the Wiener Process analytically and ultimately will lead to Itô’s formula.

__________________________________________________________________________

### Vocabulary

1. A function $f\left(t\right)$ is said to have bounded variation on $\left[a,b\right]$ if there exists an $M$ such that
$|f\left({t}_{1}\right)-f\left(a\right)|+|f\left({t}_{2}\right)-f\left({t}_{1}\right)|+\cdots +|f\left(b\right)-f\left({t}_{n}\right)|\le M$

for all partitions $a={t}_{0}<{t}_{1}<{t}_{2}<\dots <{t}_{n}<{t}_{n+1}=b$ of the interval.

2. A function $f\left(t\right)$ is said to have quadratic variation on $\left[a,b\right]$ if there exists an $M$ such that
${\left(f\left({t}_{1}\right)-f\left(a\right)\right)}^{2}+{\left(f\left({t}_{2}\right)-f\left({t}_{1}\right)\right)}^{2}+\cdots +{\left(f\left(b\right)-f\left({t}_{n}\right)\right)}^{2}\le M$

for all partitions $a={t}_{0}<{t}_{1}<{t}_{2}<\dots <{t}_{n}<{t}_{n+1}=b$ of the interval.

3. The mesh size of a partition $P$ with $a={t}_{0}<{t}_{1}<\dots <{t}_{n}<{t}_{n+1}=b$ is $\underset{j=0,\dots ,n}{max}\left({t}_{j+1}-{t}_{j}\right)$.
4. The total quadratic variation of a function $f$ on an interval $\left[a,b\right]$ is
$\underset{P}{sup}\sum _{j=0}^{n}{\left(f\left({t}_{j+1}\right)-f\left({t}_{j}\right)\right)}^{2}$

where the supremum is taken over all partitions $P$ with $a={t}_{0}<{t}_{1}<\dots <{t}_{n}<{t}_{n+1}=b$, with mesh size going to zero as the number of partition points $n$ goes to inﬁnity.

__________________________________________________________________________

### Mathematical Ideas

#### Variation

Deﬁnition. A function $f\left(x\right)$ is said to have bounded variation on the closed interval $\left[a,b\right]$ if there exists an $M$ such that

$|f\left({t}_{1}\right)-f\left(a\right)|+|f\left({t}_{2}\right)-f\left({t}_{1}\right)|+\cdots +|f\left(b\right)-f\left({t}_{n}\right)|\le M$

for all partitions $a={t}_{0}<{t}_{1}<{t}_{2}<\dots <{t}_{n}<{t}_{n+1}=b$ of the interval.

The idea is that we measure the total (hence the absolute value) up-and-down movement of a function. This deﬁnition is similar to other partition-based deﬁnitions such as the Riemann integral and the arc-length of the graph of the function. A monotone increasing or decreasing function has bounded variation. A function with a continuous derivative has bounded variation. Some functions, for instance the Wiener Process, do not have bounded variation.

Deﬁnition. A function $f\left(t\right)$ is said to have quadratic variation on the closed interval $\left[a,b\right]$ if there exists an $M$ such that

${\left(f\left({t}_{1}\right)-f\left(a\right)\right)}^{2}+{\left(f\left({t}_{2}\right)-f\left({t}_{1}\right)\right)}^{2}+\cdots +{\left(f\left(b\right)-f\left({t}_{n}\right)\right)}^{2}\le M$

for all partitions $a={t}_{0}<{t}_{1}<{t}_{2}<\dots <{t}_{n}<{t}_{n+1}=b$ of the interval.

Again, the idea is that we measure the total (hence the positive terms created by squaring) up-and-down movement of a function. However, the squaring will make small ups-and-downs smaller, so that a function without bounded variation might have quadratic variation. In fact, this is the case for the Wiener Process.

Deﬁnition. The total quadratic variation $Q$ of a function $f$ on an interval $\left[a,b\right]$ is

$Q=\underset{P}{sup}\sum _{i=0}^{n}{\left(f\left({t}_{i+1}\right)-f\left({t}_{i}\right)\right)}^{2}$

where the supremum is taken over all partitions $P$ with $a={t}_{0}<{t}_{1}<\dots <{t}_{n}<{t}_{n+1}=b$, with mesh size going to zero as the number of partition points $n$ goes to inﬁnity.

#### Quadratic Variation of the Wiener Process

We can guess that the Wiener Process might have quadratic variation by considering the quadratic variation of the approximation using a coin-flipping fortune. Consider the piecewise linear function ${Ŵ}_{N}\left(t\right)$ on $\left[0,1\right]$ defined by the sequence of sums $\left(1∕\sqrt{N}\right){T}_{n}=\left(1∕\sqrt{N}\right){Y}_{1}+\cdots +\left(1∕\sqrt{N}\right){Y}_{n}$ of the Bernoulli random variables with ${Y}_{i}=+1$ with probability $p=1∕2$ and ${Y}_{i}=-1$ with probability $q=1-p=1∕2$. With some analysis, it is possible to show that we need only consider the quadratic variation at the node points. Each term is ${\left({Ŵ}_{N}\left(\left(i+1\right)∕N\right)-{Ŵ}_{N}\left(i∕N\right)\right)}^{2}=\left(1∕N\right){Y}_{i+1}^{2}=1∕N$, so summing over all $N$ steps gives the quadratic variation $Q=\left(1∕N\right)\cdot N=1$. Since the Wiener Process is approximated by ${Ŵ}_{N}\left(t\right)$, this suggests that the quadratic variation of the Wiener Process on $\left[0,1\right]$ is $1$.
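This node-point computation is easy to check numerically. The following sketch (Python with NumPy; the variable names are illustrative, not from the scripts below) builds one coin-flipping path and sums the squared increments at the nodes:

```python
import numpy as np

rng = np.random.default_rng(7)
N = 1000
Y = rng.choice([-1.0, 1.0], size=N)          # Bernoulli +/-1 steps

# values of the approximation WcaretN at the node points i/N
W_hat = np.concatenate(([0.0], np.cumsum(Y / np.sqrt(N))))

# each squared increment is (Y_i / sqrt(N))^2 = 1/N, so the total is 1
qv = np.sum(np.diff(W_hat) ** 2)
print(qv)
```

Each squared increment is exactly $1∕N$, so the sum is $1$ for every path, not merely on average.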

We will not rigorously prove that the total quadratic variation of the Wiener Process is $t$ with probability 1 because the proof requires deeper analytic tools. We will instead prove a pair of theorems close to the general deﬁnition of quadratic variation. First is a weak convergence version of the quadratic variation of the Wiener Process, see [1].

Theorem 1. Let $W\left(t\right)$ be the standard Wiener Process. For every ﬁxed $t>0$

$\underset{n\to \infty }{lim}𝔼\left[\sum _{k=1}^{n}{\left(W\left(\frac{kt}{n}\right)-W\left(\frac{\left(k-1\right)t}{n}\right)\right)}^{2}\right]=t.$

Proof. Consider

$\sum _{k=1}^{n}{\left(W\left(\frac{kt}{n}\right)-W\left(\frac{\left(k-1\right)t}{n}\right)\right)}^{2}$

Let

${Z}_{nk}=\frac{\left(W\left(\frac{kt}{n}\right)-W\left(\frac{\left(k-1\right)t}{n}\right)\right)}{\sqrt{t∕n}}$

Then for each $n$, the sequence ${Z}_{nk}$, $k=1,\dots ,n$, is a sequence of independent, identically distributed $N\left(0,1\right)$ standard normal random variables. We can write the quadratic variation sum on the regularly spaced partition $0<t∕n<2t∕n<\dots <\left(n-1\right)t∕n<t$ of $\left[0,t\right]$ as

$\sum _{k=1}^{n}{\left(W\left(\frac{kt}{n}\right)-W\left(\frac{\left(k-1\right)t}{n}\right)\right)}^{2}=\sum _{k=1}^{n}\frac{t}{n}{Z}_{nk}^{2}=t\left(\frac{1}{n}\sum _{k=1}^{n}{Z}_{nk}^{2}\right).$

But notice that the expectation $𝔼\left[{Z}_{nk}^{2}\right]$ of each term is the same as the variance of a standard normal $N\left(0,1\right)$, which is $1$. Then

$𝔼\left[\frac{1}{n}\sum _{k=1}^{n}{Z}_{nk}^{2}\right]=1$

for every $n$, and moreover the average $\frac{1}{n}\sum _{k=1}^{n}{Z}_{nk}^{2}$ converges to $1$ in probability by the Weak Law of Large Numbers. Therefore

$\underset{n\to \infty }{lim}𝔼\left[\sum _{k=1}^{n}{\left(W\left(\frac{kt}{n}\right)-W\left(\frac{\left(k-1\right)t}{n}\right)\right)}^{2}\right]=\underset{n\to \infty }{lim}𝔼\left[t\frac{1}{n}\sum _{k=1}^{n}{Z}_{nk}^{2}\right]=t.$ □

Remark. This proof is in itself not sufficient to prove the almost sure statement because it relies on the Weak Law of Large Numbers. Hence the argument establishes convergence in probability only, while for the strong theorem (Theorem 2 below) we want convergence almost surely. This is another example showing that it is easier to prove a weak convergence theorem than an almost sure convergence theorem.
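Theorem 1 can be illustrated by sampling the increments directly as independent $N\left(0,t∕n\right)$ random variables (a sketch using only the distributional facts in the proof; names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(11)
t, n, trials = 2.0, 10_000, 200

# increments W(kt/n) - W((k-1)t/n) are i.i.d. N(0, t/n)
increments = rng.normal(0.0, np.sqrt(t / n), size=(trials, n))

# quadratic variation of each simulated path on the regular partition
qv = np.sum(increments ** 2, axis=1)

print(qv.mean())   # concentrates near t = 2
print(qv.var())    # small, of order 2*t^2/n
```

The sample mean concentrates at $t$ and the variance shrinks like $2{t}^{2}∕n$, consistent with the proof.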

Theorem 2. Let $W\left(t\right)$ be the standard Wiener Process. For every ﬁxed $t>0$

$\underset{n\to \infty }{lim}\sum _{k=1}^{{2}^{n}}{\left[W\left(\frac{k}{{2}^{n}}t\right)-W\left(\frac{k-1}{{2}^{n}}t\right)\right]}^{2}=t$

with probability $1$ (that is, almost surely).

Proof. To introduce some briefer notation for the proof let

${\Delta }_{nk}=W\left(\frac{k}{{2}^{n}}t\right)-W\left(\frac{k-1}{{2}^{n}}t\right)\phantom{\rule{2em}{0ex}}k=1,\dots ,{2}^{n}$

and

${W}_{nk}={\Delta }_{nk}^{2}-t∕{2}^{n}\phantom{\rule{2em}{0ex}}k=1,\dots ,{2}^{n}.$

We want to show that ${\sum }_{k=1}^{{2}^{n}}{\Delta }_{nk}^{2}\to t$ or equivalently: ${\sum }_{k=1}^{{2}^{n}}{W}_{nk}\to 0$. For each $n$, the random variables ${W}_{nk},k=1,\dots ,{2}^{n}$ are independent and identically distributed by properties 1 and 2 of the deﬁnition of the standard Wiener Process. Furthermore,

$𝔼\left[{W}_{nk}\right]=𝔼\left[{\Delta }_{nk}^{2}\right]-t∕{2}^{n}=0$

by property 1 of the deﬁnition of the standard Wiener Process.

A routine (but omitted) computation of the fourth moment of the normal distribution shows that

$𝔼\left[{W}_{nk}^{2}\right]=2{t}^{2}∕{4}^{n}.$
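The omitted computation can be sketched as follows (compare Problem 5 below, which asks for the fourth moment of the standard normal): since ${\Delta }_{nk}\sim N\left(0,t∕{2}^{n}\right)$, we have $𝔼\left[{\Delta }_{nk}^{2}\right]=t∕{2}^{n}$ and $𝔼\left[{\Delta }_{nk}^{4}\right]=3{\left(t∕{2}^{n}\right)}^{2}$, so

$𝔼\left[{W}_{nk}^{2}\right]=𝔼\left[{\left({\Delta }_{nk}^{2}-t∕{2}^{n}\right)}^{2}\right]=𝔼\left[{\Delta }_{nk}^{4}\right]-2\left(t∕{2}^{n}\right)𝔼\left[{\Delta }_{nk}^{2}\right]+{\left(t∕{2}^{n}\right)}^{2}=3{t}^{2}∕{4}^{n}-2{t}^{2}∕{4}^{n}+{t}^{2}∕{4}^{n}=2{t}^{2}∕{4}^{n}.$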

Finally, by property 2 of the deﬁnition of the standard Wiener Process

$𝔼\left[{W}_{nk}{W}_{nj}\right]=0,k\ne j.$

Expanding the square of the sum, and applying all of these computations

$𝔼\left[{\left\{\sum _{k=1}^{{2}^{n}}{W}_{nk}\right\}}^{2}\right]=\sum _{k=1}^{{2}^{n}}𝔼\left[{W}_{nk}^{2}\right]={2}^{n+1}{t}^{2}∕{4}^{n}=2{t}^{2}∕{2}^{n}.$

Apply Chebyshev’s Inequality to see that

$ℙ\left[\left|\sum _{k=1}^{{2}^{n}}{W}_{nk}\right|>𝜖\right]\le \frac{2{t}^{2}}{{𝜖}^{2}}{\left(\frac{1}{2}\right)}^{n}.$

Since $\sum {\left(1∕2\right)}^{n}$ is a convergent series, the Borel-Cantelli lemma implies that the event

$\left|\sum _{k=1}^{{2}^{n}}{W}_{nk}\right|>𝜖$

can occur for only finitely many $n$, with probability $1$. That is, almost surely, for any $𝜖>0$ there is an $N$ such that for $n>N$

$\left|\sum _{k=1}^{{2}^{n}}{W}_{nk}\right|<𝜖.$

Therefore we must have that $\underset{n\to \infty }{lim}{\sum }_{k=1}^{{2}^{n}}{W}_{nk}=0$, and we have established what we wished to show. □
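Theorem 2 can be illustrated along a single simulated path (a sketch; the path is sampled at the finest dyadic level and then coarsened, and the names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
t, n_max = 1.5, 16

# one Wiener path sampled at the 2^n_max dyadic times k*t/2^n_max
fine_increments = rng.normal(0.0, np.sqrt(t / 2**n_max), size=2**n_max)
W = np.concatenate(([0.0], np.cumsum(fine_increments)))

qvs = {}
for n in (4, 8, 12, 16):
    step = 2 ** (n_max - n)          # coarsen to the level-n dyadic partition
    qvs[n] = float(np.sum(np.diff(W[::step]) ** 2))
    print(n, qvs[n])                 # tends toward t = 1.5 as n grows
```

The Chebyshev bound in the proof says the spread around $t$ shrinks like $2{t}^{2}∕{2}^{n}$, so the finest level is already very close to $t$.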

Remark. Starting from

$\underset{n\to \infty }{lim}\sum _{k=1}^{{2}^{n}}{\left[W\left(\frac{k}{{2}^{n}}t\right)-W\left(\frac{k-1}{{2}^{n}}t\right)\right]}^{2}=t$

and without thinking too carefully about what it might mean, we can imagine an elementary calculus limit of the left side and write the formula:

${\int }_{0}^{t}{\left[\phantom{\rule{0.3em}{0ex}}\mathrm{d}W\left(\tau \right)\right]}^{2}=t={\int }_{0}^{t}\phantom{\rule{0.3em}{0ex}}\mathrm{d}\tau .$

In fact, more advanced mathematics makes this sensible and mathematically sound. Now from this relation, we could write the integral equality in diﬀerential form:

${\left(\phantom{\rule{0.3em}{0ex}}\mathrm{d}W\left(\tau \right)\right)}^{2}=\phantom{\rule{0.3em}{0ex}}\mathrm{d}\tau .$

The important thing to remember here is that the formula suggests that the Wiener Process has diﬀerentials that cannot be ignored in second (or squared, or quadratic) order.

Remark. This theorem can be nicely summarized in the following way: Let $\phantom{\rule{0.3em}{0ex}}\mathrm{d}W\left(t\right)=W\left(t+\phantom{\rule{0.3em}{0ex}}\mathrm{d}t\right)-W\left(t\right)$. Let ${\left(\phantom{\rule{0.3em}{0ex}}\mathrm{d}W\left(t\right)\right)}^{2}={\left(W\left(t+\phantom{\rule{0.3em}{0ex}}\mathrm{d}t\right)-W\left(t\right)\right)}^{2}$. Then (although not mathematically rigorous) it is helpful to remember

$\begin{array}{rcl}\mathrm{d}W\left(t\right)& \sim & N\left(0,\phantom{\rule{0.3em}{0ex}}\mathrm{d}t\right)\\ {\left(\mathrm{d}W\left(t\right)\right)}^{2}& \sim & N\left(\phantom{\rule{0.3em}{0ex}}\mathrm{d}t,0\right).\end{array}$

Theorem 3. Let $W\left(t\right)$ be the standard Wiener Process. For every ﬁxed $t>0$

$\underset{n\to \infty }{lim}\sum _{k=1}^{{2}^{n}}\left|W\left(\frac{k}{{2}^{n}}t\right)-W\left(\frac{k-1}{{2}^{n}}t\right)\right|=\infty .$

In other words, the total variation of a Wiener Process path is inﬁnite, with probability $1$.

Proof.

$\sum _{k=1}^{{2}^{n}}\left|W\left(\frac{k}{{2}^{n}}t\right)-W\left(\frac{k-1}{{2}^{n}}t\right)\right|\ge \frac{\sum _{k=1}^{{2}^{n}}{\left|W\left(\frac{k}{{2}^{n}}t\right)-W\left(\frac{k-1}{{2}^{n}}t\right)\right|}^{2}}{\underset{j=1,\dots ,{2}^{n}}{max}\left|W\left(\frac{j}{{2}^{n}}t\right)-W\left(\frac{j-1}{{2}^{n}}t\right)\right|}.$

The numerator on the right converges to $t$ by Theorem 2, while the denominator goes to $0$ because Wiener Process paths are continuous, and therefore uniformly continuous, on bounded intervals. Therefore the fraction on the right goes to infinity. □
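The blow-up of the first variation is visible numerically along the same kind of dyadic refinement (a sketch; the expected growth rate follows from $𝔼\left|{\Delta }_{nk}\right|=\sqrt{2t∕\left(\pi {2}^{n}\right)}$, giving a total of order $\sqrt{{2}^{n}}$):

```python
import numpy as np

rng = np.random.default_rng(5)
t, n_max = 1.0, 18

fine_increments = rng.normal(0.0, np.sqrt(t / 2**n_max), size=2**n_max)
W = np.concatenate(([0.0], np.cumsum(fine_increments)))

tv = {}
for n in (6, 10, 14, 18):
    step = 2 ** (n_max - n)
    tv[n] = float(np.sum(np.abs(np.diff(W[::step]))))
    print(n, tv[n])   # grows like sqrt(2^n): no finite limit
```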

#### Section Ending Question

A function that we intuitively feel varies a lot is $f\left(t\right)=sin\left(t\right)$ because of the oscillations. We can strengthen this intuition by using the idea of inversion from a previous section. We intuitively feel that $f\left(t\right)=sin\left(1∕t\right)$ (with $f\left(0\right)=0$) varies a lot on the interval $\left[0,1\right]$. On the other hand, a constant function does not vary at all. These examples suggest that totaling the oscillations of a function should be a measure of the variation. The deﬁnitions and theorems of this section formalize that intuition.

#### Sources

The theorems in this section are drawn from A Second Course in Stochastic Processes by S. Karlin and H. Taylor, Academic Press, 1981 [2]. The heuristic proof using the Weak Law of Large Numbers is taken from Financial Calculus: An introduction to derivative pricing by M. Baxter and A. Rennie, Cambridge University Press, 1996 [1], page 59. The mnemonic statement of the quadratic variation in differential form is derived from Steele’s text [3].

_______________________________________________________________________________________________

### Algorithms, Scripts, Simulations

#### Algorithm

The simulation of the quadratic variation of the Wiener Process demonstrates how much the properties of the Wiener Process depend on the full limiting function, not an approximation. For simulation of quadratic variation of the Wiener Process using an approximation ${Ŵ}_{N}\left(x\right)$ of the Wiener Process, the mesh size of the partition should be approximately the same size as $1∕N$. This is easiest to illustrate when the number of partition points is a divisor or multiple of $N$ and evenly spaced. We already showed above that if the mesh points coincide with the $N$ steps of the scaled random walk, then the quadratic variation is $1$.

Consider the case when the number of evenly spaced partition points is a multiple of $N$, say $m=kN$. Then on each subinterval $\left[i∕m,\left(i+1\right)∕m\right]=\left[i∕\left(kN\right),\left(i+1\right)∕\left(kN\right)\right]$ the quadratic variation is ${\left(\left(±1∕\sqrt{T∕N}\right)\cdot \left(1∕\left(kN\right)\right)\right)}^{2}=1∕\left({k}^{2}NT\right)$ and the total of all $kN$ steps is $1∕\left(kT\right)$ which approaches $0$ as $k$ increases. This is not what is predicted by the theorem about the quadratic variation of the Wiener Process, but it is consistent with the approximation function ${Ŵ}_{N}\left(t\right)$, see the discussion below.
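The $1∕\left(kT\right)$ claim can be checked numerically (a sketch with $T=1$, using an interpolation function analogous to WcaretN in the scripts below; the names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 1000, 1.0
S = np.concatenate(([0.0], np.cumsum(rng.choice([-1.0, 1.0], size=N))))

def w_caret(x):
    # piecewise-linear interpolation of the scaled random walk
    delta = T / N
    prior = np.floor(x / delta).astype(int)
    subsequent = np.ceil(x / delta).astype(int)
    return np.sqrt(delta) * (S[prior] + (x / delta - prior) * (S[subsequent] - S[prior]))

qvs = {}
for k in (1, 2, 5):
    m = k * N                                # partition intervals: a multiple of N
    pts = np.linspace(0.0, T, m + 1)
    qvs[k] = float(np.sum(np.diff(w_caret(pts)) ** 2))
    print(k, qvs[k])                         # approximately 1/(k*T)
```

With $k=1$ the partition points coincide with the walk nodes and the quadratic variation is $1$; finer partitions only subdivide straight line segments, which drives the sum down to $1∕\left(kT\right)$.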

Now consider the case when $m$ is a divisor of $N$, say $km=N$. Then the quadratic variation of ${Ŵ}_{N}\left(t\right)$ on a partition interval will be the sum of $k$ scaled random walk steps, squared. An example will show what happens. With $T=1$, let $N=1000$ and let the partition have $200$ equally spaced points, so $k=5$. Then each partition interval will have a quadratic variation of:

• ${\left(1∕\sqrt{1000}\right)}^{2}$ with probability $2\cdot \left(\genfrac{}{}{0.0pt}{}{5}{2}\right)\cdot \left(1∕32\right)$;
• ${\left(3∕\sqrt{1000}\right)}^{2}$ with probability $2\cdot \left(\genfrac{}{}{0.0pt}{}{5}{1}\right)\cdot \left(1∕32\right)$; and
• ${\left(5∕\sqrt{1000}\right)}^{2}$ with probability $2\cdot \left(\genfrac{}{}{0.0pt}{}{5}{0}\right)\cdot \left(1∕32\right)$.

Each partition interval quadratic variation has a mean of $\left(1∕1000\right)\cdot \left(160∕32\right)=1∕200$ and a variance of ${\left(1∕1000\right)}^{2}\cdot \left(1280∕32\right)=40∕{\left(1000\right)}^{2}$. Add $200$ partition interval quadratic variations to get the total quadratic variation of ${Ŵ}_{N}\left(t\right)$ on $\left[0,1\right]$. By the Central Limit Theorem the total quadratic variation will be approximately normally distributed with mean $1$ and standard deviation $\sqrt{200}\cdot \left(\sqrt{40}∕1000\right)=\sqrt{5}∕25\approx 0.089$.
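This Central Limit Theorem prediction can be checked by simulating many independent walks (a sketch; partition points sit at every fifth walk node, and the names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
N, m, trials = 1000, 200, 2000
k = N // m                               # 5 scaled steps per partition interval

steps = rng.choice([-1.0, 1.0], size=(trials, N)) / np.sqrt(N)

# each partition increment is the sum of k consecutive scaled steps
block_sums = steps.reshape(trials, m, k).sum(axis=2)
qv = np.sum(block_sums ** 2, axis=1)     # one total quadratic variation per walk

print(qv.mean())   # near 1
print(qv.std())    # near sqrt(5)/25, about 0.089
```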

Note that although ${Ŵ}_{N}\left(t\right)$ does not have a continuous first derivative, it fails to have a derivative at only finitely many points, so in that way it is almost differentiable. In fact, ${Ŵ}_{N}\left(t\right)$ satisfies a uniform Lipschitz condition (with Lipschitz constant $\sqrt{N∕T}$), which is enough to show that it has bounded variation. As such, the total quadratic variation of ${Ŵ}_{N}\left(t\right)$, taken over partitions with mesh size going to zero, is $0$.
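For any fixed Lipschitz function the quadratic variation vanishes as the mesh shrinks, which is the behavior just asserted for ${Ŵ}_{N}\left(t\right)$. A minimal check with $sin\left(t\right)$ (an illustrative choice, not from the original scripts):

```python
import numpy as np

# quadratic variation of sin on [0, 1] over ever-finer regular partitions
qvs = {}
for n in (100, 1000, 10000):
    t = np.linspace(0.0, 1.0, n + 1)
    qvs[n] = float(np.sum(np.diff(np.sin(t)) ** 2))
    print(n, qvs[n])   # shrinks roughly in proportion to the mesh 1/n
```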

#### Scripts

R

p <- 0.5
N <- 1000

T <- 1

S <- array(0, c(N+1))
rw <- cumsum( 2 * ( runif(N) <= p ) - 1 )
S[2:(N+1)] <- rw

WcaretN <- function(x) {
    Delta <- T/N

    # add 1 since arrays are 1-based
    prior <- floor(x/Delta) + 1
    subsequent <- ceiling(x/Delta) + 1

    sqrt(Delta)*(S[prior] + ((x/Delta + 1) - prior)*(S[subsequent] - S[prior]))
}

m1 <- N/5
partition1 <- seq(0, T, 1/m1)
m2 <- N
partition2 <- seq(0, T, 1/m2)
m3 <- 3*N
partition3 <- seq(0, T, 1/m3)

qv1 <- sum( ( WcaretN(partition1[-1]) - WcaretN(partition1[-length(partition1)]) )^2 )
qv2 <- sum( ( WcaretN(partition2[-1]) - WcaretN(partition2[-length(partition2)]) )^2 )
qv3 <- sum( ( WcaretN(partition3[-1]) - WcaretN(partition3[-length(partition3)]) )^2 )

cat(sprintf("Quadratic variation of approximation of Wiener process paths with %d scaled random steps with %d partition intervals is: %f \n", N, m1, qv1))
cat(sprintf("Quadratic variation of approximation of Wiener process paths with %d scaled random steps with %d partition intervals is: %f \n", N, m2, qv2))
cat(sprintf("Quadratic variation of approximation of Wiener process paths with %d scaled random steps with %d partition intervals is: %f \n", N, m3, qv3))
Octave

p = 0.5;

global N = 1000;
global T = 1;

global S
S = zeros(N+1, 1);
S(2:N+1) = cumsum( 2 * (rand(N,1) <= p) - 1 );

function retval = WcaretN(x)
  global N;
  global T;
  global S;
  step = T/N;

  # add 1 since arrays are 1-based
  prior = floor(x/step) + 1;
  subsequent = ceil(x/step) + 1;

  retval = sqrt(step)*(S(prior) + ((x/step + 1) - prior).*(S(subsequent) - S(prior)));
endfunction

m1 = N/5;
partition1 = transpose(linspace(0, T, m1+1));
m2 = N;
partition2 = transpose(linspace(0, T, m2+1));
m3 = 3*N;
partition3 = transpose(linspace(0, T, m3+1));

qv1 = sumsq( WcaretN(partition1(2:m1+1)) - WcaretN(partition1(1:m1)) );
qv2 = sumsq( WcaretN(partition2(2:m2+1)) - WcaretN(partition2(1:m2)) );
qv3 = sumsq( WcaretN(partition3(2:m3+1)) - WcaretN(partition3(1:m3)) );

printf("Quadratic variation of approximation of Wiener process paths with %d scaled random steps with %d partition intervals is: %f \n", N, m1, qv1)
printf("Quadratic variation of approximation of Wiener process paths with %d scaled random steps with %d partition intervals is: %f \n", N, m2, qv2)
printf("Quadratic variation of approximation of Wiener process paths with %d scaled random steps with %d partition intervals is: %f \n", N, m3, qv3)
Perl

use PDL;
use PDL::NiceSlice;

$p = 0.5;

$N = 1000;
$T = 1;

# the random walk
$S = zeros( $N + 1 );
$S ( 1 : $N ) .= cumusumover( 2 * ( random($N) <= $p ) - 1 );

# function WcaretN interpolating the random walk
sub WcaretN {
    my $x = shift @_;
    $Delta = $T / $N;

    $prior      = floor( $x / $Delta );
    $subsequent = ceil( $x / $Delta );

    $retval =
          sqrt($Delta)
        * (   $S ($prior)
            + ( ( $x / $Delta ) - $prior )
            * ( $S ($subsequent) - $S ($prior) ) );
}

$m1 = $N / 5;
$partition1 = zeroes( $m1 + 1 )->xlinvals( 0, $T );
$m2 = $N;
$partition2 = zeroes( $m2 + 1 )->xlinvals( 0, $T );
$m3 = 3 * $N;
$partition3 = zeroes( $m3 + 1 )->xlinvals( 0, $T );

$qv1 = sum( ( WcaretN( $partition1 ( 1 : $m1 ) ) - WcaretN( $partition1 ( 0 : $m1 - 1 ) ) )**2 );
$qv2 = sum( ( WcaretN( $partition2 ( 1 : $m2 ) ) - WcaretN( $partition2 ( 0 : $m2 - 1 ) ) )**2 );
$qv3 = sum( ( WcaretN( $partition3 ( 1 : $m3 ) ) - WcaretN( $partition3 ( 0 : $m3 - 1 ) ) )**2 );

print "Quadratic variation of approximation of Wiener process paths with ", $N, " scaled random steps with ", $m1, " partition intervals is: ", $qv1, "\n";
print "Quadratic variation of approximation of Wiener process paths with ", $N, " scaled random steps with ", $m2, " partition intervals is: ", $qv2, "\n";
print "Quadratic variation of approximation of Wiener process paths with ", $N, " scaled random steps with ", $m3, " partition intervals is: ", $qv3, "\n";
SciPy

# the scipy top-level numerical aliases are deprecated; use numpy directly
import numpy as np

p = 0.5

N = 1000
T = 1.

# the random walk
S = np.zeros(N+1)
S[1:N+1] = np.cumsum( 2*( np.random.random(N) <= p ) - 1 )

def WcaretN(x):
    Delta = T/N
    prior = np.floor(x/Delta).astype(int)
    subsequent = np.ceil(x/Delta).astype(int)
    return np.sqrt(Delta)*(S[prior] + (x/Delta - prior)*(S[subsequent] - S[prior]))

m1 = N//5
partition1 = np.linspace( 0, T, m1+1)
m2 = N
partition2 = np.linspace( 0, T, m2+1)
m3 = 3*N
partition3 = np.linspace( 0, T, m3+1)

qv1 = np.sum( (WcaretN( partition1[1:m1+1] ) - WcaretN( partition1[0:m1] ) )**2)
qv2 = np.sum( (WcaretN( partition2[1:m2+1] ) - WcaretN( partition2[0:m2] ) )**2)
qv3 = np.sum( (WcaretN( partition3[1:m3+1] ) - WcaretN( partition3[0:m3] ) )**2)

print("Quadratic variation of approximation of Wiener process paths with", N, "scaled random steps with", m1, "partition intervals is:", qv1)
print("Quadratic variation of approximation of Wiener process paths with", N, "scaled random steps with", m2, "partition intervals is:", qv2)
print("Quadratic variation of approximation of Wiener process paths with", N, "scaled random steps with", m3, "partition intervals is:", qv3)

### Problems to Work for Understanding

1. Show that a monotone increasing function has bounded variation.
2. Show that a function with a continuous derivative has bounded variation.
3. Show that the function

$f\left(t\right)=\left\{\begin{array}{cc}{t}^{2}sin\left(1∕t\right)\hfill & 0<t\le 1\hfill \\ 0\hfill & t=0\hfill \end{array}\right.$

is of bounded variation on $\left[0,1\right]$, while the function

$f\left(t\right)=\left\{\begin{array}{cc}t\phantom{\rule{0.3em}{0ex}}sin\left(1∕t\right)\hfill & 0<t\le 1\hfill \\ 0\hfill & t=0\hfill \end{array}\right.$

is not of bounded variation.

4. Show that a continuous function of bounded variation is also of quadratic variation.
5. Show that the fourth moment $𝔼\left[{Z}^{4}\right]=3$ where $Z\sim N\left(0,1\right)$. Then show that

$𝔼\left[{W}_{nk}^{2}\right]=2{t}^{2}∕{4}^{n}.$

6. Modifying one of the scripts, find the quadratic variation of ${Ŵ}_{N}\left(t\right)$ with a partition with $m$ partition intervals whose endpoints are randomly selected values in $\left[0,T\right]$. One way to approach this is to create a list of $m-1$ points uniformly distributed in $\left[0,T\right]$, append the values $0$ and $T$ to the list, then sort into ascending order to create the partition points. Find the mesh of this random partition and print its value along with the quadratic variation. What happens when $m$ is a multiple of $N$? What happens when $m$ is a divisor of $N$? What could possibly go wrong in this calculation?
7. Generalize the example with $N=1000$ and $m=200$ of the quadratic variation of ${Ŵ}_{N}\left(t\right)$ on $\left[0,1\right]$ to the case when the number of partition intervals $m$ is a divisor of some $N$, say $km=N$.

__________________________________________________________________________

### References

[1]   M. Baxter and A. Rennie. Financial Calculus: An introduction to derivative pricing. Cambridge University Press, 1996. HG 6024 A2W554.

[2]   S. Karlin and H. Taylor. A Second Course in Stochastic Processes. Academic Press, 1981.

[3]   J. Michael Steele. Stochastic Calculus and Financial Applications. Springer-Verlag, 2001. QA 274.2 S 74.

__________________________________________________________________________

1. Wikipedia, Quadratic variation. Contributed by S. Dunbar, November 10, 2009.
2. Michael Kozdron, University of Regina. Contributed by S. Dunbar, November 10, 2009.

__________________________________________________________________________
