Steven R. Dunbar
Department of Mathematics
203 Avery Hall
University of Nebraska-Lincoln
Lincoln, NE 68588-0130
http://www.math.unl.edu
Voice: 402-472-3731
Fax: 402-472-8466

Stochastic Processes and
Advanced Mathematical Finance

__________________________________________________________________________

Quadratic Variation of the Wiener Process

_______________________________________________________________________

Note: These pages are prepared with MathJax. MathJax is an open source JavaScript display engine for mathematics that works in all browsers. See http://mathjax.org for details on supported browsers, accessibility, copy-and-paste, and other features.

_______________________________________________________________________________________________

Rating

Mathematically Mature: may contain mathematics beyond calculus with proofs.

_______________________________________________________________________________________________

Section Starter Question

What is an example of a function that “varies a lot”? What is an example of a function that does not “vary a lot”? How would you measure the “variation” of a function?

_______________________________________________________________________________________________

Key Concepts

  1. The total quadratic variation of the Wiener Process on [0,t] is t.
  2. This fact has profound consequences for dealing with the Wiener Process analytically and ultimately will lead to Itô’s formula.

__________________________________________________________________________

Vocabulary

  1. A function f(t) is said to have bounded variation if, over the closed interval [a,b], there exists an M such that
    \[ |f(t_1) - f(a)| + |f(t_2) - f(t_1)| + \cdots + |f(b) - f(t_n)| \le M \]

    for all partitions \( a = t_0 < t_1 < t_2 < \cdots < t_n < t_{n+1} = b \) of the interval.

  2. A function f(t) is said to have quadratic variation if, over the closed interval [a,b], there exists an M such that
    \[ (f(t_1) - f(a))^2 + (f(t_2) - f(t_1))^2 + \cdots + (f(b) - f(t_n))^2 \le M \]

    for all partitions \( a = t_0 < t_1 < t_2 < \cdots < t_n < t_{n+1} = b \) of the interval.

  3. The mesh size of a partition P with \( a = t_0 < t_1 < \cdots < t_n < t_{n+1} = b \) is \( \max_{j=0,\dots,n} (t_{j+1} - t_j) \).
  4. The total quadratic variation of a function f on an interval [a,b] is
    \[ \sup_P \sum_{j=0}^{n} \left( f(t_{j+1}) - f(t_j) \right)^2 \]

    where the supremum is taken over all partitions P with \( a = t_0 < t_1 < \cdots < t_n < t_{n+1} = b \), with mesh size going to zero as the number of partition points n goes to infinity.

__________________________________________________________________________

Mathematical Ideas

Variation

Definition. A function f(t) is said to have bounded variation if, over the closed interval [a,b], there exists an M such that

\[ |f(t_1) - f(a)| + |f(t_2) - f(t_1)| + \cdots + |f(b) - f(t_n)| \le M \]

for all partitions \( a = t_0 < t_1 < t_2 < \cdots < t_n < t_{n+1} = b \) of the interval.

The idea is that we measure the total (hence the absolute value) up-and-down movement of a function. This definition is similar to other partition-based definitions such as the Riemann integral and the arc length of the graph of the function. A monotone increasing or decreasing function has bounded variation. A function with a continuous derivative has bounded variation. Some functions, for instance the Wiener Process, do not have bounded variation.
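
To make the definition concrete, here is a small numerical experiment (an illustrative sketch, not part of the original notes; the function names variation, fsmooth and frough are mine). It computes the variation sum on finer and finer regular partitions of [0,1] for \( t^2 \), which has bounded variation, and for \( t \sin(1/t) \), which does not (compare Problem 3 below): the first sum stabilizes while the second keeps growing.

# Variation sums on regular partitions of [0,1] (illustrative sketch).
variation <- function(f, n) {
  t <- seq(0, 1, length.out = n + 1)
  sum(abs(diff(f(t))))             # sum of |f(t_{j+1}) - f(t_j)|
}

fsmooth <- function(t) t^2         # continuous derivative: bounded variation
frough <- function(t) {            # t*sin(1/t): continuous but not of bounded variation
  y <- numeric(length(t))
  pos <- t > 0
  y[pos] <- t[pos] * sin(1/t[pos])
  y
}

for (n in c(10, 100, 1000, 10000)) {
  cat(sprintf("n = %5d: variation sum of t^2 = %.4f, of t*sin(1/t) = %.4f\n",
              n, variation(fsmooth, n), variation(frough, n)))
}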

Definition. A function f(t) is said to have quadratic variation if, over the closed interval [a,b], there exists an M such that

\[ (f(t_1) - f(a))^2 + (f(t_2) - f(t_1))^2 + \cdots + (f(b) - f(t_n))^2 \le M \]

for all partitions \( a = t_0 < t_1 < t_2 < \cdots < t_n < t_{n+1} = b \) of the interval.

Again, the idea is that we measure the total (hence the positive terms created by squaring) up-and-down movement of a function. However, the squaring will make small ups-and-downs smaller, so that a function without bounded variation might have quadratic variation. In fact, this is the case for the Wiener Process.

Definition. The total quadratic variation Q of a function f on an interval [a,b] is

\[ Q = \sup_P \sum_{j=0}^{n} \left( f(t_{j+1}) - f(t_j) \right)^2 \]

where the supremum is taken over all partitions P with \( a = t_0 < t_1 < \cdots < t_n < t_{n+1} = b \), with mesh size going to zero as the number of partition points n goes to infinity.
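
For a function with a continuous derivative, the quadratic variation sums shrink in proportion to the mesh size, so the total quadratic variation is 0. The short script below (an illustrative sketch, not part of the original notes; the name quad_variation is mine) checks this for \( \sin(2\pi t) \) on [0,1]; contrast this with the Wiener Process, whose quadratic variation sums approach t.

# Quadratic variation sums of a smooth function on regular partitions (sketch).
quad_variation <- function(f, n) {
  t <- seq(0, 1, length.out = n + 1)
  sum(diff(f(t))^2)                # sum of (f(t_{j+1}) - f(t_j))^2
}

f <- function(t) sin(2 * pi * t)   # continuously differentiable on [0,1]

for (n in c(10, 100, 1000, 10000)) {
  cat(sprintf("mesh = %7.5f: quadratic variation sum = %.6f\n",
              1/n, quad_variation(f, n)))
}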

Quadratic Variation of the Wiener Process

We can guess that the Wiener Process might have quadratic variation by considering the quadratic variation of the approximation using a coin-flipping fortune. Consider the piecewise linear function \( \hat{W}_N(t) \) on [0,1] defined by the sequence of sums \( (1/\sqrt{N}) T_n = (1/\sqrt{N}) Y_1 + \cdots + (1/\sqrt{N}) Y_n \) from the Bernoulli random variables \( Y_i = +1 \) with probability \( p = 1/2 \) and \( Y_i = -1 \) with probability \( q = 1 - p = 1/2 \). With some analysis, it is possible to show that we need only consider the quadratic variation at the node points. Then each term \( (\hat{W}_N((i+1)/N) - \hat{W}_N(i/N))^2 = ((1/\sqrt{N}) Y_{i+1})^2 = 1/N \). Therefore, the quadratic variation over the total number of steps is \( Q = (1/N) \cdot N = 1 \). Now remembering that the Wiener Process is approximated by \( \hat{W}_N(t) \) suggests that the quadratic variation of the Wiener Process on [0,1] is 1.
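
A quick numerical check of this heuristic (an illustrative sketch, separate from the fuller scripts later in this section): the quadratic variation of the scaled coin-flipping path, evaluated at its own node points \( i/N \), is exactly 1 by construction.

# Quadratic variation of the scaled random walk at its node points (sketch).
N <- 1000
Y <- 2 * (runif(N) <= 0.5) - 1       # Bernoulli +1/-1 steps
What <- c(0, cumsum(Y)) / sqrt(N)    # node values of the scaled walk on [0,1]
qv_nodes <- sum(diff(What)^2)        # each term is (Y_i/sqrt(N))^2 = 1/N
cat("Quadratic variation at the node points:", qv_nodes, "\n")   # always exactly 1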

We will not rigorously prove that the total quadratic variation of the Wiener Process is t with probability 1 because the proof requires deeper analytic tools. We will instead prove a pair of theorems close to the general definition of quadratic variation. First is a weak convergence version of the quadratic variation of the Wiener Process, see [1].

Theorem 1. Let W(t) be the standard Wiener Process. For every fixed t > 0

\[ \lim_{n \to \infty} \mathbb{E}\left[ \sum_{k=1}^{n} \left( W\!\left(\frac{kt}{n}\right) - W\!\left(\frac{(k-1)t}{n}\right) \right)^2 \right] = t. \]

Proof. Consider

\[ \sum_{k=1}^{n} \left( W\!\left(\frac{kt}{n}\right) - W\!\left(\frac{(k-1)t}{n}\right) \right)^2 . \]

Let

\[ Z_{nk} = \frac{ W\!\left(\frac{kt}{n}\right) - W\!\left(\frac{(k-1)t}{n}\right) }{ \sqrt{t/n} } . \]

Then for each n, the random variables \( Z_{nk} \), \( k = 1, \dots, n \), are independent, identically distributed standard normal \( N(0,1) \) random variables. We can write the quadratic variation on the regularly spaced partition \( 0 < t/n < 2t/n < \cdots < (n-1)t/n < t \) as

\[ \sum_{k=1}^{n} \left( W\!\left(\frac{kt}{n}\right) - W\!\left(\frac{(k-1)t}{n}\right) \right)^2 = \sum_{k=1}^{n} \left( \sqrt{\tfrac{t}{n}}\, Z_{nk} \right)^2 = t \left( \frac{1}{n} \sum_{k=1}^{n} Z_{nk}^2 \right) . \]

But notice that the expectation \( \mathbb{E}\left[ Z_{nk}^2 \right] \) of each term is the same as the variance of a standard normal \( N(0,1) \), which is 1. Then the average

\[ \frac{1}{n} \sum_{k=1}^{n} Z_{nk}^2 \]

converges to 1 in probability by the Weak Law of Large Numbers. Therefore

\[ \lim_{n \to \infty} \mathbb{E}\left[ \sum_{k=1}^{n} \left( W\!\left(\frac{kt}{n}\right) - W\!\left(\frac{(k-1)t}{n}\right) \right)^2 \right] = \lim_{n \to \infty} \mathbb{E}\left[ t \cdot \frac{1}{n} \sum_{k=1}^{n} Z_{nk}^2 \right] = t. \]

Remark. This proof is in itself not sufficient to prove the almost sure theorem below because it relies on the Weak Law of Large Numbers. Hence this argument establishes convergence in distribution only, while for the strong theorem we want convergence almost surely. This is another example showing that it is easier to prove a weak convergence theorem than an almost sure convergence theorem.
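
A small simulation of Theorem 1 (an illustrative sketch, not part of the original notes). The increments \( W(kt/n) - W((k-1)t/n) \) are independent \( N(0, t/n) \) random variables, so the script draws them directly; over many simulated paths the sample mean of the quadratic variation sums stays near t and the spread shrinks as n grows, in line with the Weak Law of Large Numbers.

# Monte Carlo check of Theorem 1: the mean of the squared-increment sums is close to t (sketch).
t <- 2          # fixed time horizon
paths <- 1000   # number of simulated paths for each n

for (n in c(10, 100, 1000)) {
  qv <- replicate(paths, sum(rnorm(n, mean = 0, sd = sqrt(t/n))^2))
  cat(sprintf("n = %4d: mean of quadratic variation sums = %.4f, sd = %.4f\n",
              n, mean(qv), sd(qv)))
}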

Theorem 2. Let W(t) be the standard Wiener Process. For every fixed t > 0

\[ \lim_{n \to \infty} \sum_{k=1}^{2^n} \left( W\!\left(\frac{k}{2^n} t\right) - W\!\left(\frac{k-1}{2^n} t\right) \right)^2 = t \]

with probability 1 (that is, almost surely).

Proof. Introduce some briefer notation for the proof, let:

\[ \Delta_{nk} = W\!\left(\frac{k}{2^n} t\right) - W\!\left(\frac{k-1}{2^n} t\right), \qquad k = 1, \dots, 2^n \]

and

\[ W_{nk} = \Delta_{nk}^2 - \frac{t}{2^n}, \qquad k = 1, \dots, 2^n. \]

We want to show that \( \sum_{k=1}^{2^n} \Delta_{nk}^2 \to t \), or equivalently that \( \sum_{k=1}^{2^n} W_{nk} \to 0 \). For each n, the random variables \( W_{nk} \), \( k = 1, \dots, 2^n \), are independent and identically distributed by properties 1 and 2 of the definition of the standard Wiener Process. Furthermore,

\[ \mathbb{E}\left[ W_{nk} \right] = \mathbb{E}\left[ \Delta_{nk}^2 \right] - \frac{t}{2^n} = 0 \]

by property 1 of the definition of the standard Wiener Process.

A routine (but omitted) computation of the fourth moment of the normal distribution shows that

\[ \mathbb{E}\left[ W_{nk}^2 \right] = \frac{2 t^2}{4^n}. \]

Finally, by property 2 of the definition of the standard Wiener Process

\[ \mathbb{E}\left[ W_{nk} W_{nj} \right] = 0, \qquad k \ne j. \]

Now, expanding the square of the sum, and applying all of these computations

\[ \mathbb{E}\left[ \left( \sum_{k=1}^{2^n} W_{nk} \right)^2 \right] = \sum_{k=1}^{2^n} \mathbb{E}\left[ W_{nk}^2 \right] = \frac{2^{n+1} t^2}{4^n} = \frac{2 t^2}{2^n}. \]

Now apply Chebyshev’s Inequality to see:

\[ \mathbb{P}\left[ \left| \sum_{k=1}^{2^n} W_{nk} \right| > \epsilon \right] \le \frac{2 t^2}{\epsilon^2} \left( \frac{1}{2} \right)^{n}. \]

Now since \( \sum_{n} (1/2)^n \) is a convergent series, the Borel-Cantelli lemma implies that the event

\[ \left| \sum_{k=1}^{2^n} W_{nk} \right| > \epsilon \]

can occur for only finitely many n. That is, for any 𝜖 > 0, there is an N, such that for n > N

\[ \left| \sum_{k=1}^{2^n} W_{nk} \right| < \epsilon. \]

Therefore we must have that \( \lim_{n \to \infty} \sum_{k=1}^{2^n} W_{nk} = 0 \), and we have established what we wished to show. □
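
A way to watch Theorem 2 in action on a single simulated path (an illustrative sketch, not part of the original notes): generate the Wiener increments at the finest dyadic level and then, for each coarser level n, add up the fine increments inside each of the \( 2^n \) subintervals before squaring. The sums of squared increments settle down near t as n grows.

# Dyadic quadratic variation sums along one simulated path (sketch).
t <- 1
nmax <- 14                                             # finest dyadic level, 2^14 subintervals
dW <- rnorm(2^nmax, mean = 0, sd = sqrt(t / 2^nmax))   # finest-level Wiener increments

for (n in 1:nmax) {
  # increment over each of the 2^n coarse subintervals = sum of 2^(nmax-n) fine increments
  coarse <- tapply(dW, rep(1:2^n, each = 2^(nmax - n)), sum)
  cat(sprintf("n = %2d: sum of squared increments = %.4f\n", n, sum(coarse^2)))
}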

Remark. Starting from

\[ \lim_{n \to \infty} \sum_{k=1}^{2^n} \left( W\!\left(\frac{k}{2^n} t\right) - W\!\left(\frac{k-1}{2^n} t\right) \right)^2 = t \]

and without thinking too carefully about what it might mean, we can imagine passing to an elementary calculus-style limit on the left side and write the formula:

\[ \int_0^t \left[ dW(\tau) \right]^2 = t = \int_0^t d\tau. \]

In fact, more advanced mathematics makes this sensible and mathematically sound. Now from this relation, we could write the integral equality in differential form:

\[ \left[ dW(\tau) \right]^2 = d\tau. \]

The important thing to remember here is that the formula suggests that the Wiener Process has differentials that cannot be ignored in second (or squared, or quadratic) order.

Remark. This theorem can be nicely summarized in the following way: Let dW(t) = W(t + dt) W(t). Let dW(t)2 = (W(t + dt) W(t))2. Then (although mathematically not rigorously) it is helpful to remember

\[ dW(t) \sim N(0, dt), \qquad (dW(t))^2 \sim N(dt, 0). \]

Theorem 3. Let W(t) be the standard Wiener Process. For every fixed t > 0

\[ \lim_{n \to \infty} \sum_{k=1}^{2^n} \left| W\!\left(\frac{k}{2^n} t\right) - W\!\left(\frac{k-1}{2^n} t\right) \right| = \infty. \]

In other words, the total variation of a Wiener Process path is infinite, with probability 1.

Proof.

\[ \sum_{k=1}^{2^n} \left| W\!\left(\frac{k}{2^n} t\right) - W\!\left(\frac{k-1}{2^n} t\right) \right| \ge \frac{ \displaystyle \sum_{k=1}^{2^n} \left( W\!\left(\frac{k}{2^n} t\right) - W\!\left(\frac{k-1}{2^n} t\right) \right)^2 }{ \displaystyle \max_{1 \le k \le 2^n} \left| W\!\left(\frac{k}{2^n} t\right) - W\!\left(\frac{k-1}{2^n} t\right) \right| }. \]

The numerator on the right converges to t, while the denominator goes to 0 because Wiener Process paths are continuous, therefore uniformly continuous on bounded intervals. Therefore the fraction on the right goes to infinity. □
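
The divergence in Theorem 3 is easy to see numerically (an illustrative sketch, not part of the original notes): on one simulated path, the sum of absolute increments over dyadic partitions keeps growing without bound as the partition is refined.

# Total variation sums along one simulated path over refining dyadic partitions (sketch).
t <- 1
nmax <- 14
dW <- rnorm(2^nmax, mean = 0, sd = sqrt(t / 2^nmax))   # finest-level Wiener increments

for (n in seq(2, nmax, by = 2)) {
  coarse <- tapply(dW, rep(1:2^n, each = 2^(nmax - n)), sum)
  cat(sprintf("n = %2d: sum of absolute increments = %.2f\n", n, sum(abs(coarse))))
}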

Sources

The theorem in this section is drawn from A First Course in Stochastic Processes by S. Karlin and H. Taylor, Academic Press, 1975. The heuristic proof using the weak law was taken from Financial Calculus: An introduction to derivative pricing by M. Baxter and A. Rennie, Cambridge University Press, 1996, page 59. The mnemonic statement of the quadratic variation in differential form is derived from Steele’s text.

_______________________________________________________________________________________________

Algorithms, Scripts, Simulations

Algorithm

The simulation of the quadratic variation of the Wiener Process demonstrates how much the properties of the Wiener Process depend on the full limiting function, not on an approximation. For simulation of the quadratic variation of the Wiener Process using an approximation \( \hat{W}_N(t) \) of the Wiener Process, the mesh size of the partition should be approximately the same size as \( 1/N \). This is easiest to illustrate when the number of partition intervals is a divisor or multiple of N and the partition points are evenly spaced. We already showed above that if the mesh points coincide with the N steps of the scaled random walk, then the quadratic variation is 1.

Consider the case when the number of evenly spaced partition intervals is a multiple of N, say m = kN. Then on each subinterval \( [iT/m, (i+1)T/m] = [iT/(kN), (i+1)T/(kN)] \) the quadratic variation is \( \left( \pm \sqrt{T/N} \cdot \tfrac{1}{k} \right)^2 = T/(k^2 N) \), and the total over all kN subintervals is \( T/k \), which approaches 0 as k increases. This is not what is predicted by the theorem about the quadratic variation of the Wiener Process, but it is consistent with the approximation function \( \hat{W}_N(t) \), see the discussion below.

Now consider the case when m is a divisor of N, say km = N. Then the quadratic variation of \( \hat{W}_N(t) \) on a partition interval will be the sum of k scaled random walk steps, squared. An example will show what happens. With T = 1, let N = 1000 and let the partition have 200 equally spaced intervals, so k = 5. Then each partition interval will have a quadratic variation of:

\[ \left( \frac{1}{\sqrt{1000}} \left( Y_{5i+1} + Y_{5i+2} + Y_{5i+3} + Y_{5i+4} + Y_{5i+5} \right) \right)^2 = \frac{1}{1000} \left( Y_{5i+1} + \cdots + Y_{5i+5} \right)^2 . \]

Each partition interval quadratic variation has a mean of \( \frac{1}{1000} \cdot \frac{160}{32} = \frac{1}{200} \) and a variance of \( \left( \frac{1}{1000} \right)^2 \cdot \frac{1280}{32} = \frac{40}{1000^2} \). Add the 200 partition interval quadratic variations to get the total quadratic variation of \( \hat{W}_N(t) \) on [0,1]. By the Central Limit Theorem the total quadratic variation will be approximately normally distributed with mean 1 and standard deviation \( \sqrt{200} \cdot \frac{\sqrt{40}}{1000} = \frac{\sqrt{5}}{25} \approx 0.089 \).

Note that although \( \hat{W}_N(t) \) does not have a continuous first derivative, it fails to have a derivative at only finitely many points, so in that way it is almost differentiable. In fact, \( \hat{W}_N(t) \) satisfies a uniform Lipschitz condition (with Lipschitz constant \( \sqrt{N/T} \)), which is enough to show that it has bounded variation. As such, the total quadratic variation of \( \hat{W}_N(t) \), taken over partitions with mesh size going to zero, is 0.
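
The mean and standard deviation computed in the worked example above are easy to confirm by simulation. The script below (an illustrative sketch, not part of the original scripts; the variable names are mine) repeats the N = 1000, m = 200 experiment many times; the sample mean of the total quadratic variation should come out near 1 and the sample standard deviation near 0.089.

# Repeated simulation of the total quadratic variation of WcaretN on [0,1] (sketch).
N <- 1000; m <- 200; k <- N / m      # k = 5 scaled random walk steps per partition interval
reps <- 2000

totals <- replicate(reps, {
  Y <- 2 * (runif(N) <= 0.5) - 1
  S <- c(0, cumsum(Y)) / sqrt(N)           # node values of the scaled random walk
  nodes <- S[seq(1, N + 1, by = k)]        # values at the m + 1 partition points
  sum(diff(nodes)^2)                       # total quadratic variation on this path
})

cat(sprintf("mean = %.4f (predicted 1), sd = %.4f (predicted about 0.089)\n",
            mean(totals), sd(totals)))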

Scripts

R

R script for Quadratic Variation.

p <- 0.5
N <- 1000

T <- 1

S <- array(0, c(N+1))
rw <- cumsum( 2 * ( runif(N) <= p ) - 1 )
S[2:(N+1)] <- rw

WcaretN <- function(x) {
    Delta <- T/N

    # add 1 since arrays are 1-based
    prior <- floor(x/Delta) + 1
    subsequent <- ceiling(x/Delta) + 1

    sqrt(Delta)*(S[prior] + ((x/Delta+1) - prior)*(S[subsequent] - S[prior]))
}

m1 <- N/5
partition1 <- seq(0, T, 1/m1)
m2 <- N
partition2 <- seq(0, T, 1/m2)
m3 <- 3*N
partition3 <- seq(0, T, 1/m3)

qv1 <- sum( ( WcaretN(partition1[-1]) - WcaretN(partition1[-length(partition1)]) )^2 )
qv2 <- sum( ( WcaretN(partition2[-1]) - WcaretN(partition2[-length(partition2)]) )^2 )
qv3 <- sum( ( WcaretN(partition3[-1]) - WcaretN(partition3[-length(partition3)]) )^2 )

cat(sprintf("Quadratic variation of approximation of Wiener process paths with %d scaled random steps with %d partition intervals is: %f \n", N, m1, qv1))
cat(sprintf("Quadratic variation of approximation of Wiener process paths with %d scaled random steps with %d partition intervals is: %f \n", N, m2, qv2))
cat(sprintf("Quadratic variation of approximation of Wiener process paths with %d scaled random steps with %d partition intervals is: %f \n", N, m3, qv3))
Octave

Octave script for Quadratic Variation.

p = 0.5;

global N = 1000;
global T = 1;

global S
S = zeros(N+1, 1);
S(2:N+1) = cumsum( 2 * (rand(N,1) <= p) - 1);

function retval = WcaretN(x)
  global N;
  global T;
  global S;
  step = T/N;

  # add 1 since arrays are 1-based
  prior = floor(x/step) + 1;
  subsequent = ceil(x/step) + 1;

  retval = sqrt(step)*(S(prior) + ((x/step+1) - prior).*(S(subsequent) - S(prior)));

endfunction

m1 = N/5;
partition1 = transpose(linspace(0, T, m1+1));
m2 = N;
partition2 = transpose(linspace(0, T, m2+1));
m3 = 3*N;
partition3 = transpose(linspace(0, T, m3+1));

qv1 = sumsq( WcaretN(partition1(2:m1+1)) - WcaretN(partition1(1:m1)) );
qv2 = sumsq( WcaretN(partition2(2:m2+1)) - WcaretN(partition2(1:m2)) );
qv3 = sumsq( WcaretN(partition3(2:m3+1)) - WcaretN(partition3(1:m3)) );

printf("Quadratic variation of approximation of Wiener process paths with %d scaled random steps with %d partition intervals is: %f \n", N, m1, qv1)
printf("Quadratic variation of approximation of Wiener process paths with %d scaled random steps with %d partition intervals is: %f \n", N, m2, qv2)
printf("Quadratic variation of approximation of Wiener process paths with %d scaled random steps with %d partition intervals is: %f \n", N, m3, qv3)
Perl

Perl PDL script for Quadratic Variation.

use PDL;
use PDL::NiceSlice;

$p = 0.5;

$N = 1000;
$T = 1;

# the random walk
$S = zeros( $N + 1 );
$S(1:$N) .= cumusumover( 2 * ( random($N) <= $p ) - 1 );

# function WcaretN interpolating random walk
sub WcaretN {
    my $x = shift @_;
    $Delta = $T / $N;

    $prior      = floor( $x / $Delta );
    $subsequent = ceil( $x / $Delta );

    $retval =
        sqrt($Delta)
        * ( $S($prior)
            + ( ( $x / $Delta ) - $prior )
            * ( $S($subsequent) - $S($prior) ) );
}

$m1 = $N / 5;
$partition1 = zeroes( $m1 + 1 )->xlinvals( 0, $T );
$m2 = $N;
$partition2 = zeroes( $m2 + 1 )->xlinvals( 0, $T );
$m3 = 3 * $N;
$partition3 = zeroes( $m3 + 1 )->xlinvals( 0, $T );

$qv1 = sum( ( WcaretN( $partition1(1:$m1) ) - WcaretN( $partition1(0:$m1-1) ) )**2 );
$qv2 = sum( ( WcaretN( $partition2(1:$m2) ) - WcaretN( $partition2(0:$m2-1) ) )**2 );
$qv3 = sum( ( WcaretN( $partition3(1:$m3) ) - WcaretN( $partition3(0:$m3-1) ) )**2 );

print "Quadratic variation of approximation of Wiener process paths with ", $N, " scaled random steps with ", $m1, " partition intervals is: ", $qv1, "\n";
print "Quadratic variation of approximation of Wiener process paths with ", $N, " scaled random steps with ", $m2, " partition intervals is: ", $qv2, "\n";
print "Quadratic variation of approximation of Wiener process paths with ", $N, " scaled random steps with ", $m3, " partition intervals is: ", $qv3, "\n";
SciPy

Scientific Python script for Quadratic Variation.

import scipy

p = 0.5

N = 1000
T = 1.

# the random walk
S = scipy.zeros(N+1)
S[1:N+1] = scipy.cumsum( 2*( scipy.random.random(N) <= p ) - 1 )

def WcaretN(x):
    Delta = T/N
    prior = scipy.floor(x/Delta).astype(int)
    subsequent = scipy.ceil(x/Delta).astype(int)
    return scipy.sqrt(Delta)*(S[prior] + (x/Delta - prior)*(S[subsequent] - S[prior]))

m1 = N/5
partition1 = scipy.linspace( 0, T, m1+1)
m2 = N
partition2 = scipy.linspace( 0, T, m2+1)
m3 = 3*N
partition3 = scipy.linspace( 0, T, m3+1)

qv1 = scipy.sum( (WcaretN( partition1[1:m1+1] ) - WcaretN( partition1[0:m1] ) )**2)
qv2 = scipy.sum( (WcaretN( partition2[1:m2+1] ) - WcaretN( partition2[0:m2] ) )**2)
qv3 = scipy.sum( (WcaretN( partition3[1:m3+1] ) - WcaretN( partition3[0:m3] ) )**2)

print "Quadratic variation of approximation of Wiener process paths with ", N, " scaled random steps with ", m1, " partition intervals is: ", qv1
print "Quadratic variation of approximation of Wiener process paths with ", N, " scaled random steps with ", m2, " partition intervals is: ", qv2
print "Quadratic variation of approximation of Wiener process paths with ", N, " scaled random steps with ", m3, " partition intervals is: ", qv3

Problems to Work for Understanding

  1. Show that a monotone increasing function has bounded variation.
  2. Show that a function with continuous derivative has bounded variation.
  3. Show that the function
    \[ f(t) = \begin{cases} t^2 \sin(1/t) & 0 < t \le 1 \\ 0 & t = 0 \end{cases} \]

    is of bounded variation, while the function

    \[ f(t) = \begin{cases} t \sin(1/t) & 0 < t \le 1 \\ 0 & t = 0 \end{cases} \]

    is not of bounded variation.

  4. Show that a continuous function of bounded variation is also of quadratic variation.
  5. Show that the fourth moment \( \mathbb{E}\left[ Z^4 \right] = 3 \) where \( Z \sim N(0,1) \). Then show that
    \[ \mathbb{E}\left[ W_{nk}^2 \right] = \frac{2 t^2}{4^n}. \]

  6. Modifying one of the scripts, find the quadratic variation of \( \hat{W}_N(t) \) with a partition with m partition intervals whose endpoints are randomly selected values in [0,T]. One way to approach this is to create a list of m − 1 points uniformly distributed in [0,T], append the values 0 and T to the list, then sort into ascending order to create the partition points. Find the mesh of this random partition and print its value along with the quadratic variation. What happens when m is a multiple of N? What happens when m is a divisor of N? What could possibly go wrong in this calculation?
  7. Generalize the example with N = 1000 and m = 200 of the quadratic variation of \( \hat{W}_N(t) \) on [0, 1] to the case when the number of partition intervals m is a divisor of some N, say km = N.

__________________________________________________________________________

Books

Reading Suggestion:

References

[1]   M. Baxter and A. Rennie. Financial Calculus: An introduction to derivative pricing. Cambridge University Press, 1996. HG 6024 A2W554.

[2]   S. Karlin and H. Taylor. A Second Course in Stochastic Processes. Academic Press, 1981.

[3]   J. Michael Steele. Stochastic Calculus and Financial Applications. Springer-Verlag, 2001. QA 274.2 S 74.

__________________________________________________________________________

Links

Outside Readings and Links:

  1. Wikipedia, Quadratic variation. Contributed by S. Dunbar, November 10, 2009.
  2. Michael Kozdron, University of Regina. Contributed by S. Dunbar, November 10, 2009.

__________________________________________________________________________

I check all the information on each page for correctness and typographical errors. Nevertheless, some errors may occur and I would be grateful if you would alert me to such errors. I make every reasonable effort to present current and accurate information for public use, however I do not guarantee the accuracy or timeliness of information on this website. Your use of the information from this website is strictly voluntary and at your risk.

I have checked the links to external sites for usefulness. Links to external websites are provided as a convenience. I do not endorse, control, monitor, or guarantee the information contained in any external website. I don’t guarantee that the links are active at all times. Use the links here with the same caution as you would all information on the Internet. This website reflects the thoughts, interests and opinions of its author. They do not explicitly represent official positions or policies of my employer.

Information on this website is subject to change without notice.

Steve Dunbar’s Home Page, http://www.math.unl.edu/~sdunbar1

Email to Steve Dunbar, sdunbar1 at unl dot edu

Last modified: Processed from LaTeX source on August 2, 2016