Steven R. Dunbar
Department of Mathematics
203 Avery Hall
Lincoln, NE 68588-0130
http://www.math.unl.edu
Voice: 402-472-3731
Fax: 402-472-8466

Stochastic Processes and
Advanced Mathematical Finance

__________________________________________________________________________

Transformations of the Wiener Process

_______________________________________________________________________

Note: These pages are prepared with MathJax. MathJax is an open source JavaScript display engine for mathematics that works in all browsers. See http://mathjax.org for details on supported browsers, accessibility, copy-and-paste, and other features.

_______________________________________________________________________________________________

### Rating

Mathematically Mature: may contain mathematics beyond calculus with proofs.

_______________________________________________________________________________________________

### Section Starter Question

Suppose you know the graph $y = f(x)$ of the function $f(x)$. What is the effect on the graph of the transformation $f(x+h) - f(h)$? What is the effect on the graph of the transformation $f(1/x)$? Consider the function $f(x) = \sin(x)$ as an example.
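As a quick numerical check on the starter question, the sketch below evaluates both transformations for $f(x) = \sin(x)$; the shift value $h = 0.5$ is an arbitrary choice for illustration.

```python
import math

# example function f(x) = sin(x) and an arbitrary shift h
f = math.sin
h = 0.5

# g(x) = f(x + h) - f(h): the graph shifts left by h and then moves
# down by f(h), so the transformed graph passes through the origin
g = lambda x: f(x + h) - f(h)
print(g(0.0))              # 0.0: the graph is "restarted" at the origin

# k(x) = f(1/x): inversion of the independent-variable axis about 1;
# the value at x = 2 is the original value at x = 1/2, and vice versa
k = lambda x: f(1.0 / x)
print(k(2.0) == f(0.5))    # True
```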

_______________________________________________________________________________________________

### Key Concepts

1. Three transformations of the Wiener process produce another Wiener process. The transformations are scaling, inversion and translation. These results are especially helpful when studying the properties of the sample paths of Brownian motion.

__________________________________________________________________________

### Vocabulary

1. Scaling, also called re-scaling, is the transformation of $f(t)$ to $bf(t/a)$, which expands or contracts the time axis (as $a>1$ or $a<1$) and expands or contracts the dependent-variable scale (as $b>1$ or $b<1$).
2. Translation, also called shifting, is the transformation of $f(t)$ to $f(t+h)$, or sometimes of $f(t)$ to $f(t+h) - f(h)$.
3. Inversion is the transformation of $f(t)$ to $f(1/t)$. It "flips" the independent-variable axis about $1$, so that the interval $(0,1)$ is "inverted" to the interval $(1,\infty)$.

__________________________________________________________________________

### Mathematical Ideas

#### Transformations of the Wiener Process

A set of transformations of the Wiener process each produce the Wiener process again. Since these transformations result in the Wiener process, each tells us something about the "shape" and "characteristics" of the Wiener process. These results are especially helpful when studying the properties of the Wiener process sample paths. The first of these transformations is a time homogeneity that says the Wiener process can be re-started anywhere. The second says that the Wiener process may be rescaled in time and space. The third is an inversion. Roughly, each of these says the Wiener process is self-similar in various ways. See the comments after the proofs for more detail.

Theorem 1. Let

1. $W_{\text{shift}}(t) = W(t+h) - W(h)$, for fixed $h > 0$.
2. $W_{\text{scale}}(t) = cW(t/c^2)$, for fixed $c > 0$.

Then each of $W_{\text{shift}}(t)$ and $W_{\text{scale}}(t)$ is a version of the Standard Wiener Process.

Proof. We have to systematically check each of the defining properties of the Wiener process in turn for each of the transformed processes.

1. ${W}_{\text{shift}}\left(t\right)=W\left(t+h\right)-W\left(h\right).$

1. The increment is $\begin{array}{llll}\hfill & {W}_{\text{shift}}\left(t+s\right)-{W}_{\text{shift}}\left(s\right)\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & \phantom{\rule{2em}{0ex}}=\left[W\left(t+s+h\right)-W\left(h\right)\right]-\left[W\left(s+h\right)-W\left(h\right)\right]\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & \phantom{\rule{2em}{0ex}}=W\left(t+s+h\right)-W\left(s+h\right)\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\end{array}$

which is by deﬁnition normally distributed with mean $0$ and variance $t$.

2. The increment
${W}_{\text{shift}}\left({t}_{4}\right)-{W}_{\text{shift}}\left({t}_{3}\right)=W\left({t}_{4}+h\right)-W\left({t}_{3}+h\right)$

is independent from

${W}_{\text{shift}}\left({t}_{2}\right)-{W}_{\text{shift}}\left({t}_{1}\right)=W\left({t}_{2}+h\right)-W\left({t}_{1}+h\right)$

by the property of independence of disjoint increments of $W\left(t\right)$.

3. ${W}_{\text{shift}}\left(0\right)=W\left(0+h\right)-W\left(h\right)=0.$

4. As the composition and difference of continuous functions, ${W}_{\text{shift}}$ is continuous.
2. $W_{\text{scale}}(t) = cW(t/c^2)$

1. The increment
$$\begin{aligned}
W_{\text{scale}}(t) - W_{\text{scale}}(s) &= cW(t/c^2) - cW(s/c^2) \\
&= c\left(W(t/c^2) - W(s/c^2)\right)
\end{aligned}$$

is normally distributed because it is a multiple of a normally distributed random variable. Since the increment $W(t/c^2) - W(s/c^2)$ has mean zero,

$$W_{\text{scale}}(t) - W_{\text{scale}}(s) = c\left(W(t/c^2) - W(s/c^2)\right)$$

must have mean zero. The variance is

$$\begin{aligned}
\mathbb{E}\left[\left(W_{\text{scale}}(t) - W_{\text{scale}}(s)\right)^2\right] &= \mathbb{E}\left[\left(cW(t/c^2) - cW(s/c^2)\right)^2\right] \\
&= c^2\,\mathbb{E}\left[\left(W(t/c^2) - W(s/c^2)\right)^2\right] \\
&= c^2\left(t/c^2 - s/c^2\right) = t - s.
\end{aligned}$$
2. Note that if $t_1 < t_2 \le t_3 < t_4$, then $t_1/c^2 < t_2/c^2 \le t_3/c^2 < t_4/c^2$, and the corresponding increments $W(t_4/c^2) - W(t_3/c^2)$ and $W(t_2/c^2) - W(t_1/c^2)$ are independent. Then the multiples of each by $c$ are independent, and so $W_{\text{scale}}(t_4) - W_{\text{scale}}(t_3)$ and $W_{\text{scale}}(t_2) - W_{\text{scale}}(t_1)$ are independent.
3. $W_{\text{scale}}(0) = cW(0/c^2) = cW(0) = 0$.
4. As the composition of continuous functions, ${W}_{\text{scale}}$ is continuous.
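The scaling part of the proof can be sanity-checked numerically. The sketch below is a Monte Carlo check, not part of the proof; the values of $c$, $t$, the sample size, and the random seed are arbitrary choices. Since $W(t/c^2)$ is $N(0, t/c^2)$, one Gaussian draw per path suffices, and the sample variance of $cW(t/c^2)$ should be near $t$:

```python
import math
import random
import statistics

random.seed(7)

c = 2.0            # the fixed scaling constant c > 0
t = 1.0
n_paths = 200000

# each sample is W_scale(t) = c * W(t/c^2), where W(t/c^2) ~ N(0, t/c^2)
samples = [c * random.gauss(0.0, math.sqrt(t / c**2)) for _ in range(n_paths)]

# the sample variance should be close to t, as the theorem asserts
var = statistics.pvariance(samples)
print(abs(var - t) < 0.05)
```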

Theorem 2. Suppose $W(t)$ is a Standard Wiener Process. Then the transformed process $W_{\text{inv}}(t) = tW(1/t)$ for $t > 0$, with $W_{\text{inv}}(0) = 0$, is a version of the Standard Wiener Process.

Proof. To show that

$$W_{\text{inv}}(t) = tW(1/t)$$

is a Wiener process by the four defining properties requires another fact which is outside the scope of the text. The fact is that any Gaussian process $X(t)$ with mean $0$ and $\mathrm{Cov}[X(s), X(t)] = \min(s,t)$ must be the Wiener process. See the references and outside links for more information. Using this information, a partial proof follows:

1. $W_{\text{inv}}(t) - W_{\text{inv}}(s) = tW(1/t) - sW(1/s)$

is the difference of jointly Gaussian, normally distributed random variables, each with mean $0$, so the difference will be normal with mean $0$. It remains to check that the normal random variable has the correct variance.

Assuming $s < t$, so that $1/t < 1/s$,

$$\begin{aligned}
\mathbb{E}\left[\left(W_{\text{inv}}(t) - W_{\text{inv}}(s)\right)^2\right]
&= \mathbb{E}\left[\left(sW(1/s) - tW(1/t)\right)^2\right] \\
&= \mathbb{E}\left[\left( s\left(W(1/s) - W(1/t)\right) + (s-t)\left(W(1/t) - W(0)\right) \right)^2\right] \\
&= s^2\,\mathbb{E}\left[\left(W(1/s) - W(1/t)\right)^2\right] \\
&\quad + 2s(s-t)\,\mathbb{E}\left[\left(W(1/s) - W(1/t)\right)\left(W(1/t) - W(0)\right)\right] \\
&\quad + (s-t)^2\,\mathbb{E}\left[\left(W(1/t) - W(0)\right)^2\right] \\
&= s^2\,\mathbb{E}\left[\left(W(1/s) - W(1/t)\right)^2\right] + (s-t)^2\,\mathbb{E}\left[\left(W(1/t) - W(0)\right)^2\right] \\
&= s^2\left(1/s - 1/t\right) + (s-t)^2\left(1/t\right) \\
&= t - s.
\end{aligned}$$

Note the use of the independence of $W(1/s) - W(1/t)$ from $W(1/t) - W(0)$, which makes the cross term vanish at the fourth equality.

2. It is hard to show the independence of increments directly. Instead rely on the fact that a Gaussian process with mean $0$ and covariance function $min\left(s,t\right)$ is a Wiener process, and thus prove it indirectly.

Note that

$$\mathrm{Cov}\left[W_{\text{inv}}(s), W_{\text{inv}}(t)\right] = st\min(1/s, 1/t) = \min(s, t).$$

3. By definition, $W_{\text{inv}}(0) = 0$.
4. The argument that $\lim_{t \to 0} W_{\text{inv}}(t) = 0$ is equivalent to showing that $\lim_{t \to \infty} W(t)/t = 0$. To show this requires use of Kolmogorov's inequality for the Wiener process and clever use of the Borel-Cantelli lemma, and is beyond the scope of this course. Use the translation property from Theorem 1 to prove continuity at every other value of $t$.
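The covariance identity above can also be checked by simulation. The sketch below is an illustrative Monte Carlo check; the particular times $s$, $t$, the sample size, and the seed are arbitrary choices. It builds $W$ jointly at times $1/t$ and $1/s$ from independent increments and estimates $\mathrm{Cov}[W_{\text{inv}}(s), W_{\text{inv}}(t)]$:

```python
import math
import random

random.seed(11)

s, t = 0.5, 2.0     # expect Cov[W_inv(s), W_inv(t)] = min(s, t) = 0.5
n_paths = 200000

acc = 0.0
for _ in range(n_paths):
    # sample W at times 1/t = 0.5 and 1/s = 2.0 via independent increments
    w_at_half = random.gauss(0.0, math.sqrt(0.5))             # W(1/t)
    w_at_two = w_at_half + random.gauss(0.0, math.sqrt(1.5))  # W(1/s)
    # W_inv(s) = s*W(1/s) and W_inv(t) = t*W(1/t) both have mean 0,
    # so the covariance is the mean of the product
    acc += (s * w_at_two) * (t * w_at_half)

cov = acc / n_paths
print(abs(cov - min(s, t)) < 0.05)   # estimate is close to min(s, t)
```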

The following comments are adapted from Stochastic Calculus and Financial Applications by J. Michael Steele, Springer, New York, 2001, page 40. These laws tie the Wiener process to three important groups of transformations on $[0,\infty)$, and a basic lesson from the theory of differential equations is that such symmetries can be extremely useful. On a second level, the laws also capture the somewhat magical fractal nature of the Wiener process. The scaling law tells us that if we had even one-billionth of a second of a Wiener process path, we could expand it to a billion years' worth of an equally valid Wiener process path! The translation symmetry is not quite so startling; it merely says that the Wiener process can be restarted anywhere. That is, any part of a Wiener process captures the same behavior as at the origin. The inversion law is perhaps most impressive; it tells us that the first second of the life of a Wiener process path is rich enough to capture the behavior of a Wiener process path from the end of the first second until the end of time.

_______________________________________________________________________________________________

### Algorithms, Scripts, Simulations

#### Algorithm

Let constants $h$ and $c$ be given. Create an approximation $\hat{W}(t)$ of the Wiener process $W(t)$ on a domain $[0,T]$ large enough to accommodate evaluating $\hat{W}(1+h)$ and $\hat{W}(1/c^2)$. Using the approximation $\hat{W}(t)$, create approximations to $W_{\text{shift}}(t)$ and $W_{\text{scale}}(t)$ with the functions $\hat{W}(t+h) - \hat{W}(h)$ and $c\hat{W}(t/c^2)$. Plot all three functions on the same set of axes.

#### Scripts

Geogebra
R

```r
p <- 0.5
N <- 400

T <- 2
h <- 0.25
c <- 2.0

# the random walk, with S[1] = 0
S <- array(0, c(N+1))
rw <- cumsum( 2 * ( runif(N) <= p ) - 1 )
S[2:(N+1)] <- rw

# piecewise linear interpolation of the scaled random walk
WcaretN <- function(x) {
    Delta <- T/N

    # add 1 since arrays are 1-based
    prior <- floor(x/Delta) + 1
    subsequent <- ceiling(x/Delta) + 1

    sqrt(Delta)*(S[prior] + ((x/Delta+1) - prior)*(S[subsequent] - S[prior]))
}

Wshift <- function(x) {
    WcaretN(x+h) - WcaretN(h)
}

Wscale <- function(x) {
    c*WcaretN(x/c^2)
}

curve(WcaretN, 0, 1, n=400, col = "black")
curve(Wshift, 0, 1, n=400, add = TRUE, col = "blue")
curve(Wscale, 0, 1, n=400, add = TRUE, col = "red")
```
Octave
```octave
p = 0.5;

global N = 400;
global T = 2;

global h = 0.25;
global c = 2.0;

# the random walk, with S(1) = 0
global S
S = zeros(N+1, 1);
S(2:N+1) = cumsum( 2 * (rand(N,1) <= p) - 1 );

# piecewise linear interpolation of the scaled random walk
function retval = WcaretN(x)
  global N;
  global T;
  global S;
  Delta = T/N;

  # add 1 since arrays are 1-based
  prior = floor(x/Delta) + 1;
  subsequent = ceil(x/Delta) + 1;

  retval = sqrt(Delta)*(S(prior) + ((x/Delta+1) - prior).*(S(subsequent) - S(prior)));
endfunction

function retval = Wshift(x)
  global h;
  retval = WcaretN(x + h) - WcaretN(h);
endfunction

function retval = Wscale(x)
  global c;
  retval = c*WcaretN(x/c^2);
endfunction

fplot("[WcaretN(x), Wshift(x), Wscale(x)]", [0, 1])
```
Perl
```perl
use PDL;
use PDL::NiceSlice;

$p = 0.5;

$N = 400;
$T = 2;
$h = 0.25;
$c = 2.0;

# the random walk, with entry 0 equal to 0
$S = zeros( $N + 1 );
$S ( 1 : $N ) .= cumusumover( 2 * ( random($N) <= $p ) - 1 );

# function WcaretN interpolating the random walk
sub WcaretN {
    my $x = shift @_;
    $Delta = $T / $N;

    $prior      = floor( $x / $Delta );
    $subsequent = ceil( $x / $Delta );

    $retval =
        sqrt($Delta)
      * ( $S ($prior)
          + ( ( $x / $Delta ) - $prior )
          * ( $S ($subsequent) - $S ($prior) ) );
}

sub Wshift {
    my $x = shift @_;
    $retval = WcaretN( $x + $h ) - WcaretN($h);
}

sub Wscale {
    my $x = shift @_;
    $retval = $c * WcaretN( $x / ( $c * $c ) );
}

# file output to use with an external plotting program
# such as gnuplot.  Start gnuplot, then from the gnuplot prompt:
# plot "wienerprocess.dat" using 1:2 with lines title 'WcaretN', \
#      "wienerprocess.dat" using 1:3 with lines title 'Wshift', \
#      "wienerprocess.dat" using 1:4 with lines title 'Wscale'
$M     = 300;
$tgrid = zeros( $M + 1 )->xlinvals( 0, 1 );
$W     = WcaretN($tgrid);
$Wsh   = Wshift($tgrid);
$Wsc   = Wscale($tgrid);

open( F, ">wienerprocess.dat" ) || die "cannot write: $!";
foreach $j ( 0 .. $M ) {
    print F $tgrid->range( [$j] ), " ", $W->range( [$j] ), " ",
        $Wsh->range( [$j] ), " ", $Wsc->range( [$j] ), "\n";
}
close(F);
```
SciPy
```python
# uses numpy directly; older versions of scipy re-exported these
# array routines, but the aliases have since been removed
import numpy as np

p = 0.5

N = 400
T = 2.
h = 0.25
c = 2.0

# the random walk, with S[0] = 0
S = np.zeros(N+1)
S[1:N+1] = np.cumsum( 2*( np.random.random(N) <= p ) - 1 )

# piecewise linear interpolation of the scaled random walk
def WcaretN(x):
    Delta = T/N
    prior = np.floor(x/Delta).astype(int)
    subsequent = np.ceil(x/Delta).astype(int)
    return np.sqrt(Delta)*(S[prior] + (x/Delta - prior)*(S[subsequent] - S[prior]))

def Wshift(x):
    return WcaretN(x + h) - WcaretN(h)

def Wscale(x):
    return c*WcaretN(x/c**2.)

M = 300
tgrid = np.linspace(0, 1, M+1)
W = WcaretN(tgrid)
Wsh = Wshift(tgrid)
Wsc = Wscale(tgrid)

# optional file output to use with an external plotting program
# such as gnuplot, R, octave, etc.
# Start gnuplot, then from the gnuplot prompt:
#  plot "wienerprocess.dat" using 1:2 with lines title 'WcaretN', \
#       "wienerprocess.dat" using 1:3 with lines title 'Wshift', \
#       "wienerprocess.dat" using 1:4 with lines title 'Wscale'
f = open('wienerprocess.dat', 'w')
for j in range(0, M+1):
    f.write(str(tgrid[j]) + ' ' + str(W[j]) + ' ' + str(Wsh[j]) + ' ' + str(Wsc[j]) + '\n')

f.close()
```

__________________________________________________________________________

#### Sources

This section is adapted from: A First Course in Stochastic Processes by S. Karlin, and H. Taylor, Academic Press, 1975, pages 351–353 and Financial Derivatives in Theory and Practice by P. J. Hunt and J. E. Kennedy, John Wiley and Sons, 2000, pages 23–24.

_______________________________________________________________________________________________

### Problems to Work for Understanding

1. Explain why there is no script for simulation or approximation of the inversion transformation of the Wiener process, or if possible provide such a script.
2. Given the piecewise linear approximation ${Ŵ}_{N}\left(t\right)$, what are the slopes of the piecewise linear segments of the scaling transformation $c{Ŵ}_{N}\left(t∕{c}^{2}\right)$ ?
3. Modify the scripts to plot an approximation of ${W}_{\text{scale}}\left(t\right)$ on $\left[0,1\right]$ with the same degree of approximation as ${Ŵ}_{N}\left(t\right)$ for some $N$. Plot both on the same set of axes.
4. Show that $st\min(1/s, 1/t) = \min(s, t)$.

__________________________________________________________________________

### References

[1]   P. J. Hunt and J. E. Kennedy. Financial Derivatives in Theory and Practice. John Wiley and Sons, 2000. HG 6024 A3H86 2000.

[2]   S. Karlin and H. Taylor. A Second Course in Stochastic Processes. Academic Press, 1981.

[3]   J. Michael Steele. Stochastic Calculus and Financial Applications. Springer-Verlag, 2001. QA 274.2 S 74.

__________________________________________________________________________

1. Russell Gerrard, City University, London, Stochastic Modeling. Notes for the MSc in Actuarial Science, 2003–2004. Contributed by S. Dunbar, October 30, 2005.

__________________________________________________________________________

I check all the information on each page for correctness and typographical errors. Nevertheless, some errors may occur and I would be grateful if you would alert me to such errors. I make every reasonable effort to present current and accurate information for public use, however I do not guarantee the accuracy or timeliness of information on this website. Your use of the information from this website is strictly voluntary and at your risk.

I have checked the links to external sites for usefulness. Links to external websites are provided as a convenience. I do not endorse, control, monitor, or guarantee the information contained in any external website. I don't guarantee that the links are active at all times. Use the links here with the same caution as you would all information on the Internet. This website reflects the thoughts, interests and opinions of its author. They do not explicitly represent official positions or policies of my employer.

Information on this website is subject to change without notice.