Steven R. Dunbar
Department of Mathematics
203 Avery Hall
Lincoln, NE 68588-0130
http://www.math.unl.edu
Voice: 402-472-3731
Fax: 402-472-8466

__________________________________________________________________________

Stochastic Diﬀerential Equations and the Euler-Maruyama Method

_______________________________________________________________________

Note: These pages are prepared with MathJax. MathJax is an open source JavaScript display engine for mathematics that works in all browsers. See http://mathjax.org for details on supported browsers, accessibility, copy-and-paste, and other features.

_______________________________________________________________________________________________

### Rating

Mathematically Mature: may contain mathematics beyond calculus with proofs.

_______________________________________________________________________________________________

### Section Starter Question

Explain how to use a slope-ﬁeld diagram to solve the ordinary diﬀerential equation

$\frac{dx}{dt} = x.$

How would you turn that process into an algorithm to numerically compute an approximate solution without a diagram?

_______________________________________________________________________________________________

### Key Concepts

1. We can numerically simulate the solution to stochastic diﬀerential equations with an analog to Euler’s method, called the Euler-Maruyama (EM) method.

__________________________________________________________________________

### Vocabulary

1. A stochastic diﬀerential equation is a mathematical equation relating a stochastic process to its local deterministic and random components. The goal is to extend the relation to ﬁnd the stochastic process. Under mild conditions on the relationship, and with a specifying initial condition, solutions of stochastic diﬀerential equations exist and are unique.
2. The Euler-Maruyama (EM) method is a numerical method for simulating the solutions of a stochastic differential equation based on the definition of the Itô stochastic integral: Given
$dX(t) = G(X(t))\,dt + H(X(t))\,dW(t), \qquad X(t_0) = X_0,$

and a step size $dt$, we approximate and simulate with

$X_j = X_{j-1} + G(X_{j-1})\,dt + H(X_{j-1})\left(W(t_{j-1} + dt) - W(t_{j-1})\right).$

3. Extensions and variants of standard Brownian motion deﬁned through stochastic diﬀerential equations are Brownian motion with drift, scaled Brownian motion, and geometric Brownian motion.

__________________________________________________________________________

### Mathematical Ideas

#### Stochastic Diﬀerential Equations: Symbolically

The straight line segment is the building block of diﬀerential calculus. The basic idea behind diﬀerential calculus is that diﬀerentiable functions, no matter how diﬃcult their global behavior, are locally approximated by straight line segments. In particular, this is the idea behind Euler’s method for approximating diﬀerentiable functions deﬁned by diﬀerential equations.

We know that rescaling (“zooming in” on) Brownian motion does not produce a straight line; it produces another image of Brownian motion. This self-similarity is ideal for an infinitesimal building block: for instance, we could build global Brownian motion out of lots of local “chunks” of Brownian motion. This suggests we could build other stochastic processes out of suitably scaled Brownian motion. In addition, if we include straight line segments we can overlay the behavior of differentiable functions onto the stochastic processes as well. Thus, straight line segments and “chunks” of Brownian motion are the building blocks of stochastic calculus.

With stochastic differential calculus, we can build new stochastic processes. We do this by specifying how to build the new stochastic processes locally from our base deterministic function, the straight line, and our base stochastic process, standard Brownian motion. We write the local change in value of the stochastic process over a time interval of (infinitesimal) length $dt$ as

$dX = G(X(t))\,dt + H(X(t))\,dW(t), \qquad X(t_0) = X_0.$ (1)

Note that we are not allowed to write

$\frac{dX}{dt} = G(X(t)) + H(X(t))\frac{dW}{dt}, \qquad X(t_0) = X_0$

since standard Brownian motion is nowhere diﬀerentiable with probability 1. Actually, the informal stochastic diﬀerential equation (1) is a compact way of writing a rigorously deﬁned, equivalent implicit Itô integral equation. Since we do not have the required rigor, we will approach the stochastic diﬀerential equation intuitively.

The stochastic diﬀerential equation says the initial point $\left({t}_{0},{X}_{0}\right)$ is speciﬁed, perhaps with ${X}_{0}$ a random variable with a given distribution. A deterministic component at each point has a slope determined through $G$ at that point. In addition, some random perturbation aﬀects the evolution of the process. The random perturbation is normally distributed with mean $0$. The variance of the random perturbation is ${\left(H\left(X\left(t\right)\right)\right)}^{2}$ at $\left(t,X\left(t\right)\right)$. This is a simple expression of a Stochastic Diﬀerential Equation (SDE) which determines a stochastic process, just as an Ordinary Diﬀerential Equation (ODE) determines a diﬀerentiable function. We extend the process with the incremental change information and repeat. This is an expression in words of the Euler-Maruyama method for numerically simulating the stochastic diﬀerential expression.

Example. A very simple stochastic diﬀerential equation is

$dX = r\,dt + dW, \qquad X(0) = b$

with $r$ a constant. Take the deterministic initial condition to be $X(0) = b$. The new process is the stochastic extension of the differential equation expression of a straight line. The new stochastic process $X$ is drifting or trending at constant rate $r$ with a random variation due to Brownian motion perturbations around that trend. We will later show explicitly that the solution of this SDE is $X(t) = b + rt + W(t)$, although it seems intuitively clear that this should be the process. We will call this Brownian motion with drift.
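A quick numerical check of this claim is easy with normal increments (a minimal sketch; the variable names and parameter values are my own, not from the text). For $dX = r\,dt + dW$ the EM recursion telescopes, so fed the same Brownian increments, the EM path agrees with $X(t) = b + rt + W(t)$ exactly at the grid points, up to rounding.

```python
import numpy as np

# Sketch (parameters are illustrative): for dX = r dt + dW, the EM
# recursion X_j = X_{j-1} + r dt + dW_j telescopes to b + r t_j + W(t_j).
rng = np.random.default_rng(0)
r, b, T, M = 2.0, 1.0, 1.0, 100
dt = T / M
dW = rng.normal(0.0, np.sqrt(dt), M)        # Brownian increments
W = np.concatenate(([0.0], np.cumsum(dW)))  # W(t_j) at the grid points
t = np.linspace(0.0, T, M + 1)

X = np.empty(M + 1)
X[0] = b
for j in range(1, M + 1):
    X[j] = X[j-1] + r*dt + dW[j-1]          # one Euler-Maruyama step

exact = b + r*t + W                         # claimed solution at grid points
print(np.max(np.abs(X - exact)))            # agrees up to rounding error
```

For this particular SDE the coefficients are constant, so EM commits no discretization error beyond floating-point rounding; that is special to additive, constant-coefficient equations.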

Example. Another very simple stochastic diﬀerential equation is

$dX = \sigma\,dW, \qquad X(0) = b.$

This stochastic diﬀerential equation says that the process is evolving as a multiple of standard Brownian motion. The solution may be easily guessed as $X\left(t\right)=\sigma W\left(t\right)$ which has variance ${\sigma }^{2}t$ on increments of length $t$. Sometimes the new process is called Brownian motion (in contrast to standard Brownian motion which has variance $t$ on increments of length $t$).

We combine the previous two examples to consider

$dX = r\,dt + \sigma\,dW, \qquad X(0) = b,$

which has solution $X\left(t\right)=b+rt+\sigma W\left(t\right)$, a multiple of Brownian motion with drift $r$ started at $b$. Sometimes this extension of standard Brownian motion is called Brownian motion. Some authors consider this process directly instead of the more special case we considered in the previous chapter.

Example. The next simplest and ﬁrst non-trivial diﬀerential equation is

$dX = X\,dW.$

Here the differential equation says that the process is evolving like Brownian motion with a variance that is the square of the process value. When the process is small, the variance is small; when the process is large, the variance is large. Expressing the stochastic differential equation as $dX/X = dW$, we may say that the relative change acts like standard Brownian motion. The resulting stochastic process is called geometric Brownian motion, and it will figure extensively later as a model of security prices.

Example. The next simplest diﬀerential equation is

$dX = rX\,dt + \sigma X\,dW, \qquad X(0) = b.$

Here the stochastic differential equation says that the growth of the process at a point is proportional to the process value, with a random perturbation proportional to the process value. Again looking ahead, we could write the differential equation as $dX/X = r\,dt + \sigma\,dW$ and interpret it to say that the relative change over an interval is proportional to the length of time observed, together with a random perturbation like a Brownian increment over that interval. We will show later that the analytic expression for the stochastic process defined by this SDE is $b\exp\left(\left(r - \frac{1}{2}\sigma^2\right)t + \sigma W(t)\right)$.
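Looking ahead once more, the closed form implies $\mathbb{E}[X(t)] = b e^{rt}$, since $\mathbb{E}[e^{\sigma W(t)}] = e^{\sigma^2 t/2}$. That gives a simple sanity check on EM simulations; a sketch with parameter choices of my own:

```python
import numpy as np

# Sketch (parameters illustrative): EM paths for dX = rX dt + sigma X dW.
# The closed form b exp((r - sigma^2/2)t + sigma W(t)) has mean b e^{rt},
# so the sample mean of many EM paths at t = T should sit near b e^{rT}.
rng = np.random.default_rng(1)
r, sigma, b, T = 1.0, 0.5, 1.0, 1.0
M, paths = 100, 10000
dt = T / M

X = np.full(paths, b)
for _ in range(M):
    dW = rng.normal(0.0, np.sqrt(dt), paths)  # one increment per path
    X = X + r*X*dt + sigma*X*dW               # EM step, vectorized over paths

print(X.mean(), b*np.exp(r*T))                # sample mean near b*e^{rT}
```

The agreement is limited both by the Monte Carlo error from the finite number of paths and by the weak discretization error of the EM method, each small here.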

#### Stochastic Diﬀerential Equations: Numerically

The sample path that the Euler-Maruyama method produces numerically is the analog of the approximate solution that Euler's method produces for an ordinary differential equation.

The formula for the Euler-Maruyama (EM) method is based on the deﬁnition of the Itô stochastic integral:

$\begin{aligned} X_j &= X_{j-1} + G(X_{j-1})\,dt + H(X_{j-1})\left(W(t_{j-1} + dt) - W(t_{j-1})\right), \\ t_j &= t_{j-1} + dt. \end{aligned}$

Note that the initial conditions ${X}_{0}$ and ${t}_{0}$ set the starting point.
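The recursion is short enough to state as code. The following is a minimal sketch (function and variable names are my own), using independent normal random variates for the increments rather than the coin-flipping construction described next:

```python
import numpy as np

def euler_maruyama(G, H, X0, t0, T, M, rng):
    """Simulate dX = G(X) dt + H(X) dW on [t0, T] with M EM steps.

    Returns the time grid and one simulated sample path."""
    dt = (T - t0) / M
    t = t0 + dt*np.arange(M + 1)
    X = np.empty(M + 1)
    X[0] = X0
    for j in range(1, M + 1):
        dW = rng.normal(0.0, np.sqrt(dt))   # W(t_{j-1}+dt) - W(t_{j-1})
        X[j] = X[j-1] + G(X[j-1])*dt + H(X[j-1])*dW
    return t, X

# With H = 0 the recursion reduces to Euler's method for dX/dt = G(X),
# which gives a deterministic check against the known value e^{rT}.
rng = np.random.default_rng(0)
t, X = euler_maruyama(lambda x: 2*x, lambda x: 0.0, 1.0, 0.0, 1.0, 1000, rng)
print(X[-1])   # close to e^2 = 7.389...
```

Taking $H \equiv 0$ is a useful test in general: any EM implementation should collapse to the ordinary Euler method when the noise term is switched off.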

In this text, we use coin-flipping sequences of an appropriate length, scaled to create an approximation to $W(t)$ just as in the section Approximation to Brownian Motion. The coin-flipping sequences emphasize the discrete nature of the simulations with an easily constructed random process, consistent with the approach of this text. Since the increments $W(t_{j-1} + dt) - W(t_{j-1})$ are independent and identically distributed, we use independent coin-flip sequences to generate the approximations of the increments. The EM method could instead use independent normal random variates directly for the increments $W(t_{j-1} + dt) - W(t_{j-1})$, which would be easier and more efficient; the exercises modify the example scripts to do so.

Then

$dW = W(t_{j-1} + dt) - W(t_{j-1}) = W(dt) \approx \hat{W}_N(dt) = \frac{\hat{W}(N\,dt)}{\sqrt{N}} = \sqrt{dt}\,\frac{\hat{W}(N\,dt)}{\sqrt{N\,dt}}.$

The first equality above is the definition of an increment. The second equality means the random variables $W(t_{j-1} + dt) - W(t_{j-1})$ and $W(dt)$ have the same distribution, because the definition of standard Brownian motion specifies that increments of equal length are normally distributed with variance equal to the increment length. The approximate equality comes from the approximation of Brownian motion by coin-flipping sequences. We generate the approximations using a random number generator, but we could as well use actual coin-flipping. In Table 1 the generation of the sequences is not recorded, only the summed and scaled (independently sampled) outcomes. For convenience, take $dt = 1/10$ and $N = 100$, so we need $\hat{W}(100 \cdot (1/10))/\sqrt{100} = T_{10}/10$. Then to obtain the entries in the column labeled $dW$ in the table we flip a coin $10$ times and record $T_{10}/10$. Take $r = 2$, $b = 1$, and $\sigma = 1$, so we simulate the solution of

$dX = 2X\,dt + X\,dW, \qquad X(0) = 1.$

A computer program can produce such a table with the step size made much smaller, presumably resulting in better approximation properties. In fact, it is possible to consider kinds of convergence for the EM method comparable to the Strong Law of Large Numbers and the Weak Law of Large Numbers. See the Problems for examples.
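The distributional claim behind the coin-flip construction can also be checked directly: with $dt = 1/10$, the scaled sum $T_{10}/10$ should have mean $0$ and variance $1/10 = dt$, like a true Brownian increment. A sketch (the sample size is my choice):

```python
import numpy as np

# Check: the scaled coin-flip sum T_10/10 used for dW has mean 0 and
# variance 1/10 = dt, matching a Brownian increment of length dt.
rng = np.random.default_rng(0)
n = 200000
flips = np.where(rng.random((n, 10)) < 0.5, -1, 1)  # n runs of 10 fair flips
dW = flips.sum(axis=1) / 10.0                       # T_10 / 10 for each run

print(dW.mean(), dW.var())   # near 0 and near 0.1
```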

#### Discussion

The numerical approximation procedure using coin-ﬂipping makes it clear that the Euler-Maruyama method generates a random process. The value of the process depends on the time value and the coin-ﬂip sequence. Each generation of an approximation will be diﬀerent because the coin-ﬂip sequence is diﬀerent. The Euler-Maruyama method generates a stochastic process path approximation. To derive distributions and statistics about the process requires generating multiple paths, see the Problems for examples.

This shows that stochastic diﬀerential equations provide a way to deﬁne new stochastic processes. This is analogous to the notion that ordinary diﬀerential equations deﬁne new functions to study and use. In fact, one approach to developing calculus and the analysis of functions is to start with diﬀerential equations, use the Euler method to deﬁne approximations of solutions, and then to develop a theory to handle the passage to continuous variables. This approach is especially useful for a mathematical modeling viewpoint since the model often uses diﬀerential equations.
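The remark about defining functions through differential equations can be made concrete: Euler's method for $dx/dt = x$, $x(0) = 1$ with step $1/N$ produces $(1 + 1/N)^N$ at $t = 1$, recovering one classical definition of $e$. A small sketch:

```python
import math

# Euler's method for dx/dt = x, x(0) = 1, with N steps of size 1/N;
# the value at t = 1 is (1 + 1/N)^N, the classical limit definition of e.
N = 100000
x = 1.0
for _ in range(N):
    x += x * (1.0 / N)    # one Euler step

print(x, math.e)          # x approaches e as N grows
```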

This text follows the approach of starting with stochastic diﬀerential equations to describe a situation and numerically deﬁning new stochastic processes to model the situation. At certain points, we appeal to more rigorous mathematical theory to justify the modeling and approximation. One important justiﬁcation asserts that if we write a stochastic diﬀerential equation, then solutions exist and the stochastic diﬀerential equation always yields the same process under equivalent conditions. The Existence-Uniqueness Theorem shows that under reasonable modeling conditions stochastic diﬀerential equations do indeed satisfy this requirement.

Theorem 1 (Existence-Uniqueness). For the stochastic diﬀerential equation

$dX = G(t, X(t))\,dt + H(t, X(t))\,dW(t), \qquad X(t_0) = X_0,$

assume

1. Both $G\left(t,x\right)$ and $H\left(t,x\right)$ are continuous on $\left(t,x\right)\in \left[{t}_{0},T\right]×ℝ$.
2. The coeﬃcient functions $G$ and $H$ satisfy a Lipschitz condition:
$|G\left(t,x\right)-G\left(t,y\right)|+|H\left(t,x\right)-H\left(t,y\right)|\le K|x-y|.$

3. The coeﬃcient functions $G$ and $H$ satisfy a growth condition in the second variable
$|G\left(t,x\right){|}^{2}+|H\left(t,x\right){|}^{2}\le K\left(1+|x{|}^{2}\right)$

for all $t\in \left[{t}_{0},T\right]$ and $x\in ℝ$.

Then the stochastic diﬀerential equation has a strong solution on $\left[{t}_{0},T\right]$ that is continuous with probability $1$ and

$\sup_{t \in [t_0, T]} \mathbb{E}\left[X^2(t)\right] < \infty$

and for each given Wiener process $W\left(t\right)$, the corresponding strong solutions are pathwise unique which means that if $X$ and $Y$ are two strong solutions, then

$\mathbb{P}\left[\,\sup_{t \in [t_0, T]} |X(t) - Y(t)| = 0\right] = 1.$

See [4] for a precise deﬁnition of “strong solution” but essentially it means that for each given Wiener process $W\left(t\right)$ we can generate a solution to the SDE. Note that the coeﬃcient functions here are two-variable functions of both time $t$ and location $x$, which is more general than the functions considered in equation (1). The restrictions on the functions $G\left(t,x\right)$ and $H\left(t,x\right)$, especially the continuity condition, can be considerably relaxed and the theorem will still remain true.
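As a worked check of the hypotheses (not in the source, but standard), consider the linear SDE $dX = rX\,dt + \sigma X\,dW$ with $G(t,x) = rx$ and $H(t,x) = \sigma x$. Both coefficient functions are continuous, and

$|G(t,x) - G(t,y)| + |H(t,x) - H(t,y)| = (|r| + |\sigma|)\,|x - y|,$

so the Lipschitz condition holds with $K = |r| + |\sigma|$. Also

$|G(t,x)|^2 + |H(t,x)|^2 = (r^2 + \sigma^2)|x|^2 \le (r^2 + \sigma^2)\left(1 + |x|^2\right),$

so the growth condition holds as well. The theorem therefore guarantees a pathwise-unique strong solution, consistent with the explicit geometric Brownian motion solution quoted earlier.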

#### Sources

This section is adapted from: “An Algorithmic Introduction to the Numerical Simulation of Stochastic Diﬀerential Equations”, by Desmond J. Higham, in SIAM Review, Vol. 43, No. 3, pp. 525-546, 2001 and Financial Calculus: An introduction to derivative pricing by M. Baxter, and A. Rennie, Cambridge University Press, 1996, pages 52-62. The Existence-Uniqueness Theorem is adapted from An Introduction to Stochastic Processes with Applications to Biology, by L. J. S. Allen, Pearson Prentice-Hall, 2003, pages 342-343 and Numerical Solution of Stochastic Diﬀerential Equations, by Peter Kloeden and Eckhard Platen, Springer Verlag, 1992, pages 127-131.

_______________________________________________________________________________________________

### Algorithms, Scripts, Simulations

#### Algorithm

The scripts apply the EM method to simulate the solution of

$dX = rX\,dt + \sigma X\,dW, \qquad X(0) = b.$

The parameters $N$ for the number of steps in the EM method, $T$ for the ending time, and the stochastic differential equation parameters $r$ and $\sigma$ are set. Compute the time step and initialize the arrays holding the time steps and the solution simulation.

Using $M = 30N$, create a piecewise linear function $\hat{W}_M(t)$ using the approximation scripts in Approximation to Brownian Motion. The parameter $30$ is chosen according to the rule of thumb in the DeMoivre-Laplace Central Limit Theorem, Central Limit Theorem. Then each time increment will have $30$ coin flips, sufficient according to the rule of thumb to guarantee a scaled sum that is appropriately normally distributed.
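The rule of thumb can itself be checked numerically: the sum of $30$ fair $\pm 1$ flips, scaled by $1/\sqrt{30}$, should be approximately standard normal. A sketch (sample size is my choice):

```python
import numpy as np

# Check of the rule of thumb: the sum of 30 fair +/-1 coin flips, scaled
# by 1/sqrt(30), should look approximately standard normal.
rng = np.random.default_rng(0)
n = 100000
flips = np.where(rng.random((n, 30)) < 0.5, -1, 1)  # n runs of 30 flips
Z = flips.sum(axis=1) / np.sqrt(30.0)               # scaled sums

# Mean, variance, and third moment near the standard normal values 0, 1, 0.
print(Z.mean(), Z.var(), np.mean(Z**3))
```

The low moments match the standard normal well; the remaining discrepancy is the small discreteness error that the rule of thumb deems acceptable.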

Then loop over the number of steps using the EM algorithm

$X_j = X_{j-1} + G(X_{j-1})\,dt + H(X_{j-1})\left(W(t_{j-1} + dt) - W(t_{j-1})\right)$

and plot the resulting simulation.

#### Scripts

R
r <- -1                                 # growth/decay rate
sigma <- 0.5                            # relative standard deviation
b <- 3                                  # initial condition

M <- 100                                # number of steps for EM method to take
T <- 1                                  # maximum time
h <- T/M                                # time step
t <- seq(length=M+1, from=0, by=h)      # t is the vector [0 1h 2h 3h ... Mh]
X <- array(0, c(M+1))                   # place to store locations

N <- 30*(M+1)                           # number of steps for the Brownian Motion approx

p <- 0.5
S <- array(0, c(N+1))
rw <- cumsum( 2*(runif(N) <= p) - 1 )
S[2:(N+1)] <- rw

WcaretN <- function(z) {
    Delta <- T/N

    # add 1 since arrays are 1-based
    prior <- floor(z/Delta) + 1
    subsequent <- ceiling(z/Delta) + 1

    sqrt(Delta)*(S[prior] + ((z/Delta + 1) - prior)*(S[subsequent] - S[prior]))
}

X[1] <- b
for (i in 1:M) {
    X[i+1] <- X[i] + r*X[i]*h + sigma*X[i]*(WcaretN(t[i]+h) - WcaretN(t[i]))
}

plot(t, X, "l", xlim=c(0, T), ylim=c(X[1]-exp(abs(r)*T+1), X[1]+exp(abs(r)*T+1)))
title(main=paste("r =", r, "sigma =", sigma, "steps =", M))
Octave
r = -1;          # growth/decay rate
sigma = 0.5;     # relative standard deviation
b = 3;           # initial value

M = 100;         # number of steps for EM method to take
global T = 1;    # maximum time
h = T/M;         # time step
t = (0:h:T);     # t is the vector [0 1h 2h 3h ... Mh]
X = zeros(size(t));   # prepare place to store locations

global N = 30*(M+1);  # number of steps for the Brownian Motion approximation
global S;
p = 1/2;
S = zeros(N+1, 1);
S(2:N+1) = cumsum( 2*(rand(N,1) <= p) - 1 );

function retval = WcaretN(z)
  global N;
  global T;
  global S;
  Delta = T/N;

  # add 1 since arrays are 1-based
  prior = floor(z/Delta) + 1;
  subsequent = ceil(z/Delta) + 1;

  retval = sqrt(Delta)*(S(prior) + ((z/Delta + 1) - prior).*(S(subsequent) - S(prior)));
endfunction

X(1) = b;           # initial value at t = 0
for i = 1:M         # start taking steps
  X(i+1) = X(i) + r*X(i)*h + sigma*X(i)*( WcaretN(t(i)+h) - WcaretN(t(i)) );
end;

plot(t,X)
axis([0 T X(1)-exp(abs(r)*T+1) X(1)+exp(abs(r)*T+1)]);   # set axis limits
grid on;
title(["r = ", num2str(r), ", sigma = ", num2str(sigma), ", steps = ", num2str(M)]);
Perl
#!/usr/bin/perl
use PDL;
use PDL::NiceSlice;

$r = -1;          # growth/decay rate
$sigma = 0.5;     # standard deviation
$b = 3;           # initial value

$M = 100;         # number of steps for EM method to take
$T = 1;           # maximum time
$h = $T/$M;       # time step
$t = zeros($M + 1)->xlinvals(0, $T);   # vector of [0, 1h, 2h, 3h ... Mh]
$X = zeros($M + 1);

$N = 30*($M + 1); # number of steps for the Brownian Motion

$p = 0.5;
$S = zeros($N + 1);    # the random walk
$S(1:$N) .= cumusumover( 2*(random($N) <= $p) - 1 );

# function WcaretN interpolating the random walk
sub WcaretN {
    my $x = shift @_;
    $Delta = $T/$N;

    $prior      = floor($x/$Delta);
    $subsequent = ceil($x/$Delta);

    $retval = sqrt($Delta)
        * ( $S($prior)
            + ( ($x/$Delta) - $prior )
            * ( $S($subsequent) - $S($prior) ) );
}

$X(0) .= $b;      # initial value at t = 0
for ($i = 0; $i <= $M - 1; $i++) {    # start taking steps
    $X($i+1) .= $X($i) + $r*$X($i)*$h + $sigma*$X($i)*( WcaretN($t($i)+$h) - WcaretN($t($i)) );
}

# file output to use with an external plotting program
# such as gnuplot, R, octave, etc.
# Start gnuplot, then from the gnuplot prompt
#    plot "stochasticdes.dat" with lines

open( F, ">stochasticdes.dat" ) || die "cannot write: $!";
foreach $j ( 0 .. $M ) {
    print F $t->range( [$j] ), " ", $X->range( [$j] ), "\n";
}
close(F);
SciPy
import numpy as np

r = -1.        # growth/decay rate
sigma = 0.5    # standard deviation
b = 3.         # initial value

M = 100        # number of steps for EM method to take
T = 1.         # maximum time, note type real
h = T/M        # time step
t = np.linspace(0, T, M+1)   # vector of [0, 1h, 2h, 3h ... Mh]
X = np.zeros(M+1)

N = 30*(M+1)   # number of steps for the Brownian Motion

# the random walk
p = 0.5
S = np.zeros(N+1)
S[1:N+1] = np.cumsum( 2*(np.random.random(N) <= p) - 1 )

def WcaretN(x):
    Delta = T/N                 # T real coerces Delta real
    prior = int(np.floor(x/Delta))
    subsequent = int(np.ceil(x/Delta))
    return np.sqrt(Delta)*(S[prior] + (x/Delta - prior)*(S[subsequent] - S[prior]))

X[0] = b                        # initial value at t = 0
for i in range(0, M):
    X[i+1] = X[i] + r*X[i]*h + sigma*X[i]*( WcaretN(t[i] + h) - WcaretN(t[i]) )

# optional file output to use with an external plotting program
# such as gnuplot, R, octave, etc.
# Start gnuplot, then from the gnuplot prompt
#    plot "stochasticdes.dat" with lines
f = open("stochasticdes.dat", "w")
for j in range(0, M+1):
    f.write(str(t[j]) + " " + str(X[j]) + "\n")

f.close()

__________________________________________________________________________

### Problems to Work for Understanding

1. Graph an approximation of a multiple of Brownian motion with drift with parameters $b=2$, $r=1∕2$ and $\sigma =2$ on the interval $\left[0,5\right]$ in two ways.
1. Flip a coin $25$ times, recording whether it comes up Heads or Tails each time. Score $Y_i = +1$ for each Heads and $Y_i = -1$ for each Tails, and keep track of the accumulated sum $T_n = \sum_{i=1}^{n} Y_i$ for $n = 1, \dots, 25$. Using $N = 5$, compute the rescaled approximation $\hat{W}_5(t) = (1/\sqrt{5})\,T_{5t}$ at the values $t = 0, 1/5, 2/5, 3/5, \dots, 24/5, 5$ on $[0, 5]$. Finally compute and graph the value of $X(t) = b + rt + \sigma\hat{W}_5(t)$.
2. Using the same values of $\hat{W}_5(t)$ to form approximations of the increments $W(dt)$, compute the values of the solution of the stochastic differential equation $dX = r\,dt + \sigma\,dW, \quad X(0) = b$ on the interval $[0, 5]$.
2. Repeat the previous problem with parameters $b=2$, $r=-1∕2$ and $\sigma =2$.
3. Repeat the previous problem with parameters $b=2$, $r=1∕2$ and $\sigma =-2$.
4. Modify the scripts to use normally distributed random deviates and then simulate the solution of the stochastic differential equation
$dX(t) = X(t)\,dt + 2X(t)\,dW(t)$

on the interval $\left[0,1\right]$ with initial condition $X\left(0\right)=1$ and step size $\Delta t=1∕10$.

5. Modify the scripts to use normally distributed random deviates and then simulate the solution of the stochastic differential equation
$dX(t) = tX(t)\,dt + 2X(t)\,dW(t)$

on the interval $[0, 1]$ with initial condition $X(0) = 1$ and step size $\Delta t = 1/10$. Note the difference from the previous problem: now the multiplier of the $dt$ term is a function of time.

6. Modify the scripts to use general functions $G(x)$ and $H(x)$ and then apply them to the SDE for the
1. Ornstein-Uhlenbeck process:
$dX = \theta(\mu - X(t))\,dt + \sigma\,dW(t), \qquad X(t_0) = X_0.$

2. Cox-Ingersoll-Ross process:
$dX = \theta(\mu - X(t))\,dt + \sigma\sqrt{X(t)}\,dW(t), \qquad X(t_0) = X_0.$

3. the modified Cox-Ingersoll-Ross process:
$dX = -\theta X(t)\,dt + \theta\sqrt{X(t)}\,dW(t), \qquad X(t_0) = X_0.$

7. Write a program with parameters $r$, $\sigma$, $b$, $T$ and $N$ (so $dt = T/N$) that computes and graphs the approximation of the solution of the stochastic differential equation
$dX(t) = rX(t)\,dt + \sigma X(t)\,dW(t)$

with $X\left(0\right)=b$ on the interval $\left[0,T\right]$. Apply the program to the stochastic diﬀerential equation with $r=2$, $\sigma =1$, $b=1$, and $N={2}^{6},{2}^{7},{2}^{8}$ on the interval $\left[0,1\right]$.

8. Generalize the program from the previous problem to include a parameter $M$ for the number of sample paths computed. Then using this program on the interval $[0, 1]$ with $M = 1000$ and $N = 2^8$, compute $\mathbb{E}\left[|X_N - X(1)|\right]$, where $X(1) = b e^{r - \frac{1}{2}\sigma^2 + \sigma\hat{W}_{2^8}(1)}$.
9. Using the program from the previous problem with $M = 1000$ and $N = 2^5, 2^6, 2^7, 2^8, 2^9$, compute $\mathbb{E}\left[|X_N - X(1)|\right]$, where $X(1) = b e^{r - \frac{1}{2}\sigma^2 + \sigma\hat{W}_{2^9}(1)}$. Then for the $5$ values of $N$, make a log-log plot of $\mathbb{E}\left[|X_N - X(1)|\right]$ on the vertical axis against $\Delta t = 1/N$ on the horizontal axis. Using the slope of the resulting best-fit line, experimentally determine the order of convergence $\gamma$ so that
$\mathbb{E}\left[|X_N - X(1)|\right] \le C\,(\Delta t)^\gamma.$

__________________________________________________________________________

### References

[1]   Linda J. S. Allen. An Introduction to Stochastic Processes with Applications to Biology. Pearson Prentice-Hall, 2003.

[2]   M. Baxter and A. Rennie. Financial Calculus: An introduction to derivative pricing. Cambridge University Press, 1996. HG 6024 A2W554.

[3]   Desmond J. Higham. An algorithmic introduction to numerical simulation of stochastic diﬀerential equations. SIAM Review, 43(3):525–546, 2001.

[4]   P. Kloeden and E. Platen. Numerical Solution of Stochastic Diﬀerential Equations, volume 23 of Stochastic Modelling and Applied Probability. Springer, 1992.

__________________________________________________________________________

1. Matlab program files for Stochastic Differential Equations. Offers a number of MATLAB routines for stochastic differential equations.
