Steven R. Dunbar
Department of Mathematics
203 Avery Hall
University of Nebraska-Lincoln
Lincoln, NE 68588-0130
http://www.math.unl.edu
Voice: 402-472-3731
Fax: 402-472-8466

Stochastic Processes and
Advanced Mathematical Finance

__________________________________________________________________________

Stochastic Differential Equations and the Euler-Maruyama Method

_______________________________________________________________________

Note: These pages are prepared with MathJax. MathJax is an open source JavaScript display engine for mathematics that works in all browsers. See http://mathjax.org for details on supported browsers, accessibility, copy-and-paste, and other features.

_______________________________________________________________________________________________

Rating

Mathematically Mature: may contain mathematics beyond calculus with proofs.

_______________________________________________________________________________________________

Section Starter Question

Explain how to use a slope-field diagram to solve the ordinary differential equation

dx/dt = x.

How would you turn that process into an algorithm to numerically compute an approximate solution without a diagram?

_______________________________________________________________________________________________

Key Concepts

  1. We can numerically simulate the solution to stochastic differential equations with an analog to Euler’s method, called the Euler-Maruyama (EM) method.

__________________________________________________________________________

Vocabulary

  1. A stochastic differential equation is a mathematical equation relating a stochastic process to its local deterministic and random components. The goal is to extend the relation to find the stochastic process. Under mild conditions on the relationship, and with a specifying initial condition, solutions of stochastic differential equations exist and are unique.
  2. The Euler-Maruyama (EM) method is a numerical method for simulating the solutions of a stochastic differential equation based on the definition of the Itô stochastic integral: Given
    dX(t) = G(X(t))dt + H(X(t))dW(t),X(t0) = X0,

    and a step size dt, we approximate and simulate with

    X_j = X_{j-1} + G(X_{j-1}) dt + H(X_{j-1}) (W(t_{j-1} + dt) - W(t_{j-1})).

    A minimal code sketch of this recursion follows this list.

  3. Extensions and variants of standard Brownian motion defined through stochastic differential equations are Brownian motion with drift, scaled Brownian motion, and geometric Brownian motion.
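To make the recursion in item 2 concrete, here is a minimal Python sketch (not one of the scripts later in this section). The function name euler_maruyama and its arguments are illustrative assumptions, and for brevity the sketch generates the Brownian increments directly as normal random variates scaled by √dt, the alternative to coin flips mentioned in the numerical discussion below.

import numpy as np

# A minimal sketch of the Euler-Maruyama recursion for dX = G(X) dt + H(X) dW.
def euler_maruyama(G, H, X0, t0, T, n, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    dt = (T - t0) / n
    t = t0 + dt * np.arange(n + 1)
    X = np.empty(n + 1)
    X[0] = X0
    for j in range(1, n + 1):
        dW = np.sqrt(dt) * rng.standard_normal()   # W(t_{j-1} + dt) - W(t_{j-1})
        X[j] = X[j - 1] + G(X[j - 1]) * dt + H(X[j - 1]) * dW
    return t, X

# Example call: Brownian motion with drift, dX = r dt + dW, X(0) = b,
# with the illustrative values r = 2 and b = 1.
t, X = euler_maruyama(G=lambda x: 2.0, H=lambda x: 1.0, X0=1.0, t0=0.0, T=1.0, n=100)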

__________________________________________________________________________

Mathematical Ideas

Stochastic Differential Equations: Symbolically

The straight line segment is the building block of differential calculus. The basic idea behind differential calculus is that differentiable functions, no matter how difficult their global behavior, are locally approximated by straight line segments. In particular, this is the idea behind Euler’s method for approximating differentiable functions defined by differential equations.
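As a reminder of how Euler's method turns the local straight-line idea into an algorithm, here is a minimal Python sketch for the starter equation dx/dt = x (the initial condition x(0) = 1 and the step size 0.1 are illustrative choices, not from the text):

# Euler's method for dx/dt = x on [0, 1] with x(0) = 1 and step size dt = 0.1
dt = 0.1
t, x = 0.0, 1.0
for _ in range(10):
    x = x + x * dt        # follow the local slope f(t, x) = x for one step
    t = t + dt
print(t, x)               # about 2.5937, versus the exact value e = 2.71828...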

We know that rescaling (“zooming in” on) Brownian motion does not produce a straight line; it produces another image of Brownian motion. This self-similarity is ideal for an infinitesimal building block; for instance, we could build global Brownian motion out of lots of local “chunks” of Brownian motion. This suggests we could build other stochastic processes out of suitably scaled Brownian motion. In addition, if we include straight line segments we can overlay the behavior of differentiable functions onto the stochastic processes as well. Thus, straight line segments and “chunks” of Brownian motion are the building blocks of stochastic calculus.

With stochastic differential calculus, we can build new stochastic processes. We do this by specifying how to build the new stochastic processes locally from our base deterministic function, the straight line, and our base stochastic process, standard Brownian motion. We write the local change in value of the stochastic process over a time interval of (infinitesimal) length dt as

dX = G(X(t))dt + H(X(t))dW(t),X(t0) = X0. (1)

Note that we are not allowed to write

dX/dt = G(X(t)) + H(X(t)) dW/dt,  X(t0) = X0

since standard Brownian motion is nowhere differentiable with probability 1. Actually, the informal stochastic differential equation (1) is a compact way of writing a rigorously defined, equivalent implicit Itô integral equation. Since we do not have the required rigor, we will approach the stochastic differential equation intuitively.

The stochastic differential equation says the initial point (t0,X0) is specified, perhaps with X0 a random variable with a given distribution. A deterministic component at each point has a slope determined through G at that point. In addition, some random perturbation affects the evolution of the process. The random perturbation is normally distributed with mean 0. The variance of the random perturbation is (H(X(t)))2 at (t,X(t)). This is a simple expression of a Stochastic Differential Equation (SDE) which determines a stochastic process, just as an Ordinary Differential Equation (ODE) determines a differentiable function. We extend the process with the incremental change information and repeat. This is an expression in words of the Euler-Maruyama method for numerically simulating the stochastic differential expression.

Example. A very simple stochastic differential equation is

dX = rdt + dW,X(0) = b

with r a constant. Take a deterministic initial condition to be X(0) = b. The new process is the stochastic extension of the differential equation expression of a straight line. The new stochastic process X is drifting or trending at constant rate r with a random variation due to Brownian motion perturbations around that trend. We will later show explicitly that the solution of this SDE is X(t) = b + rt + W(t), although it seems intuitively clear that this should be the process. We will call this Brownian motion with drift.

Example. Another very simple stochastic differential equation is

dX = σdW,X(0) = b

This stochastic differential equation says that the process is evolving as a multiple of standard Brownian motion. The solution may be easily guessed as X(t) = σW(t) which has variance σ2t on increments of length t. Sometimes the new process is called Brownian motion (in contrast to standard Brownian motion which has variance t on increments of length t).

We combine the previous two examples to consider

dX = rdt + σdW,X(0) = b

which has solution X(t) = b + rt + σW(t), a multiple of Brownian motion with drift r started at b. Sometimes this extension of standard Brownian motion is called Brownian motion. Some authors consider this process directly instead of the more special case we considered in the previous chapter.
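As a quick consistency check of the claimed solution, compute an increment of X(t) = b + rt + σW(t) over a time step of length dt:

X(t + dt) - X(t) = r dt + σ(W(t + dt) - W(t)),

so the increment has a deterministic part r dt and a random part that is normally distributed with mean 0 and variance σ² dt, which is exactly the local description dX = r dt + σ dW.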

Example. The next simplest and first non-trivial stochastic differential equation is

dX = XdW.

Here the stochastic differential equation says that the process is evolving like Brownian motion with a variance which is the square of the process value. When the process is small, the variance is small; when the process is large, the variance is large. Expressing the stochastic differential equation as dX/X = dW we may say that the relative change acts like standard Brownian motion. The resulting stochastic process is called geometric Brownian motion and it will figure extensively later as a model of security prices.

Example. The next simplest stochastic differential equation is

dX = rXdt + σXdW,X(0) = b.

Here the stochastic differential equation says that the growth of the process at a point is proportional to the process value, with a random perturbation proportional to the process value. Again looking ahead, we could write the differential equation as dX/X = r dt + σ dW and interpret it to say the relative rate of increase is proportional to the time observed together with a random perturbation like a Brownian increment corresponding to the length of time. We will show later that the analytic expression for the stochastic process defined by this SDE is b exp((r - σ²/2)t + σW(t)).
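Looking ahead, a short numerical check (a Python sketch with illustrative parameter values, separate from the scripts later in this section) drives the EM recursion and the quoted closed-form expression with the same simulated Brownian increments; the two paths agree closely when the step size is small:

import numpy as np

rng = np.random.default_rng(3)
r, sigma, b = 2.0, 1.0, 1.0                    # illustrative parameter values
n, T = 1000, 1.0
dt = T / n
dW = np.sqrt(dt) * rng.standard_normal(n)      # Brownian increments
W = np.concatenate(([0.0], np.cumsum(dW)))     # Brownian path at the grid points
t = dt * np.arange(n + 1)

# Euler-Maruyama path driven by the same increments
X = np.empty(n + 1)
X[0] = b
for j in range(1, n + 1):
    X[j] = X[j - 1] + r * X[j - 1] * dt + sigma * X[j - 1] * dW[j - 1]

# Closed-form expression quoted in the text
Y = b * np.exp((r - 0.5 * sigma**2) * t + sigma * W)

print(np.max(np.abs(X - Y)))   # the discrepancy shrinks as dt is reduced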

Stochastic Differential Equations: Numerically

The Euler-Maruyama method produces a numerical sample path, in direct analogy with the way the Euler method produces a numerical approximation of the solution of an ordinary differential equation.

The formula for the Euler-Maruyama (EM) method is based on the definition of the Itô stochastic integral:

X_j = X_{j-1} + G(X_{j-1}) dt + H(X_{j-1}) (W(t_{j-1} + dt) - W(t_{j-1})),  t_j = t_{j-1} + dt.

Note that the initial conditions X0 and t0 set the starting point.

In this text, we use coin-flipping sequences of an appropriate length scaled to create an approximation to W(t) just as in the section Approximation to Brownian Motion. The coin-flipping sequences emphasize the discrete nature of the simulations with an easily constructed random process. This is consistent with the approach of this text, which always uses coin-flipping sequences to create random processes. Note that since the increments W(t_{j-1} + dt) - W(t_{j-1}) are independent and identically distributed, we will use independent coin-flip sequences to generate the approximation of the increments. The EM method could instead use independent normal random variates directly to obtain the increments W(t_{j-1} + dt) - W(t_{j-1}), which would be easier and more efficient. The exercises modify the example scripts to use independent normal random variates directly.

Then

dW = W(t_{j-1} + dt) - W(t_{j-1}) = W(dt) ≈ Ŵ_N(dt) = Ŵ(N dt)/√N = √(dt) · Ŵ(N dt)/√(N dt).

The first equality above is the definition of an increment, the second equality means the random variables W(t_{j-1} + dt) - W(t_{j-1}) and W(dt) have the same distribution because of the definition of standard Brownian motion, which specifies that increments of equal length are normally distributed with variance equal to the increment length. The approximate equality occurs because of the approximation of Brownian motion by coin-flipping sequences. We generate the approximations using a random number generator, but we could as well use actual coin-flipping. In Table 1 the generation of the sequences is not recorded, only the summed and scaled (independently sampled) outcomes. For convenience, take dt = 1/10, N = 100, so we need Ŵ(100 · (1/10))/√100 = T_10/10. Then to obtain the entries in the column labeled dW in the table we flip a coin 10 times and record T_10/10. Take r = 2, b = 1, and σ = 1, so we simulate the solution of

dX = 2Xdt + XdW,X(0) = 1.

A computer program can produce such a table with the step size made much smaller, presumably resulting in better approximation properties. In fact, it is possible to consider kinds of convergence for the EM method comparable to the Strong Law of Large Numbers and the Weak Law of Large Numbers. See the Problems for examples.
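For example, a minimal Python sketch (hypothetical, separate from the scripts in the later section) produces rows of the same form as Table 1: at each step it sums ten coin flips and scales by 1/10 to get dW = T_10/10, then applies the EM update for dX = 2X dt + X dW:

import numpy as np

rng = np.random.default_rng(7)
r, sigma, b = 2.0, 1.0, 1.0      # dX = 2X dt + X dW, X(0) = 1
dt, steps = 0.1, 10              # dt = 1/10 on the interval [0, 1]

X = b
print(" j  t_j     X_j   2 X_j dt     dW   X_j dW   X_{j+1}")
for j in range(steps):
    flips = rng.choice([-1, 1], size=10)   # ten coin flips for this step
    dW = flips.sum() / 10.0                # T_10 / 10, as in the text
    drift = r * X * dt
    diffusion = sigma * X * dW
    X_next = X + drift + diffusion
    print(f"{j:2d} {j*dt:4.1f} {X:7.2f} {drift:10.2f} {dW:6.1f} {diffusion:8.2f} {X_next:9.2f}")
    X = X_next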


Table 1: Euler-Maruyama simulation of dX = 2X dt + X dW, X(0) = 1, with dt = 1/10 and coin-flip increments dW = T_10/10.

 j   t_j    X_j   2 X_j dt     dW   X_j dW   2 X_j dt + X_j dW   X_j + 2 X_j dt + X_j dW
 0   0.0    1     0.2           0     0       0.2                  1.2
 1   0.1    1.2   0.24          0.2   0.24    0.48                 1.68
 2   0.2    1.68  0.34         -0.2  -0.34    0.0                  1.68
 3   0.3    1.68  0.34          0.4   0.67    1.01                 2.69
 4   0.4    2.69  0.54         -0.2  -0.54    0.0                  2.69
 5   0.5    2.69  0.54          0     0       0.54                 3.23
 6   0.6    3.23  0.65          0.4   1.29    1.94                 5.16
 7   0.7    5.16  1.03          0.4   2.06    3.1                  8.26
 8   0.8    8.26  1.65          0.4   3.3     4.95                13.21
 9   0.9   13.21  2.64          0     0       2.64                15.85
10   1.0   15.85

[Figure: stochasticdes-1.png]

Figure 1: Simulation with the Euler-Maruyama method of a process defined by a stochastic differential equation.

Discussion

The numerical approximation procedure using coin-flipping makes it clear that the Euler-Maruyama method generates a random process. The value of the process depends on the time value and the coin-flip sequence. Each generation of an approximation will be different because the coin-flip sequence is different. The Euler-Maruyama method generates a stochastic process path approximation. To derive distributions and statistics about the process requires generating multiple paths, see the Problems for examples.
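For instance, a minimal Python sketch (using the illustrative parameters r = 2, σ = 1, b = 1 from the worked example, and normal increments in place of coin flips for brevity) advances many EM paths at once and summarizes the values X(T) across the paths:

import numpy as np

rng = np.random.default_rng(11)
r, sigma, b = 2.0, 1.0, 1.0
n, T, M = 100, 1.0, 1000        # n EM steps per path, M independent paths
dt = T / n

X = np.full(M, b)               # advance all M paths in lockstep
for _ in range(n):
    dW = np.sqrt(dt) * rng.standard_normal(M)   # one increment per path
    X = X + r * X * dt + sigma * X * dW

# sample statistics of X(T) across the M simulated paths
print(X.mean(), X.std())        # the sample mean should be near b*exp(r*T)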

This shows that stochastic differential equations provide a way to define new stochastic processes. This is analogous to the notion that ordinary differential equations define new functions to study and use. In fact, one approach to developing calculus and the analysis of functions is to start with differential equations, use the Euler method to define approximations of solutions, and then to develop a theory to handle the passage to continuous variables. This approach is especially useful for a mathematical modeling viewpoint since the model often uses differential equations.

This text follows the approach of starting with stochastic differential equations to describe a situation and numerically defining new stochastic processes to model the situation. At certain points, we appeal to more rigorous mathematical theory to justify the modeling and approximation. One important justification asserts that if we write a stochastic differential equation, then solutions exist and the stochastic differential equation always yields the same process under equivalent conditions. The Existence-Uniqueness Theorem shows that under reasonable modeling conditions stochastic differential equations do indeed satisfy this requirement.

Theorem 1 (Existence-Uniqueness). For the stochastic differential equation

dX = G(t,X(t))dt + H(t,X(t))dW(t),X(t0) = X0

assume

  1. Both G(t,x) and H(t,x) are continuous for (t,x) ∈ [t0,T] × ℝ.
  2. The coefficient functions G and H satisfy a Lipschitz condition:
    |G(t,x) - G(t,y)| + |H(t,x) - H(t,y)| ≤ K|x - y|.

  3. The coefficient functions G and H satisfy a growth condition in the second variable
    |G(t,x)|² + |H(t,x)|² ≤ K(1 + |x|²)

    for all t ∈ [t0,T] and x ∈ ℝ.

Then the stochastic differential equation has a strong solution on [t0,T] that is continuous with probability 1 and

sup_{t∈[t0,T]} 𝔼[X²(t)] < ∞

and for each given Wiener process W(t), the corresponding strong solutions are pathwise unique which means that if X and Y are two strong solutions, then

ℙ[ sup_{t∈[t0,T]} |X(t) - Y(t)| = 0 ] = 1.

See [4] for a precise definition of “strong solution” but essentially it means that for each given Wiener process W(t) we can generate a solution to the SDE. Note that the coefficient functions here are two-variable functions of both time t and location x, which is more general than the functions considered in equation (1). The restrictions on the functions G(t,x) and H(t,x), especially the continuity condition, can be considerably relaxed and the theorem will still remain true.
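For example, for the geometric Brownian motion equation dX = rX dt + σX dW considered earlier, the coefficient functions G(t,x) = rx and H(t,x) = σx are continuous, and

|G(t,x) - G(t,y)| + |H(t,x) - H(t,y)| = (|r| + |σ|)|x - y|,

|G(t,x)|² + |H(t,x)|² = (r² + σ²)x² ≤ (r² + σ²)(1 + |x|²),

so the Lipschitz and growth conditions hold with K = max(|r| + |σ|, r² + σ²). The theorem then guarantees a pathwise unique strong solution on [t0,T], consistent with the explicit solution quoted earlier.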

Sources

This section is adapted from: “An Algorithmic Introduction to Numerical Simulation of Stochastic Differential Equations”, by Desmond J. Higham, in SIAM Review, Vol. 43, No. 3, pp. 525-546, 2001, and Financial Calculus: An introduction to derivative pricing by M. Baxter and A. Rennie, Cambridge University Press, 1996, pages 52-62. The Existence-Uniqueness Theorem is adapted from An Introduction to Stochastic Processes with Applications to Biology, by L. J. S. Allen, Pearson Prentice-Hall, 2003, pages 342-343, and Numerical Solution of Stochastic Differential Equations, by Peter Kloeden and Eckhard Platen, Springer Verlag, 1992, pages 127-131.

_______________________________________________________________________________________________

Algorithms, Scripts, Simulations

Algorithm

The scripts apply the EM method to simulate the solution of

dX = rXdt + σXdW,X(0) = b.

The parameters M for the number of steps in the EM method, T for the ending time, and the stochastic differential equation parameters r and σ are set. Compute the time step h = T/M and initialize the arrays holding the time steps and the simulated solution.

Using N = 30(M + 1), create a piecewise linear function Ŵ_N(t) using the approximation scripts in Approximation to Brownian Motion. The parameter 30 is chosen according to the rule of thumb from the DeMoivre-Laplace Central Limit Theorem in the section Central Limit Theorem. Then each time increment will have 30 coin flips, sufficient according to the rule of thumb to guarantee a scaled sum that is approximately normally distributed.

Then loop over the number of steps using the EM algorithm

X_j = X_{j-1} + G(X_{j-1}) dt + H(X_{j-1}) (W(t_{j-1} + dt) - W(t_{j-1}))

and plot the resulting simulation.

Scripts

Geogebra

GeoGebra.

R

R script for stochasticdes.R.

r <- -1                                 # growth/decay rate
sigma <- 0.5                            # relative standard deviation
b <- 3                                  # initial condition

M <- 100                                # number of steps for EM method to take
T <- 1                                  # maximum time
h <- T/M                                # time step
t <- seq(length.out=M+1, from=0, by=h)  # t is the vector [0 1h 2h 3h ... Mh]
X <- array(0, c(M+1))                   # place to store locations

N <- 30*(M+1)                           # number of steps for the Brownian Motion approximation

p <- 0.5
S <- array(0, c(N+1))
rw <- cumsum( 2*(runif(N) <= p) - 1 )   # coin-flip random walk
S[2:(N+1)] <- rw

# WcaretN(z): piecewise linear interpolation of the scaled random walk,
# approximating the value W(z) of standard Brownian motion
WcaretN <- function(z) {
    Delta <- T/N

    # add 1 since arrays are 1-based
    prior <- floor(z/Delta) + 1
    subsequent <- ceiling(z/Delta) + 1

    sqrt(Delta)*(S[prior] + ((z/Delta + 1) - prior)*(S[subsequent] - S[prior]))
}

X[1] <- b
for (i in 1:M) {
    X[i+1] <- X[i] + r*X[i]*h + sigma*X[i]*(WcaretN(t[i]+h) - WcaretN(t[i]))
}

plot(t, X, "l", xlim=c(0, T), ylim=c(X[1]-exp(abs(r)*T+1), X[1]+exp(abs(r)*T+1)))
title(main=paste("r = ", r, "sigma = ", sigma, "steps =", M))
Octave

Octave script for stochasticdes.m.

r = -1;          # growth/decay rate
sigma = 0.5;     # relative standard deviation
b = 3;           # initial value

M = 100;         # number of steps for EM method to take
global T = 1;    # maximum time
h = T/M;         # time step
t = (0:h:T);     # t is the vector [0 1h 2h 3h ... Mh]
X = zeros(size(t));  # prepare place to store locations

global N = 30*(M+1); # number of steps for the Brownian Motion approximation
global S;
p = 1/2;
S = zeros(N+1, 1);
S(2:N+1) = cumsum( 2 * (rand(N,1) <= p) - 1);  # coin-flip random walk

# WcaretN(z): piecewise linear interpolation of the scaled random walk,
# approximating the value W(z) of standard Brownian motion
function retval = WcaretN(z)
  global N;
  global T;
  global S;
  Delta = T/N;

  # add 1 since arrays are 1-based
  prior = floor(z/Delta) + 1;
  subsequent = ceil(z/Delta) + 1;

  retval = sqrt(Delta)*(S(prior) + ((z/Delta+1) - prior).*(S(subsequent)-S(prior)));
endfunction

X(1) = b;           # initial value at t = 0
for i = 1:M         # start taking steps
  X(i+1) = X(i) + r*X(i)*h + sigma*X(i)*( WcaretN(t(i)+h) - WcaretN(t(i)) );
end;

plot(t,X)           # plot the simulated path
axis([0 T X(1)-exp(abs(r)*T+1) X(1)+exp(abs(r)*T+1)]);   # set axis limits
grid on;
title(["r = ", num2str(r), ", sigma = ", num2str(sigma), ", steps = ", num2str(M)]);
Perl

Perl PDL script for stochasticdes.pl.

#!/usr/bin/perl
use PDL;
use PDL::NiceSlice;

$r = -1;          # growth/decay rate
$sigma = 0.5;     # standard deviation
$b = 3;           # initial value

$M = 100;         # number of steps for EM method to take
$T = 1;           # maximum time
$h = $T/$M;       # time step
$t = zeros( $M + 1 )->xlinvals( 0, $T ); # vector of [0, 1h, 2h, 3h...Mh]
$X = zeros( $M + 1 );

$N = 30*($M+1);   # number of steps for the Brownian Motion approximation

$p = 0.5;
$S = zeros( $N + 1 );   # the coin-flip random walk
$S ( 1 : $N ) .= cumusumover( 2 * ( random($N) <= $p ) - 1 );

# WcaretN: piecewise linear interpolation of the scaled random walk,
# approximating the value W(x) of standard Brownian motion
sub WcaretN {
    my $x = shift @_;
    $Delta = $T / $N;

    $prior      = floor( $x / $Delta );
    $subsequent = ceil( $x / $Delta );

    $retval =
        sqrt($Delta)
        * ( $S ($prior)
            + ( ( $x / $Delta ) - $prior )
            * ( $S ($subsequent) - $S ($prior) ) );
}

$X(0) .= $b;    # initial value at t = 0
for ($i=0; $i <= $M-1; $i++) {          # start taking steps
  $X($i+1) .= $X($i)+$r*$X($i)*$h+$sigma*$X($i)*( WcaretN($t($i)+$h)-WcaretN($t($i)) );
}

# file output to use with external plotting programming
# such as gnuplot, R, octave, etc.
# Start gnuplot, then from gnuplot prompt
#    plot "stochasticdes.dat" with lines

open( F, ">stochasticdes.dat" ) || die "cannot write: $! ";
foreach $j ( 0 .. $M ) {
    print F $t->range( [$j] ), " ", $X->range( [$j] ), "\n";
}
close(F);
SciPy

Scientific Python script for stochasticdes.py.

import numpy as np

r = -1.          # growth/decay rate
sigma = 0.5      # standard deviation
b = 3.           # initial value

M = 100          # number of steps for EM method to take
T = 1.           # maximum time, note type real
h = T/M          # time step
t = np.linspace(0, T, M+1)   # vector of [0, 1h, 2h, 3h ... Mh]
X = np.zeros(M+1)

N = 30*(M+1)     # number of steps for the Brownian Motion approximation

# the coin-flip random walk
p = 0.5
S = np.zeros(N+1)
S[1:N+1] = np.cumsum(2*(np.random.random(N) <= p) - 1)

def WcaretN(x):
    """Piecewise linear interpolation of the scaled random walk, approximating W(x)."""
    Delta = T/N                 # T real coerces Delta real
    prior = np.floor(x/Delta).astype(int)
    subsequent = np.ceil(x/Delta).astype(int)
    return np.sqrt(Delta)*(S[prior] + (x/Delta - prior)*(S[subsequent] - S[prior]))

X[0] = b                        # initial value at t = 0
for i in range(0, M):
    X[i+1] = X[i] + r*X[i]*h + sigma*X[i]*(WcaretN(t[i]+h) - WcaretN(t[i]))

# optional file output to use with external plotting programming
# such as gnuplot, R, octave, etc.
# Start gnuplot, then from gnuplot prompt
#    plot "stochasticdes.dat" with lines
f = open("stochasticdes.dat", "w")
for j in range(0, M+1):
    f.write(str(t[j]) + " " + str(X[j]) + "\n")

f.close()

__________________________________________________________________________

Problems to Work for Understanding

  1. Graph an approximation of a multiple of Brownian motion with drift with parameters b = 2, r = 1/2 and σ = 2 on the interval [0, 5] in two ways.
    1. Flip a coin 25 times, recording whether it comes up Heads or Tails each time, scoring Y_i = +1 for each Heads and Y_i = -1 for each Tails. Also keep track of the accumulated sum T_n = Y_1 + Y_2 + ⋯ + Y_n for n = 1, ..., 25. Using N = 5 compute the rescaled approximation Ŵ_5(t) = (1/√5) T_{5t} at the values t = 0, 1/5, 2/5, 3/5, ..., 24/5, 5 on [0, 5]. Finally compute and graph the value of X(t) = b + rt + σŴ_5(t).
    2. Using the same values of Ŵ5(t) as approximations for W(dt) compute the values of the solution of the stochastic differential equation dX = rdt + σdW,X(0) = b on the interval [0, 5].
  2. Repeat the previous problem with parameters b = 2, r = 1/2 and σ = 2.
  3. Repeat the previous problem with parameters b = 2, r = 1/2 and σ = 2.
  4. Modify the scripts to use normally distributed random deviates and then simulate the solution of the stochastic differential equation
    dX(t) = X(t)dt + 2X(t)dW(t)

    on the interval [0, 1] with initial condition X(0) = 1 and step size Δt = 1/10.

  5. Modify the scripts to use normally distributed random deviates and then simulate the solution of the stochastic differential equation
    dX(t) = tX(t)dt + 2X(t)dW(t)

    on the interval [0, 1] with initial condition X(0) = 1 and step size Δt = 1/10. Note the difference with the previous problem: now the multiplier of the dt term is a function of time.

  6. Modify the scripts to use general functions G(x) and H(x) and then apply it to the SDE for the
    1. Ornstein-Uhlenbeck process:
      dX = θ(μ - X(t))dt + σdW(t),  X(t0) = X0.

    2. Cox-Ingersoll-Ross process:
      dX = θ(μ - X(t))dt + σ√(X(t))dW(t),  X(t0) = X0.

    3. the modified Cox-Ingersoll-Ross process:
      dX = θX(t)dt + θ√(X(t))dW(t),  X(t0) = X0.

  7. Write a program with parameters r, σ, b, T and N (so dt = T/N) that computes and graphs the approximation of the solution of the stochastic differential equation
    dX(t) = rX(t)dt + σX(t)dW(t)

    with X(0) = b on the interval [0,T]. Apply the program to the stochastic differential equation with r = 2, σ = 1, b = 1, and N = 2^6, 2^7, 2^8 on the interval [0, 1].

  8. Generalize the program from the previous problem to include a parameter M for the number of sample paths computed. Then using this program on the interval [0, 1] with M = 1000 and N = 2^8, compute 𝔼|X_N - X(1)|, where X(1) = b exp((r - σ²/2) + σŴ_{2^8}(1)).
  9. Using the program from the previous problem with M = 1000 and N = 2^5, 2^6, 2^7, 2^8, 2^9, compute 𝔼|X_N - X(1)|, where X(1) = b exp((r - σ²/2) + σŴ_{2^9}(1)). Then for the 5 values of N, make a log-log plot of 𝔼|X_N - X(1)| on the vertical axis against Δt = 1/N on the horizontal axis. Using the slope of the resulting best-fit line, experimentally determine the order of convergence γ so that
    𝔼|X_N - X(1)| ≤ C(Δt)^γ.

__________________________________________________________________________

Reading Suggestion:

References

[1]   Linda J. S. Allen. An Introduction to Stochastic Processes with Applications to Biology. Pearson Prentice-Hall, 2003.

[2]   M. Baxter and A. Rennie. Financial Calculus: An introduction to derivative pricing. Cambridge University Press, 1996. HG 6024 A2W554.

[3]   Desmond J. Higham. An algorithmic introduction to numerical simulation of stochastic differential equations. SIAM Review, 43(3):525–546, 2001.

[4]   P. Kloeden and E. Platen. Numerical Solution of Stochastic Differential Equations, volume 23 of Stochastic Modelling and Applied Probability. Springer, 1992.

__________________________________________________________________________

Outside Readings and Links:

  1. Matlab program files for Stochastic Differential Equations offers a number of MATLAB routines for stochastic differential equations.

__________________________________________________________________________

I check all the information on each page for correctness and typographical errors. Nevertheless, some errors may occur and I would be grateful if you would alert me to such errors. I make every reasonable effort to present current and accurate information for public use, however I do not guarantee the accuracy or timeliness of information on this website. Your use of the information from this website is strictly voluntary and at your risk.

I have checked the links to external sites for usefulness. Links to external websites are provided as a convenience. I do not endorse, control, monitor, or guarantee the information contained in any external website. I don’t guarantee that the links are active at all times. Use the links here with the same caution as you would all information on the Internet. This website reflects the thoughts, interests and opinions of its author. They do not explicitly represent official positions or policies of my employer.

Information on this website is subject to change without notice.

Steve Dunbar’s Home Page, http://www.math.unl.edu/~sdunbar1

Email to Steve Dunbar, sdunbar1 at unl dot edu

Last modified: Processed from LaTeX source on August 2, 2016