%%% -*-TeX-*-
%%% itosformula.tex.orig
%%% Prettyprinted by texpretty lex version 0.02 [21-May-2001]
%%% on Thu Jun 26 05:53:18 2014
%%% for Steve Dunbar (sdunbar@family-desktop)
\section{It\^{o}'s Formula}
\subsection*{Section Starter Question} State the Taylor expansion of a function \(
f(x) \) up to order \( 1 \). What is the relation of this expansion to
the Mean Value Theorem of calculus? What is the relation of this
expansion to the Fundamental Theorem of calculus?
\subsubsection*{Motivation, Examples and Counterexamples}
We need some operational rules that allow us to manipulate stochastic
processes with stochastic calculus.
The important things to know about traditional differential calculus
are:
\begin{itemize}
\item
the Fundamental Theorem of Calculus;
\item
the chain rule; and
\item
Taylor polynomials and Taylor series
\end{itemize}
that enable us to calculate with functions. A deeper understanding of
calculus recognizes that these three calculus theorems are all aspects
of the same fundamental idea. Likewise we need similar rules and
formulas for stochastic processes. It\^{o}'s formula will perform that
function for us. Indeed, It\^{o}'s formula acts in the capacity of all
three of the calculus theorems, so stochastic calculus needs only this
one theorem.
The next example shows that we will need some new rules for
stochastic calculus; the old rules from calculus no longer make
sense.
\begin{example}
Consider the process which is the square of the Wiener process:
\[
Y(t) = W(t)^2.
\] We notice that this process is always non-negative, \( Y(0) = 0 \),
\( Y \) has infinitely many zeroes on \( t > 0 \) and \( \E{Y(t)} =
\E{W(t)^2} = t \). What more can we say about this process? For
example, what is the stochastic differential of \( Y(t) \) and what
would that tell us about \( Y(t) \)?
Using naive calculus, we might conjecture from the ordinary chain
rule that
\[
\df{Y} = 2 W(t) \df{W}(t).
\] If that were true then the Fundamental Theorem of Calculus would
imply
\[
Y(t) = \int_0^t \df{Y} = \int_0^t 2 W(t) \df{W}(t)
\] should also be true. But consider \( \int_0^t 2 W(t) \df{W}(t) \).
It ought to correspond to a limit of a summation (for instance a
Riemann-Stieltjes left sum):
\[
\int_0^t 2 W(t) \df{W}(t) \approx \sum_{i=1}^{n} 2 W( (i-1)t/n) [ W(it/n)
- W((i-1)t/n) ].
\] But look at this carefully: \( W((i-1)t/n) = W((i-1)t/n) - W(0) \)
is independent of \( [ W(it/n) - W((i-1)t/n) ] \) by property 2 of
the definition of the Wiener process. Therefore, if what we
conjecture is true, the expected
value of the summation will be zero:
\begin{align*}
\E{Y(t)} &= \E{ \int_0^t 2 W(t) \df{W}(t)} \\
&= \E{ \lim_{n \to \infty} \sum_{i=1}^{n} 2 W( (i-1)t/n) ( W(it/n)
- W((i-1)t/n) ) } \\
&= \lim_{n \to \infty} \sum_{i=1}^{n} 2 \E{ [ W( (i-1)t/n) - W(0)
] [ W(it/n) - W((i-1)t/n) ]} \\
&= 0.
\end{align*}
(Note the assumption that the limit and the expectation can be
interchanged!)
But the mean of \( Y(t) = W(t)^2 \) is \( t \), which is definitely
not zero! The conjectured integral and \( W(t)^2 \) do not agree even
in the mean, so something is not right. If we agree that the integral
definition and limit processes should be preserved, then the rules
of calculus will have to change.
We can see how the rules of calculus must change by rearranging the
summation. Use the simple algebraic identity
\[
2b(a-b) = \left( a^2 - b^2 - (a-b)^2 \right)
\] to re-write
\begin{align*}
&\int_0^t 2 W(t) \df{W}(t) = \lim_{n \to \infty} \sum_{i=1}^{n} 2 W
( (i-1)t/n) [ W(it/n) - W((i-1)t/n) ] \\
&\qquad = \lim_{n \to \infty} \sum_{i=1}^{n} \left( W(it/n)^2 - W((i-1)t/n)^2
- \left( W(it/n) - W((i-1)t/n)\right)^{2} \right) \\
&\qquad = \lim_{n \to \infty} \left( W(t)^2 - W(0)^2 - \sum_{i=1}^{n}
\left( W(it/n) - W((i-1)t/n) \right)^2 \right) \\
&\qquad = W(t)^2 - \lim_{n \to \infty} \sum_{i=1}^{n} \left( W(it/n) -
W((i-1)t/n) \right)^2
\end{align*}
We recognize the second term in the last expression as the
quadratic variation of the Wiener process, which we have already
evaluated to be \( t \), and so
\[
\int_0^t 2 W(t) \df{W}(t) = W(t)^2 - t.
\]
\end{example}
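As a numerical illustration (a Python sketch of my own, not part of the
text; the parameter choices and variable names are assumptions for the
demonstration), we can approximate the left Riemann sum above on one
simulated Wiener path and compare it with \( W(t)^2 - t \):

```python
import numpy as np

# Sketch: approximate the Ito integral of 2 W dW by its left Riemann
# sum on a simulated Wiener path; parameters are illustrative choices.
rng = np.random.default_rng(0)
t, n = 1.0, 200_000
dW = rng.normal(0.0, np.sqrt(t / n), size=n)  # independent Wiener increments
W = np.concatenate(([0.0], np.cumsum(dW)))    # W(0), W(t/n), ..., W(t)

# Left-endpoint sum, matching the summation in the text
left_sum = np.sum(2.0 * W[:-1] * dW)
# left_sum is close to W(t)^2 - t, not W(t)^2 as naive calculus suggests
print(left_sum, W[-1] ** 2 - t)
```

The discrepancy from \( W(t)^2 \) is exactly the accumulated quadratic
variation \( \sum (\Delta W)^2 \approx t \), as the algebraic
rearrangement shows.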
\subsubsection*{It\^{o}'s Formula and It\^{o} calculus}
It\^{o}'s formula is an expansion expressing a stochastic
process in terms of the deterministic differential and the
Wiener process differential, that is, the stochastic
differential equation for the process.
\begin{theorem}[It\^{o}'s formula]
If \( Y(t) \) is a scaled Wiener process with drift, satisfying \( \df{Y}
= r \, \df{t} + \sigma \, \df{W} \) and \( f \) is a twice continuously
differentiable function, then \( Z(t) = f(Y(t)) \) is also a
stochastic process satisfying the stochastic differential equation
\[
\df{Z} = (r f'(Y) + (\sigma^2/2) f''(Y)) \, \df{t} + (\sigma f'(Y)) \,
\df{W}
\]
\end{theorem}
In words, It\^{o}'s formula in this form tells us how to expand (in
analogy with the chain rule or Taylor's formula) the differential of a
process which is defined as an elementary function of scaled Brownian
motion with drift.
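A common heuristic behind the formula (a sketch, not a proof) is a
formal second-order Taylor expansion
\[
\df{Z} = f'(Y) \, \df{Y} + \frac{1}{2} f''(Y) \, (\df{Y})^2,
\]
combined with the multiplication rules \( (\df{t})^2 = 0 \),
\( \df{t} \, \df{W} = 0 \) and \( (\df{W})^2 = \df{t} \), the last
suggested by the quadratic variation computation in the example above.
Substituting \( \df{Y} = r \, \df{t} + \sigma \, \df{W} \) gives
\( (\df{Y})^2 = \sigma^2 \, \df{t} \), and collecting terms yields the
statement of the theorem.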
\emph{It\^{o}'s formula} is often also called \emph{It\^{o}'s
lemma} by other authors and texts. Most authors believe that
this result is more important than a mere lemma, and so the text adopts
the alternative name of ``formula''. ``Formula'' also
emphasizes the analogy with the chain ``rule'' and the Taylor
``expansion''.
\begin{example}
Consider \( Z(t) = W(t)^2 \). Here the stochastic process is
standard Brownian Motion, so \( r=0 \) and \( \sigma = 1 \), giving \( \df{Y}
= \df{W} \). The twice continuously differentiable function \( f \) is
the squaring function, \( f(x) = x^2 \), \( f'(x) = 2x \) and \( f''
(x) = 2 \). Then according to It\^{o}'s formula:
\[
\df{(W^2)} = (0 \cdot (2 W(t)) + (1/2)(2) ) \df{t} + (1 \cdot 2 W(t)) \df{W}
= \df{t} + 2 W(t) \df{W}
\] Notice the additional \( \df{t} \) term! Note also that if we
repeated the integration steps above in the example, we would obtain
\( W(t)^2 \) as expected!
\end{example}
\begin{example}
Consider geometric Brownian motion
\[
\exp( r t + \sigma W(t) ).
\] What SDE does geometric Brownian motion follow? Take \( Y(t) = r
t + \sigma W(t) \), so that \( \df{Y} = r \df{t} + \sigma \df{W} \). Then
geometric Brownian motion can be written as \( Z(t) = \exp(Y(t)) \),
so \( f \) is the exponential function. It\^{o}'s formula is
\[
\df{Z} = (r f'(Y(t)) + (1/2)\sigma^2 f'' (Y(t)))\df{t} + \sigma f'(Y) \df{W}
\] Since the exponential function is its own derivative,
\( f'(Y(t)) = \exp(Y(t)) = Z(t) \), and likewise for the second
derivative. Hence
\[
\df{Z} = (r + (1/2)\sigma^2) Z(t) \df{t} + \sigma Z(t) \df{W}
\]
\end{example}
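As a sanity check (a Monte Carlo sketch of my own, not from the text;
the parameter values are illustrative assumptions), the drift
coefficient \( (r + \sigma^2/2) Z \) predicts
\( \E{Z(t)} = \exp((r + \sigma^2/2)t) \), since \( Z(0) = 1 \); a
direct simulation of \( \exp(rt + \sigma W(t)) \) agrees:

```python
import numpy as np

# Sketch: Monte Carlo check that E[exp(r t + sigma W(t))] equals
# exp((r + sigma^2/2) t); parameter values are illustrative choices.
rng = np.random.default_rng(1)
r, sigma, t = 0.05, 0.3, 1.0
W_t = rng.normal(0.0, np.sqrt(t), size=1_000_000)  # W(t) ~ N(0, t)
mc_mean = np.exp(r * t + sigma * W_t).mean()
exact = np.exp((r + 0.5 * sigma**2) * t)
print(mc_mean, exact)  # the two agree closely
```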
The case where \( \df{Y} = \df{W} \), that is, where the base process is
standard Brownian Motion and \( Z = f(W) \), occurs commonly enough that
we record It\^{o}'s formula for this special case:
\begin{corollary}[It\^{o}'s Formula applied to functions of standard
Brownian
Motion]
If \( f \) is a twice continuously differentiable function, then \(
Z(t) = f(W(t)) \) is also a stochastic process satisfying the
stochastic differential equation
\[
\df{Z} = \df{f(W)} = (1/2)f''(W) \, \df{t} + f'(W) \, \df{W}.
\]
\end{corollary}
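To illustrate the corollary (a pathwise sketch with my own choice
\( f(x) = x^3 \), so \( f'(x) = 3x^2 \) and \( f''(x) = 6x \)), the
corollary predicts
\[
W(T)^3 = \int_0^T 3 W(t) \, \df{t} + \int_0^T 3 W(t)^2 \, \df{W}(t),
\]
which we can check with discrete sums on a simulated path:

```python
import numpy as np

# Sketch: pathwise check of the corollary with f(x) = x^3 (my choice):
#   d(W^3) = 3 W dt + 3 W^2 dW
rng = np.random.default_rng(2)
T, n = 1.0, 400_000
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), size=n)
W = np.concatenate(([0.0], np.cumsum(dW)))

drift_part = np.sum(3.0 * W[:-1] * dt)       # (1/2) f''(W) dt = 3 W dt term
ito_part = np.sum(3.0 * W[:-1] ** 2 * dW)    # f'(W) dW = 3 W^2 dW term
print(W[-1] ** 3, drift_part + ito_part)     # agree up to discretization error
```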
\subsubsection*{Guessing Processes from SDEs with It\^{o}'s Formula}
One of the key needs we will have is to go in the opposite direction and
convert SDEs to processes, in other words to solve SDEs. We take
guidance from ordinary differential equations, where finding solutions
to differential equations comes from judicious guessing based on a
thorough understanding and familiarity with the chain rule. For SDEs the
solution depends on inspired guesses based on a thorough understanding
of the formulas of stochastic calculus. Following the guess we require
a proof that the proposed solution is an actual solution, again using
the formulas of stochastic calculus.
%% Such a solution to an SDE is called a diffusion.
A few rare examples of SDEs can be solved in terms of explicit,
familiar functions. This is just like ODEs: many simple differential
equations have solutions that cannot be expressed in terms of
elementary functions, and those solutions instead define new functions
which are useful in applications. Likewise, the solution of an SDE
gives us a way of defining new processes which are useful.
\begin{example}
Suppose we are asked to solve the SDE
\[
\df{Z}(t) = \sigma Z(t) \df{W}.
\] We need an inspired guess, so we try
\[
\exp( r t + \sigma W(t) )
\] where \( r \) is a constant to be determined while the \( \sigma \)
term is given in the SDE\@. It\^{o}'s formula for the guess is
\[
\df{Z} = (r + (1/2)\sigma^2) Z(t) \df{t} + \sigma Z(t) \df{W}.
\] We notice that the stochastic term (or Wiener process
differential term) is the same as in the given SDE\@. We need to choose
the constant \( r \) to eliminate the deterministic or drift
differential term. Choosing \( r = -(1/2)\sigma^2 \) makes the drift
term vanish, so the differential equation matches the SDE we have to
solve. We therefore guess
\[
Z(t) = \exp( \sigma W(t) - (1/2)\sigma^2 t).
\] We should double-check by applying It\^{o}'s formula.
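Carrying out that check: write the guess as \( Z(t) = \exp(Y(t)) \)
with \( Y(t) = -(1/2)\sigma^2 t + \sigma W(t) \), so that
\( \df{Y} = -(1/2)\sigma^2 \, \df{t} + \sigma \, \df{W} \) and \( f \)
is the exponential function. It\^{o}'s formula gives
\[
\df{Z} = \left( -\tfrac{1}{2}\sigma^2 + \tfrac{1}{2}\sigma^2 \right)
Z(t) \, \df{t} + \sigma Z(t) \, \df{W} = \sigma Z(t) \, \df{W},
\]
which is exactly the SDE we set out to solve.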
Soluble SDEs are scarce, and this one is special enough to have a
name: it is the \emph{Dol\'{e}ans exponential of Brownian motion}.
\end{example}
\subsubsection*{Sources}
This discussion is adapted from \emph{Financial Calculus: An
introduction to derivative pricing} by M Baxter, and A. Rennie,
Cambridge University Press, 1996, pages 52--62 and ``An Algorithmic
Introduction to the Numerical Simulation of Stochastic Differential
Equations'', by Desmond J. Higham, in SIAM Review, Vol. 43, No. 3,
pages 525--546, 2001.
\subsection*{Key Concepts}
\begin{enumerate}
\item
It\^{o}'s formula is an expansion expressing a stochastic
process in terms of the deterministic differential and the
Wiener process differential, that is, the stochastic
differential equation for the process.
\item
Solving stochastic differential equations follows by guessing
solutions based on comparison with the form of It\^{o}'s
formula.
\end{enumerate}
\subsection*{Vocabulary}
\begin{enumerate}
\item
\emph{It\^{o}'s formula} is often also called \emph{It\^{o}'s
lemma} by other authors and texts. Some authors believe that
this result is more important than a mere lemma, and so I adopt
the alternative name of ``formula''. ``Formula'' also
emphasizes the analogy with the chain ``rule'' and the Taylor
``expansion''.
\end{enumerate}
\subsection*{Problems to Work for Understanding}
\begin{enumerate}
\item
Find the solution of the stochastic differential equation
\[
\df{Y}(t) = Y(t) \df{t} + 2 Y(t) \df{W}.
\]
\item
Find the solution of the stochastic differential equation
\[
\df{Y}(t) = t Y(t) \df{t} + 2 Y(t) \df{W}.
\] Note the difference with the previous problem, now the
multiplier of the \( \df{t} \) term is a function of time.
\item
Find the solution of the stochastic differential equation
\[
\df{Y}(t) = \mu Y(t) \df{t} + \sigma Y(t) \df{W}.
\]
\item
Find the solution of the stochastic differential equation
\[
\df{Y}(t) = \mu t Y(t) \df{t} + \sigma Y(t) \df{W}
\] Note the difference with the previous problem, now the
multiplier of the \( \df{t} \) term is a function of time.
\item
Find the solution of the stochastic differential equation
\[
\df{Y}(t) = \mu(t) Y(t) \df{t} + \sigma Y(t) \df{W}
\] Note the difference with the previous problem, now the
multiplier of the \( \df{t} \) term is a general (technically, a
locally bounded integrable) function of time.
\end{enumerate}
\begin{enumerate}
\item
    ``An Algorithmic Introduction to the Numerical Simulation of
    Stochastic Differential Equations'', by Desmond J. Higham, in
    SIAM Review, Vol. 43, No. 3, pages 525--546, 2001.
\item
    \emph{Financial Calculus: An introduction to derivative
    pricing} by M. Baxter and A. Rennie, Cambridge University Press,
    1996, pages 52--62.
\end{enumerate}
\subsection*{Outside Readings and Links:}
% \begin{enumerate}
% \item
% \item
% \item
% \item
% \end{enumerate}