Next: Nonlinear Least Squares
Up: Math 833  Notes
Previous: Inexact Newton
We base our work on the following approximations (basically, Taylor series):

(4) $f(x+h) = f(x) + hf'(x) + \frac{h^2}{2}f''(x) + O(h^3)$

(5) $f(x-h) = f(x) - hf'(x) + \frac{h^2}{2}f''(x) + O(h^3)$

From equation 4, we get the forward difference approximation:

$f'(x) \approx \frac{f(x+h) - f(x)}{h}$

From equation 5, we get the backward difference approximation:

$f'(x) \approx \frac{f(x) - f(x-h)}{h}$

If we subtract equation 5 from 4, we get

$f'(x) \approx \frac{f(x+h) - f(x-h)}{2h}.$

This is the central difference formula.

Now, 4 plus 5 gives the second central difference approximation:

$f''(x) \approx \frac{f(x+h) - 2f(x) + f(x-h)}{h^2}.$
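As a quick sanity check, these formulas can be tried numerically; the sketch below (our example, not from the notes) uses $f(x) = \sin x$, whose derivatives are known exactly.

```python
import math

# Try the difference formulas on f(x) = sin(x), where f'(x) = cos(x)
# and f''(x) = -sin(x) are known exactly.  (Example function is ours,
# not from the notes.)
f = math.sin
x, h = 1.0, 1e-5

forward = (f(x + h) - f(x)) / h                           # O(h) accurate
backward = (f(x) - f(x - h)) / h                          # O(h) accurate
central = (f(x + h) - f(x - h)) / (2 * h)                 # O(h^2) accurate
second_central = (f(x + h) - 2 * f(x) + f(x - h)) / h**2  # approximates f''

print(abs(forward - math.cos(x)))   # error on the order of h
print(abs(central - math.cos(x)))   # error on the order of h^2
print(abs(second_central + math.sin(x)))
```

As expected, the central difference is far more accurate than the forward one at the same $h$.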
Suppose that the error of evaluation of $f$ is $\epsilon_f$. If $f$ is not too complicated, then we can take $\epsilon_f \approx \epsilon_{\mathrm{mach}}$. So, in the forward difference, we really calculate

$\frac{f(x+h) - f(x)}{h} = f'(x) + O(h) + O(\epsilon_f/h).$

The first error is a mathematical truncation error, while the second is the computational truncation error. We would like to ``balance'' the errors:

$h \approx \frac{\epsilon_f}{h}.$

So, perhaps $h \approx \sqrt{\epsilon_f}$. In other words, it would be pointless to choose $h$ to be less than $\sqrt{\epsilon_f}$. So, if our error is $\epsilon_f \approx 10^{-16}$ (double precision), there is no point to choose $h < 10^{-8}$. So, once we do this balancing, we get $h \approx \sqrt{\epsilon_f}$. Hence, we can always expect our error to be around $\sqrt{\epsilon_f}$.
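We can watch this balancing happen in double precision, where $\epsilon_{\mathrm{mach}} \approx 2.2 \times 10^{-16}$. The sketch below (our example, not from the notes) scans $h$ for a forward difference; the error should bottom out near $h \approx 10^{-8}$.

```python
import math

# Forward-difference error for f(x) = exp(x) at x = 1 as h shrinks.
# The error decreases until h ~ sqrt(eps_mach) ~ 1e-8, after which the
# computational error eps_mach/h takes over and the total grows again.
f, x = math.exp, 1.0
exact = math.exp(1.0)  # f'(x) = exp(x)

errors = {}
for k in range(1, 15):
    h = 10.0 ** (-k)
    errors[k] = abs((f(x + h) - f(x)) / h - exact)

best_k = min(errors, key=errors.get)
print(best_k)  # the winning h is near sqrt(eps_mach)
```

Taking $h$ smaller than the balanced value only makes the answer worse, exactly as the analysis predicts.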
For central differences, doing this balancing again ($h^2 \approx \epsilon_f/h$), we find that we should take

$h \approx \epsilon_f^{1/3}.$

In this case, we get that the total truncation error is $\epsilon_f^{2/3}$ (which is definitely smaller than $\sqrt{\epsilon_f}$). If we try to take $h$ smaller than this, the resulting errors start to get larger!
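The same scan for a central difference (again our example, not from the notes) should bottom out near $h \approx \epsilon_f^{1/3} \approx 6 \times 10^{-6}$, with error near $\epsilon_f^{2/3}$:

```python
import math

# Central-difference error for f(x) = exp(x) at x = 1.  The best h is
# near eps_mach**(1/3) ~ 6e-6, and the best error near eps_mach**(2/3),
# noticeably below the ~sqrt(eps_mach) floor of the forward difference.
f, x = math.exp, 1.0
exact = math.exp(1.0)

errs = {}
for k in range(1, 13):
    h = 10.0 ** (-k)
    errs[k] = abs((f(x + h) - f(x - h)) / (2 * h) - exact)

best_k = min(errs, key=errs.get)
print(errs[best_k])  # well below 1e-9
```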
For the second central difference, we really calculate

$\frac{f(x+h) - 2f(x) + f(x-h)}{h^2} = f''(x) + O(h^2) + O(\epsilon_f/h^2),$

so balancing gives $h \approx \epsilon_f^{1/4}$. The truncation error will be about $\sqrt{\epsilon_f}$.
For the second derivative, we could also do central differences done twice:

$f''(x) \approx \frac{f'(x+h) - f'(x-h)}{2h} \approx \frac{f(x+2h) - 2f(x) + f(x-2h)}{4h^2}.$

To balance, we again get $h \approx \epsilon_f^{1/4}$, and we get a truncation error of $\sqrt{\epsilon_f}$.
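For instance (our sketch, not from the notes), taking $h = \epsilon_{\mathrm{mach}}^{1/4}$ in the second central difference recovers $f''$ to roughly square-root-of-epsilon accuracy:

```python
import math

# Second central difference for f(x) = exp(x) at x = 1, with the
# balanced step h ~ eps_mach**(1/4) ~ 1.2e-4.  The error lands near
# sqrt(eps_mach) ~ 1e-8, as predicted by the balancing argument.
f, x = math.exp, 1.0
exact = math.exp(1.0)  # f''(x) = exp(x)

eps_mach = 2.2e-16
h = eps_mach ** 0.25
approx = (f(x + h) - 2 * f(x) + f(x - h)) / h**2
print(abs(approx - exact))  # roughly sqrt(eps_mach) in magnitude
```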
Now for $\frac{\partial^2 f}{\partial x\,\partial y}$:

$\frac{\partial^2 f}{\partial x\,\partial y} \approx \frac{f(x+h,y+h) - f(x+h,y-h) - f(x-h,y+h) + f(x-h,y-h)}{4h^2}.$

Or we could write:

$\frac{\partial^2 f}{\partial x\,\partial y} \approx \frac{f(x+h,y+h) - f(x+h,y) - f(x,y+h) + f(x,y)}{h^2}.$

For the Hessian, it just so happens that (from a 2d Taylor expansion):

$\frac{\partial^2 f}{\partial x\,\partial y} = \frac{1}{2}\left[\frac{\partial^2 f}{\partial s^2} - \frac{\partial^2 f}{\partial x^2} - \frac{\partial^2 f}{\partial y^2}\right],$

where $\frac{\partial}{\partial s} = \frac{\partial}{\partial x} + \frac{\partial}{\partial y}$ denotes differentiation along the diagonal direction $(1,1)$.

We already know how to do the second central approximation, so we can approximate the Hessian by filling in the appropriate formulas.
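Putting the pieces together (example function is ours, not from the notes): $f_{xx}$, $f_{yy}$, and the diagonal second derivative are each computed by the second central difference, and the identity above then yields $f_{xy}$.

```python
import math

# Approximate the Hessian of f(x, y) = exp(x) * sin(y) via second
# central differences, using f_xy = (d^2f/ds^2 - f_xx - f_yy) / 2,
# where d/ds differentiates along the diagonal direction (1, 1).
def f(x, y):
    return math.exp(x) * math.sin(y)

x0, y0 = 0.5, 0.3
h = 2.2e-16 ** 0.25  # balanced step for second differences

fxx = (f(x0 + h, y0) - 2 * f(x0, y0) + f(x0 - h, y0)) / h**2
fyy = (f(x0, y0 + h) - 2 * f(x0, y0) + f(x0, y0 - h)) / h**2
# Second central difference along the diagonal direction (1, 1):
dss = (f(x0 + h, y0 + h) - 2 * f(x0, y0) + f(x0 - h, y0 - h)) / h**2
fxy = (dss - fxx - fyy) / 2

# Exact values: f_xx = e^x sin y, f_yy = -e^x sin y, f_xy = e^x cos y
print(abs(fxy - math.exp(x0) * math.cos(y0)))  # small
```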
Wednesday, 4/6/2005:
One can show, using the Newton convergence proof and the Banach Lemma:

If matrix $A$ is invertible and matrix $E$ is such that $\|A^{-1}E\| < 1$, then $A + E$ is invertible and

$\|(A+E)^{-1}\| \le \frac{\|A^{-1}\|}{1 - \|A^{-1}E\|}.$

With this, one can show the following:
If a Newton step

$s = -\left(\nabla^2 f(x_c) + E\right)^{-1}\left(\nabla f(x_c) + e_g\right)$

is solved using $\nabla^2 f(x_c) + E$ and $\nabla f(x_c) + e_g$ in place of $\nabla^2 f(x_c)$ and $\nabla f(x_c)$, then for $E$ sufficiently small (in norm) and $x_c$ sufficiently close to the local minimizer $x^*$ at which the sufficiency conditions are satisfied,

$\|x_+ - x^*\| \le K\left(\|x_c - x^*\|^2 + \|E\|\,\|x_c - x^*\| + \|e_g\|\right)$

for some positive constant $K$.
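The Banach Lemma bound is easy to check numerically; the sketch below (a 2x2 example of ours, using the max-row-sum norm) verifies it directly.

```python
# Check the Banach Lemma bound on a small 2x2 example (matrices are
# ours, chosen for illustration).  We use the max-row-sum (infinity)
# norm, which is submultiplicative.
def inv2(m):
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def mul2(m, n):
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def norm_inf(m):
    return max(abs(row[0]) + abs(row[1]) for row in m)

A = [[2.0, 0.0], [0.0, 3.0]]
E = [[0.1, 0.05], [0.0, -0.1]]

Ainv = inv2(A)
assert norm_inf(mul2(Ainv, E)) < 1  # hypothesis of the lemma

A_plus_E = [[A[i][j] + E[i][j] for j in range(2)] for i in range(2)]
lhs = norm_inf(inv2(A_plus_E))
bound = norm_inf(Ainv) / (1 - norm_inf(mul2(Ainv, E)))
print(lhs <= bound)  # True: the Banach Lemma bound holds
```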
Remarks:
If both $E$ and $e_g$ are zero, we just get the Newton estimate of quadratic convergence.

If $e_g \neq 0$ and does not shrink as we iterate, then there is no guarantee that the error terms will converge! (There is no way to overcome the term $\|e_g\|$.)

If $E \neq 0$, then convergence is slowed down from quadratic to linear, or to superlinear if $\|E\| \to 0$. Thus, what is affected by $E$ is the rate of convergence.

So, an inaccurate Hessian is not as important as an inaccurate gradient.
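The remarks can be seen in a toy problem (ours, not from the notes): Newton's method on $f(x) = x^2 + x^4$, with a constant perturbation added to either the gradient or the Hessian.

```python
# Minimize f(x) = x^2 + x^4 (minimizer x* = 0) by Newton's method,
# optionally corrupting the gradient f'(x) = 2x + 4x^3 or the Hessian
# f''(x) = 2 + 12x^2 by a constant error.  (Toy example, not from notes.)
def newton(x, steps, grad_err=0.0, hess_err=0.0):
    for _ in range(steps):
        x = x - (2 * x + 4 * x**3 + grad_err) / (2 + 12 * x**2 + hess_err)
    return abs(x)  # distance to the minimizer x* = 0

exact = newton(1.0, 20)                        # converges very fast
bad_hessian = newton(1.0, 20, hess_err=0.5)    # still converges, but only linearly
bad_gradient = newton(1.0, 20, grad_err=1e-3)  # stalls near the grad_err level

print(exact, bad_hessian, bad_gradient)
```

With the corrupted Hessian the iterates still reach the minimizer, just more slowly; with the corrupted gradient the error plateaus at roughly the size of the gradient error, matching the $\|e_g\|$ term in the bound.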
Brian Bockelman
2005-06-09