Math 221

Basic techniques for the second exam


Basic object of study: second order linear differential equations

y'' + p(t)y' + q(t)y = g(t)    (*)

Initial value problem:

y(t0) = y0 and y'(t0) = y0'

Basic fact: if p(t), q(t), and g(t) are continuous on an interval around t0, then any initial value problem has a unique solution on that interval. Our basic goal: find the solution!

Homogeneous: g(t) = 0. Constant coefficients: p(t) and q(t) are constant.

Start small: homogeneous with constant coefficients:

ay'' + by' + cy = 0

Basic idea: guess that y = e^(rt), and plug in! Get:

(ar^2 + br + c)e^(rt) = 0 , so ar^2 + br + c = 0

Solve: get (typically) two roots r1, r2, so y1 = e^(r1 t) and y2 = e^(r2 t) are both solutions.

The equation ar^2 + br + c = 0 is called the characteristic equation for our differential equation.

Operator notation: write L[y] = y'' + p(t)y' + q(t)y (this is called a linear operator); then a solution to (*) is a function y with L[y] = g(t).

For a linear differential equation, L[c1y1 + c2y2] = c1L[y1] + c2L[y2], and so if y1 and y2 are both solutions to L[y] = 0, then so is c1y1 + c2y2. The function c1y1 + c2y2 is called a linear combination of y1 and y2. This is the Principle of Superposition; more generally, if L[y1] = g1(t) and L[y2] = g2(t), then L[y1 + y2] = g1(t) + g2(t).

With (the right) two solutions y1, y2 to a homogeneous equation

y'' + p(t)y' + q(t)y = 0    (**)

we can solve any initial value problem, by choosing the right linear combination: we need to solve

c1y1(t0) + c2y2(t0) = y0

c1y1'(t0) + c2y2'(t0) = y0'

for the constants c1 and c2; then y = c1y1 + c2y2 is our solution. We can do this directly, as a pair of linear equations, by solving one equation for one of the constants and plugging into the other, or we can use the formulas

c1 = | y0    y2(t0)  |  /  | y1(t0)   y2(t0)  |
     | y0'   y2'(t0) |     | y1'(t0)  y2'(t0) |

c2 = | y1(t0)   y0  |  /  | y1(t0)   y2(t0)  |
     | y1'(t0)  y0' |     | y1'(t0)  y2'(t0) |

where

| a  b | = ad - bc
| c  d |

This makes it clear that a solution exists (i.e., we have the `right' pair of functions), provided that the quantity

W = W(y1,y2)(t0) = | y1(t0)   y2(t0)  |
                   | y1'(t0)  y2'(t0) |

is non-zero.

W is called the Wronskian (determinant) of y1 and y2 at t0. The Wronskian is closely related to the concept of linear independence of a collection y1, ..., yn of functions; such a collection is linearly independent if the only linear combination c1y1 + ... + cnyn which is equal to the 0 function is the one with c1 = ... = cn = 0.
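A small numeric sketch of the Cramer's-rule recipe above (the example equation y'' - y = 0 and its initial conditions are my own choices, not from the notes):

```python
import math

def wronskian(y1, dy1, y2, dy2, t0):
    """W(y1, y2)(t0) = y1(t0) y2'(t0) - y1'(t0) y2(t0)."""
    return y1(t0) * dy2(t0) - dy1(t0) * y2(t0)

def ivp_constants(y1, dy1, y2, dy2, t0, y0, yp0):
    """Cramer's rule for the system
       c1 y1(t0) + c2 y2(t0)  = y0
       c1 y1'(t0) + c2 y2'(t0) = yp0."""
    W = wronskian(y1, dy1, y2, dy2, t0)
    c1 = (y0 * dy2(t0) - yp0 * y2(t0)) / W
    c2 = (y1(t0) * yp0 - dy1(t0) * y0) / W
    return c1, c2

# Example: y'' - y = 0 has fundamental solutions y1 = e^t, y2 = e^{-t}.
# With y(0) = 2 and y'(0) = 0 we should get c1 = c2 = 1, i.e. y = 2 cosh(t).
c1, c2 = ivp_constants(math.exp, math.exp,
                       lambda t: math.exp(-t), lambda t: -math.exp(-t),
                       0.0, 2.0, 0.0)
```

Note that the formulas fail exactly when W = 0, matching the discussion above.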

Two functions y1 and y2 are linearly independent if their Wronskian is non-zero at some point; for a pair of solutions to (**), it turns out that the Wronskian is always equal to a constant multiple of

exp(-∫ p(t) dt)

and so is either always 0 or never 0. We call a pair of linearly independent solutions to (**) a pair of fundamental solutions. By our above discussion, we can solve any initial value problem for (**) as a linear combination of fundamental solutions y1 and y2. By our existence and uniqueness result, this gives us:


If y1 and y2 are a fundamental set of solutions to the differential equation (**), then any solution to (**) can be expressed as a linear combination c1y1 + c2y2 of y1 and y2.


So to solve an initial value problem for (**), all we need is a pair of fundamental solutions. For an equation with constant coefficients, we do this by finding the roots of the characteristic equation ar^2 + br + c = 0. We have the following basic facts:

If the roots of the characteristic equation are real and distinct, r1 ≠ r2, then a fundamental set of solutions is

y1 = e^(r1 t), y2 = e^(r2 t)

If the roots of the characteristic equation are complex, a ± bi, then a fundamental set of solutions is

y1 = e^(at)cos(bt), y2 = e^(at)sin(bt)

If the roots of the characteristic equation are repeated (and therefore real), r1 = r2 = r, then a fundamental set of solutions is

y1 = e^(rt), y2 = te^(rt)
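These three cases can be mechanized. Here is a sketch (the classification function and its output strings are my own illustration, not from the notes):

```python
import cmath

def characteristic_roots(a, b, c):
    """Roots of a r^2 + b r + c = 0 by the quadratic formula;
    cmath.sqrt handles a negative discriminant."""
    disc = cmath.sqrt(b * b - 4 * a * c)
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

def fundamental_solutions(a, b, c, tol=1e-12):
    """Describe a fundamental set of solutions of a y'' + b y' + c y = 0."""
    r1, r2 = characteristic_roots(a, b, c)
    if abs(r1.imag) > tol:                 # complex pair alpha +- beta i
        al, be = r1.real, abs(r1.imag)
        return [f"exp({al}t)cos({be}t)", f"exp({al}t)sin({be}t)"]
    if abs(r1 - r2) < tol:                 # repeated real root
        r = r1.real
        return [f"exp({r}t)", f"t exp({r}t)"]
    return [f"exp({r1.real}t)", f"exp({r2.real}t)"]  # distinct real roots
```

For example, y'' - 3y' + 2y = 0 falls in the distinct-real case (roots 2 and 1), y'' + 2y' + y = 0 in the repeated case, and y'' + y = 0 in the complex case.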


In showing the last of these facts, we introduced a general technique for finding a second, linearly independent solution y2 to (**), given a (non-zero) solution y1; this is called reduction of order: if y1 is a solution to (**), then so is

y2 = y1(t) ∫ [ exp(-∫ p(t) dt) / (y1(t))^2 ] dt

This formula arises by assuming that y2(t) = c(t)y1(t), and then determining what differential equation c(t) must satisfy! It turns out to be a first-order equation (hence the name reduction of order).
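The formula can be checked numerically. As an illustration (my own example, not from the notes), take y'' - 2y' + y = 0 with y1 = e^t; here p(t) = -2, so an antiderivative is P(t) = -2t, and the formula should reproduce y2 = te^t:

```python
import math

def reduction_of_order(y1, P, t, n=10_000):
    """Evaluate y2(t) = y1(t) * (integral from 0 to t of exp(-P(s))/y1(s)^2 ds)
    by the trapezoid rule, where P is an antiderivative of p(t)."""
    h = t / n
    total = 0.0
    for i in range(n + 1):
        s = i * h
        f = math.exp(-P(s)) / y1(s) ** 2
        total += f / 2 if i in (0, n) else f
    return y1(t) * total * h

# y'' - 2y' + y = 0: y1 = e^t, P(t) = -2t; expect y2 = t e^t, so y2(1) = e.
y2_at_1 = reduction_of_order(math.exp, lambda s: -2.0 * s, 1.0)
```

Here the integrand exp(2s)/e^(2s) is identically 1, so the integral is just t, and y2(1) comes out to e, matching te^t at t = 1.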


Much of what we just did for second order equations goes through without any change for higher order (linear) equations:

L[y] = y^(n) + a1(t)y^(n-1) + ... + an-1(t)y' + an(t)y = g(t)    (!)

and its associated homogeneous equation

y^(n) + a1(t)y^(n-1) + ... + an-1(t)y' + an(t)y = 0    (!!)

In this case the correct notion of an initial value problem requires us to specify the values, at t0, of y and all its derivatives up to the (n-1)st:

y(t0) = y0, y'(t0) = y0', ..., y^(n-1)(t0) = y0^(n-1)

As with the second order case, we have a principle of superposition: if L[y1] = g1 and L[y2] = g2, then L[y1 + y2] = g1 + g2. This means that linear combinations of solutions to the homogeneous equation (!!) are also solutions. And the general solution to (!!) can always be obtained (uniquely) as a linear combination of n linearly independent (or fundamental) solutions. Linear independence can be determined by computing a Wronskian determinant W(y1, ..., yn).

The theory we developed for homogeneous equations with constant coefficients can be similarly extended. The equation

a0y^(n) + a1y^(n-1) + ... + an-1y' + any = 0

has a fundamental set of solutions determined by its characteristic equation

a0r^n + a1r^(n-1) + ... + an-1r + an = 0

Real roots r correspond to solutions exp(rt); complex roots a ± bi to solutions exp(at)cos(bt) and exp(at)sin(bt). The only extra wrinkle is that we can have repeated roots which repeat many times, and even repeated complex roots! For each, we do as we did before and create new fundamental solutions by multiplying our basic solution by t, once for each extra repetition. For example, the equation

y^(4) + 2y'' + y = 0

has a characteristic equation with roots i, i, -i, and -i, and so its fundamental solutions are

cos(t), tcos(t), sin(t), and tsin(t)
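As a sanity check, one can verify numerically (by finite differences; the stencil sizes below are my own choices) that tcos(t) really does solve y^(4) + 2y'' + y = 0:

```python
import math

def fourth_deriv(f, t, h=1e-2):
    """Five-point central-difference approximation of f''''(t)."""
    return (f(t - 2*h) - 4*f(t - h) + 6*f(t) - 4*f(t + h) + f(t + 2*h)) / h**4

def second_deriv(f, t, h=1e-4):
    """Three-point central-difference approximation of f''(t)."""
    return (f(t - h) - 2*f(t) + f(t + h)) / h**2

def residual(f, t):
    """L[f](t) for L[y] = y'''' + 2y'' + y; near zero when f solves the ODE."""
    return fourth_deriv(f, t) + 2 * second_deriv(f, t) + f(t)

f = lambda t: t * math.cos(t)   # one of the claimed fundamental solutions
```

The residual of f stays within finite-difference error of zero, while a non-solution such as e^t gives L[e^t] = 4e^t, far from zero.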


Our final concern is inhomogeneous linear equations

L[y] = y^(n) + a1(t)y^(n-1) + ... + an-1(t)y' + an(t)y = g(t)    (!)

with g(t) ≠ 0. The principle of superposition tells us that for any pair of solutions Y1, Y2 to (!), L[Y2 - Y1] = 0, and so if we have a fundamental set of solutions to the associated homogeneous equation, y1, ..., yn, we can write

Y2 = Y1 + c1y1 + ... + cnyn

In other words, we can find any solution to (!) by finding one particular solution, together with a fundamental set of solutions to the associated homogeneous equation (!!). Any initial value problem can then be solved by solving the system of equations

Y1(t0) + c1y1(t0) + ... + cnyn(t0) = y0

Y1'(t0) + c1y1'(t0) + ... + cnyn'(t0) = y0'

all the way to

Y1^(n-1)(t0) + c1y1^(n-1)(t0) + ... + cnyn^(n-1)(t0) = y0^(n-1)

for the constants c1,,cn .

The only part of this we haven't really explored yet is finding a particular solution to (!). For this we have two techniques. The first is called the Method of Undetermined Coefficients.

Important: This works only for equations with constant coefficients!

The basic idea behind the technique is that for most kinds of functions, like polynomials, exponentials, sines and cosines, or products of these, all of the function's derivatives are of the same kind. So if the function g(t) in

L[y] = a0y^(n) + a1y^(n-1) + ... + an-1y' + any = g(t)    (!)

is one of these kinds, what we do is guess that our solution y is the same kind. In particular,


If g is a polynomial of degree n, we set y to be a (different) polynomial of degree n,

If g is a multiple of an exponential exp(rt), we set y to be a multiple c exp(rt) of it,

If g is a multiple of sin(bt) or cos(bt), we set y to be a linear combination A sin(bt) + B cos(bt),

If g(t) = exp(rt)cos(bt) (or has a sine instead), we set y to be A exp(rt)sin(bt) + B exp(rt)cos(bt),

If g is a polynomial of degree n times one of these, we set y to be a (different) polynomial of degree n times the corresponding function above.


Then we must plug this function into (!), and solve for the undetermined coefficients.
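For instance (an example of my own, not from the notes), to solve y'' - y = e^(2t) we would guess y = A e^(2t); plugging in gives (4A - A)e^(2t) = e^(2t), so A = 1/3. A quick numeric check:

```python
import math

# Guess y = A e^{2t} for y'' - y = e^{2t}; plugging in gives 3A = 1.
A = 1.0 / 3.0

def y(t):   return A * math.exp(2 * t)
def ypp(t): return 4 * A * math.exp(2 * t)   # since (e^{2t})'' = 4 e^{2t}

# The residual y'' - y - e^{2t} should vanish (up to rounding).
residual = ypp(0.5) - y(0.5) - math.exp(2 * 0.5)
```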

Of course, there is one wrinkle; sometimes our choice of y cannot work, because it is a solution to the associated homogeneous equation. For example, for the equation

L[y] = y'' + y = cos(t)

The function y = A cos(t) + B sin(t) will never solve it, because for such a function, L[y] = 0. In this case what we must do is multiply our guess by t, or more generally, by a high enough power of t to ensure that our guess is not a solution to the homogeneous equation. For this, we must first determine the number of times the root which corresponds to our target solution occurs among the roots of the associated characteristic equation. This can be a trifle tricky to determine; for example, for the equation

y'' - 2y' + y = te^t

we should guess that our solution is y = t^2(at + b)e^t: our original guess would be y = (at + b)e^t, but the root r = 1 of the characteristic equation is repeated, so both e^t and te^t solve the homogeneous equation, and we must multiply the guess by t^2. Similarly, for

y'' - 2y' + y = 3e^t

we should guess that our solution is y = at^2e^t, since e^t and te^t are both solutions to the homogeneous equation, but t^2e^t is not.
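We can confirm that last guess: plugging y = at^2e^t into y'' - 2y' + y gives 2ae^t, so a = 3/2. A sketch of the check (derivatives computed by hand):

```python
import math

a = 3.0 / 2.0   # from 2a e^t = 3 e^t after plugging y = a t^2 e^t in

def y(t):   return a * t * t * math.exp(t)
def yp(t):  return a * (2 * t + t * t) * math.exp(t)        # y'
def ypp(t): return a * (2 + 4 * t + t * t) * math.exp(t)    # y''

# Residual of y'' - 2y' + y - 3 e^t; should vanish up to rounding.
residual = ypp(0.8) - 2 * yp(0.8) + y(0.8) - 3 * math.exp(0.8)
```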

Finally, if our function g(t) is a linear combination of such functions, we can use this method to solve L[y] = (each piece) separately, and then use the Principle of Superposition to find our solution by taking a linear combination.


Our other technique works (in theory) for any linear equation; we will restrict our attention to second order equations, for the sake of simplicity. It is called variation of parameters, and starts with a pair of fundamental solutions y1, y2 to the associated homogeneous equation

y'' + p(t)y' + q(t)y = 0    (**)

and then guess that the solution to our inhomogeneous equation

y'' + p(t)y' + q(t)y = g(t)    (*)

is of the form y(t) = c1(t)y1(t) + c2(t)y2(t), and plug in. The resulting equation is too complicated to be useful on its own, but if we make the simplifying assumption

c1'(t)y1(t) + c2'(t)y2(t) = 0

then the equation becomes

c1'(t)y1'(t) + c2'(t)y2'(t) = g(t)

which we can solve:

c1'(t) = -g(t)y2(t)/W(t)

c2'(t) = g(t)y1(t)/W(t)

where

W(t) = | y1(t)   y2(t)  |
       | y1'(t)  y2'(t) |

Here again, the by now familiar Wronskian appears! Note that we must still integrate these functions to determine c1 and c2.
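As a worked instance (my own example, not from the notes), take y'' + y = cos(t), with y1 = cos(t), y2 = sin(t), and W = 1; integrating the formulas numerically reproduces the known particular solution y_p = (t/2)sin(t):

```python
import math

def trapezoid(f, a, b, n=20_000):
    """Simple trapezoid-rule integral of f over [a, b]."""
    h = (b - a) / n
    return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2)

# y'' + y = cos(t): y1 = cos, y2 = sin, and W(y1, y2) = cos^2 + sin^2 = 1.
g, W = math.cos, 1.0

def particular(t):
    """y_p(t) = c1(t) y1(t) + c2(t) y2(t), with c1' = -g y2 / W and
    c2' = g y1 / W integrated from 0 (one valid choice of antiderivative)."""
    c1 = trapezoid(lambda s: -g(s) * math.sin(s) / W, 0.0, t)
    c2 = trapezoid(lambda s: g(s) * math.cos(s) / W, 0.0, t)
    return c1 * math.cos(t) + c2 * math.sin(t)
```

Carrying out the integrals by hand gives c1 = -sin^2(t)/2 and c2 = t/2 + sin(2t)/4, which simplify to y_p = (t/2)sin(t).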

