Chapter 1: Introduction
A differential equation is an equation involving an (unknown) function y and some of its derivatives. The basic goal is to solve the equation, i.e., to determine which function or functions satisfy the equation. Differential equations come in several types, and our techniques for solving them will differ depending on the type.
Ordinary vs. partial: If y is a function of only one variable t, then our differential equation will involve only derivatives w.r.t. t, and we will call the equation an ordinary differential equation. If y is a function of more than one variable, then our differential equation will involve partial derivatives, and we will call it a partial differential equation. We will deal almost exclusively with ordinary differential equations in this class.
Systems: Sometimes the rates of change of several functions are interrelated, as with the populations of a predator y(t) and its prey x(t), where x' = ax - αxy and y' = γxy - cy . We call this a system of differential equations, and its solution would involve finding both x(t) and y(t).
Order: Techniques for solving differential equations differ depending upon how many derivatives of our unknown function are involved. The order of a differential equation is the order of the highest derivative appearing in the equation. The Implicit Function Theorem tells us that we can rewrite our equation so that it equates the highest order derivative with an expression involving lower order terms:
y^{(n)} = f(t, y, y', ..., y^{(n-1)}) .
Linear vs. nonlinear: A differential equation is linear if it can be written as
y^{(n)} = F(t, y, y', ..., y^{(n-1)})
(i.e., the function F is linear in the variables y, y', ..., y^{(n-1)}, although it need not be linear in t). A differential equation is nonlinear if it isn't linear! E.g.,
y'' + y^{2} = 0
is nonlinear, while
y'' + t^{2}y = sin(t)
is linear.
Solving a differential equation means determining which function or functions satisfy the equation. Our solutions come in two flavors: explicit solutions y = y(t), which provide a function of t satisfying the equation, and implicit solutions, which provide an equation g(y,t) = 0 which any explicit solution would have to satisfy. The idea is that we can treat g(y,t) = 0 as implicitly defining y as a function of t; given a specific value t = c for t, we solve (perhaps numerically) g(y,c) = 0 for y to determine the value of the solution to the differential equation at c.
In general, a differential equation y' = f(t,y) will have many solutions; but typically one particular solution can be specified by requiring that one additional condition be met: that y take a specific value y_{0} at a specific point t_{0}. If we think of the time t_{0} as the time at which we "start" our solution, then we call the pair of equations
y' = f(t,y) ,  y(t_{0}) = y_{0}
an initial value problem (or IVP). There is a general result which gives conditions guaranteeing that an IVP has a solution:
If y' = f(t,y) is a differential equation with both f and ∂f/∂y continuous for a < t < b and α < y < β, and t_{0} ∈ (a,b) and y_{0} ∈ (α,β), then for some h > 0, the initial value problem
y' = f(t,y) ,  y(t_{0}) = y_{0}
has a unique solution for t ∈ (t_{0}-h, t_{0}+h) .
In general, however, the size of the interval where we can guarantee existence (and uniqueness) can be very small, and often depends on the choice of initial value! For example, for the equation
y' = -y^{2}
the right-hand side is continuous everywhere (as is the partial derivative), but the interval we can choose for the solutions y = 1/(t+c) depends on c, which will depend on the initial condition! And it can never be chosen to be the entire real line.
Failure to satisfy the hypotheses of the result can easily kill both existence and uniqueness. For example, the equation
y' = y^{1/3}
has many solutions with the initial condition y(0) = 0, such as y = 0 and y = (2t/3)^{3/2} .
In many cases, especially for first order differential equations, we can `see' what a solution should look like without actually finding the solution. For first order equations, y' = f(t,y), a solution y(t) will satisfy y'(t) = f(t,y(t)), and so we can think of f(t,y) as giving the slope of the tangent line to the graph of y(t) at the point (t,y(t)). But since the function f is already known, we can draw small line segments at `every' point of the ty-plane with slope f(t,y) at the point (t,y); this is called the direction field for our differential equation. A solution to our differential equation is simply a function whose graph is tangent to each of these line segments at every point along the graph. Thinking of the direction field as a velocity vector field (always pointing to the right), our solution is then the path of a particle being pushed along by the velocity vector field. From this point of view it is not hard to believe that every (first order ordinary) differential equation has a solution, in fact many solutions; you just drop a particle in and watch where it goes. Where you drop it is important (it changes where it goes), which really is what gives rise to the notion of an initial value problem; we seek to find the specific solution with the additional initial value y(t_{0}) = y_{0}.
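The direction field can be sampled numerically; the sketch below (with an illustrative equation y' = t - y and a tiny grid, neither taken from the notes) just records the slope f(t,y) that the little segment at each grid point would have:

```python
# A minimal sketch of sampling a direction field for y' = f(t, y).
# The example f and the grid are illustrative choices, not from the notes.
def direction_field(f, t_vals, y_vals):
    """Map each grid point (t, y) to the slope f(t, y) of the segment there."""
    return {(t, y): f(t, y) for t in t_vals for y in y_vals}

# Example: y' = t - y.  A plotting routine would draw a short segment of
# this slope at each point of the ty-plane.
field = direction_field(lambda t, y: t - y, [0, 1, 2], [-1, 0, 1])
```

A solution curve threads through this grid, tangent to the stored slope at every point it passes.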
A differential equation is called autonomous if the function f(t,y) is really a function f(y) only of the variable y. We will learn how to solve such equations below; but we can learn a lot about the solutions to such an equation simply by understanding the graph of f(y) .
One feature of the solutions is that we can translate in time and get another solution; if y(t) is a solution to y' = f(y), then so is z(t) = y(t+c) for any constant c, as can be verified by plugging z into the differential equation. This can also be verified geometrically, using the direction field approach. For an autonomous equation, the slope of the direction field is always the same along horizontal lines (since it depends on y, not t), and so if we pick up a solution curve, tangent to the direction field, and translate it in the horizontal direction, it will still be everywhere tangent to the direction field, and so is also a solution.
The key to understanding solutions to such equations y' = f(y) is to find equilibrium solutions, that is, solutions y = constant = c . Such solutions have derivative 0, and so for such solutions we must have f(c) = 0. The basic idea is that these equilibrium solutions tell us a great deal about the behavior of every solution to the differential equation.
If the function f(y) is continuous, then between the zeroes of f (i.e., the equilibrium solutions of the differential equation) f always has the same sign, and so for the solutions, y' has the same sign, so y(t) is either always increasing or always decreasing. It cannot cross the equilibrium solutions, since this would violate the uniqueness of solutions to our differential equation. (Here we assume that the derivative of f is also continuous.) If a solution curve becomes asymptotic to a horizontal line, that line must be an equilibrium solution, because the tangent lines along our solution must become horizontal, i.e., f(y(t)) must approach 0, and so the limiting value of y(t) is a zero of f.
Therefore, the structure of the solutions is very simple; between consecutive equilibrium solutions, the solutions increase or decrease monotonically from one equilibrium to the other. This allows us to classify equilibrium solutions as one of three kinds: stable equilibria, where nearby solutions all converge back to the equilibrium, unstable equilibria, where nearby solutions all diverge away from the equilibrium, and semistable equilibria or nodes, where on one side the solutions converge back, and on the other they diverge away.
The easiest way to assemble this data is to plot the roots of f on a number line (called the phase line), and then determine the sign of f in the intervals in between. Where it is positive, solutions move to the right (i.e., up), while where it is negative they move left. Marking these as arrows, a stable equilibrium has arrows on both sides pointing towards it, and an unstable equilibrium has both arrows pointing away.
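This arrow test can be sketched directly in code; the logistic-style example f(y) = y(1 - y), with equilibria at 0 and 1, is an illustrative choice, not from the notes:

```python
def classify_equilibrium(f, c, eps=1e-6):
    """Classify the equilibrium y = c of y' = f(y) from the sign of f
    just below and just above c (the two phase-line arrows)."""
    left, right = f(c - eps), f(c + eps)
    if left > 0 and right < 0:
        return "stable"      # both arrows point toward c
    if left < 0 and right > 0:
        return "unstable"    # both arrows point away from c
    return "semistable"      # the arrows agree in direction

# Logistic-style example f(y) = y(1 - y): equilibria at y = 0 and y = 1.
f = lambda y: y * (1 - y)
```

Here f is negative below 0, positive between 0 and 1, and negative above 1, so y = 0 is unstable and y = 1 is stable.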
Most first order equations cannot be solved by the methods we will present here; the function f(t,y) is too complicated. For such equations, the best we can often do is to approximate the solutions, using numerical techniques. One method is the tangent line method, also known as Euler's method. The idea is that our differential equation y' = f(t,y) tells us the slope of the tangent line at every point of our solution, and the tangent line can be used to approximate the graph of a function, at least close to the point of tangency. In other words, for a solution to our differential equation,
y(t) ≈ y(t_{0}) + f(t_{0}, y(t_{0}))(t - t_{0})
for t - t_{0} small. If we wish to approximate y(t) for a value of t far away from our initial value t_{0}, we use the above idea in several steps. We cut up the interval into n pieces of length h (called the stepsize), and then set
t_{k+1} = t_{k} + h ,  y_{k+1} = y_{k} + h f(t_{k}, y_{k}) ,  k = 0, 1, ..., n-1,
and continue until we reach y_{n}, which will be our approximation to y(t) = y(t_{n}) . Each step can be thought of as a midcourse correction, using information about the direction field at each stage to determine which way the solution is tending.
Calculus teaches us that at each stage the error introduced is approximately proportional to the square of h. So with a stepsize half as large, we will require twice as many steps, but each introduces an error only about one-fourth as large, so overall we get an error only half as large. This leads us to conclude that as the stepsize goes to 0, the error between our approximate solution y_{n} and y(t_{n}) goes to 0.
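The stepping procedure can be written in a few lines of Python; the test equation y' = y with y(0) = 1 (exact solution e^t) is an illustrative choice, not from the notes:

```python
import math

def euler(f, t0, y0, t_end, n):
    """Approximate the solution of y' = f(t, y), y(t0) = y0 at t_end
    using n steps of the tangent-line (Euler) method."""
    h = (t_end - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = y + h * f(t, y)   # follow the tangent line for one step
        t = t + h
    return y

# y' = y, y(0) = 1 has exact solution e^t; compare errors as h is halved.
err_n  = abs(euler(lambda t, y: y, 0.0, 1.0, 1.0, 100) - math.e)
err_2n = abs(euler(lambda t, y: y, 0.0, 1.0, 1.0, 200) - math.e)
```

Halving the stepsize here roughly halves the final error, matching the estimate above.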
Chapter 2: First Order Differential Equations
There is a class of first order equations for which we can readily find solutions by integration; these are the separable equations. A differential equation is separable if it can be written as
f(y) y' = g(t) .
This allows us to `separate the variables' and integrate with respect to dy and dt to get a solution:
∫ f(y) dy = ∫ g(t) dt .
In the end, our solution looks like F(y) = G(t) + c, so it defines y implicitly as a function of t , rather than explicitly. In some cases we can invert F to get an explicit solution, but often we cannot.
For example, the separable equation y' = ty^{2} , y(1) = 2 has solution
∫ dy/y^{2} = ∫ t dt ,
so solving the integrals we get -(1/y) = (t^{2}/2)+c, or y = -2/(t^{2}+2c) ; setting y = 2 when t = 1 gives c = -1 .
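With c = -1 the explicit solution is y = 2/(2 - t^{2}); a quick numerical sanity check (an illustrative sketch, not part of the notes) confirms that it satisfies the equation and the initial condition:

```python
# Check that y(t) = 2/(2 - t^2) solves y' = t*y^2 with y(1) = 2,
# using a central finite-difference approximation to the derivative.
def y(t):
    return 2.0 / (2.0 - t * t)

h = 1e-6
t = 1.2
approx_deriv = (y(t + h) - y(t - h)) / (2 * h)   # central difference
assert abs(approx_deriv - t * y(t) ** 2) < 1e-4   # matches y' = t*y^2
assert y(1.0) == 2.0                              # initial condition
```

Note that the formula blows up at t = √2, illustrating again that solutions need not exist on the whole real line.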
Perhaps the most straightforward sort of differential equation to solve is the first order linear ordinary differential equation
a(t) y' + b(t) y = c(t) .
We will typically (following tradition) write such equations in standard form as
y' + p(t) y = g(t) .   (**)
For example, near the earth and in the presence of air resistance, the velocity v of a falling object (taking the positive direction to be down) obeys the differential equation v' = g - kv, where g and k are (positive) constants.
There is a general technique for solving such equations, by first solving the associated homogeneous equation
y' + p(t) y = 0 .
This equation is separable, and so we can use the techniques of the previous section to find that
y_{h}(t) = e^{-∫ p(t) dt}
solves the homogeneous equation. Then we can employ a technique known as variation of parameters to solve the original equation: if we write
y(t) = A(t) y_{h}(t)
and plug into (**), then (since y_{h}' + p y_{h} = 0) we find that A(t) must satisfy
A'(t) = g(t)/y_{h}(t) , i.e., A(t) = ∫ g(t)/y_{h}(t) dt + c .
Putting this all together, we find that the solutions to (**) are given by
y(t) = y_{h}(t) ( ∫ g(t)/y_{h}(t) dt + c ) = e^{-∫ p(t) dt} ( ∫ e^{∫ p(t) dt} g(t) dt + c ) .
For example, the differential equation ty' - y = t^{2}+1 , after being rewritten in standard form as y' - (1/t)y = t+(1/t), has homogeneous solution
y_{h}(t) = e^{∫ (1/t) dt} = t ,
so we have
A(t) = ∫ (t + 1/t)/t dt = ∫ (1 + 1/t^{2}) dt = t - (1/t) + c ,
and so our solutions are y = t^{2}-1+ct, where c is a constant.
But what is c ? Our solution is actually a family of solutions; a particular solution (i.e., a particular value for c) can be found from an initial value y(t_{0}) = y_{0}. For example, if we wished to solve the initial value problem
ty' - y = t^{2}+1 ,  y(2) = 5 ,
we can plug t = 2 and y = 5 into our general solution to obtain c = 1 .
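We can check this directly (an illustrative verification, not part of the notes): with c = 1 the solution y = t^{2} - 1 + t satisfies both the equation and the initial condition:

```python
# Verify that y = t^2 - 1 + c*t with c = 1 solves t*y' - y = t^2 + 1
# and the initial condition y(2) = 5.
def y(t, c=1.0):
    return t * t - 1.0 + c * t

def dy(t, c=1.0):
    return 2.0 * t + c        # exact derivative of the formula above

for t in (0.5, 1.0, 2.0, 3.0):
    # plug into the left side and compare with the right side t^2 + 1
    assert abs(t * dy(t) - y(t) - (t * t + 1.0)) < 1e-12
assert y(2.0) == 5.0          # the initial condition picks out c = 1
```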
Chapter 3: Mathematical Models
In many instances, the rate of change of a quantity can be best analysed by treating the factors that make the quantity go up separately from those that make it go down; each can often be easily understood in isolation. We can then build a differential equation modeling the behavior of the quantity y = y(t) as
y' = (rate in) - (rate out) .
As a basic example, we have mixing problems. The basic setup has a solution of a known concentration mixing at a known rate with a solution in a vat, while the mixed solution is poured off at a known rate. The problem is to find the function which gives the concentration in the vat at time t. It turns out that it is much easier to find a differential equation which describes the amount of solute (e.g., salt) in the solution (e.g., water), rather than the concentration.
If the concentration pouring in is A, at a rate of N, while the solution is pouring out at rate M with concentration A(t) = x(t)/V(t), then if the initial volume is V_{0}, we can compute V(t) = V_{0}+(N-M)t . The change in the amount x(t) of solute can be computed as (rate flowing in) - (rate flowing out), which is
x' = AN - M x/(V_{0}+(N-M)t) .
This is a linear equation, and so we can solve it using our techniques above.
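As a sketch (the rates and concentrations below are invented for illustration), we can also integrate the mixing equation numerically with small Euler steps rather than solving it in closed form:

```python
# Sketch of the mixing model x' = A*N - M*x/(V0 + (N - M)*t), integrated
# with small Euler steps; all the numbers here are illustrative made-up data.
def mix(A, N, M, V0, x0, t_end, steps=100_000):
    h = t_end / steps
    t, x = 0.0, x0
    for _ in range(steps):
        x += h * (A * N - M * x / (V0 + (N - M) * t))
        t += h
    return x

# With equal in/out rates (N = M) the volume stays V0, and the concentration
# x/V0 tends toward the incoming concentration A in the long run.
x = mix(A=2.0, N=3.0, M=3.0, V0=100.0, x0=0.0, t_end=500.0)
assert abs(x / 100.0 - 2.0) < 0.01
```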
We can also deal with a succession of mixing problems, the output of one becoming the input of the next, by treating them one at a time; the only change in the setup above is that the incoming concentration for the next vat (to solve for x_{i+1}(t)) would be the concentration x_{i}(t)/V_{i}(t) found by solving the equation for the previous vat.
Another situation where this kind of analysis proves successful is in modeling population growth. The idea is that if y is the population at time t, then
y' = (birth rate) - (death rate) .
Typically, the birth rate is proportional to the population, i.e., is ry, while the death rate is either modeled as being proportional to the population (Malthusian model) or is a sum (logistic model): one part is proportional to the population (death by "natural causes"), the other is proportional to the square of the population (this typically represents contact between individuals, arising from competition for food, overcrowding, etc.), i.e., is ky^{2} . Put together, and combining the two terms proportional to population, we obtain
y' = ry (Malthusian) or y' = ry - ky^{2} = y(r - ky) (logistic) .
Both equations are autonomous and separable, and so we can use phase lines to understand their long-term behavior, as well as finding explicit solutions (using partial fractions, for the logistic equation).
Newton's Law of Cooling: This states that the rate of change of the temperature T(t) of an object is proportional to the difference between its temperature and the ambient temperature A of the air around it. The constant of proportionality depends upon the particular object (and the medium, e.g., air or water) it is in. In other words,
T' = -k(T - A) .
Since a cold object will warm up, and a warm object will cool down, this means that the constant k should be positive. Writing the equation as
T' + kT = kA ,
we find the solution (after solving the IVP)
T(t) = A + (T_{0} - A) e^{-kt} .
Typically, k is not given, but can be determined by knowing the temperature at some other time t_{1}, by plugging into the equation above and solving for k.
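As a sketch (the temperatures and times below are invented for illustration), solving for k from one extra temperature reading looks like this:

```python
import math

# From T(t) = A + (T0 - A)*e^{-k t}, a known later temperature T1 at time t1
# determines k by taking a logarithm; the numbers below are illustrative.
def cooling_k(A, T0, t1, T1):
    """Solve T1 = A + (T0 - A)*e^{-k*t1} for k."""
    return -math.log((T1 - A) / (T0 - A)) / t1

def T(t, A, T0, k):
    return A + (T0 - A) * math.exp(-k * t)

# Object starts at 90 degrees in a 20 degree room and reads 60 at t = 10.
k = cooling_k(A=20.0, T0=90.0, t1=10.0, T1=60.0)
assert abs(T(10.0, 20.0, 90.0, k) - 60.0) < 1e-9  # reproduces the reading
```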
If we wish to model the motion of an object, whose position at time t is given by x(t), then (setting v(t) = x'(t)) Newton's Second Law of Motion tells us that
m v' = F = (the sum of the forces acting on the object) .
When we can understand these forces, in terms of t and v, we can build a first order differential equation, to which we can then bring our techniques to bear. Typical forces include:
gravity: F_{g} = mg or F_{g} = -mg, depending upon whether we think of the positive direction as down (giving +) or up (giving -). g = 9.8 m/sec^{2} = 32 ft/sec^{2} (approximately)
air resistance: this is typically modeled either as F_{a} = -kv (for smallish velocities) or F_{a} = -kv^{2} (for large velocities). It always acts to push our velocity towards 0, hence the - sign.
external force: F_{e} = g(t) ; this represents a force that "follows along" the object and tries to push it in a direction that is "preprogrammed" in time.
With these sorts of forces, we get a general equation
m v' = ±mg - kv + g(t) ,
which we can solve by the methods we have developed. For example, ignoring external forces and assuming the positive direction is "down", we have the initial value problem
m v' = mg - kv ,  v(0) = v_{0} ,
with solution
v(t) = (mg/k) + (v_{0} - (mg/k)) e^{-kt/m} .
As t → ∞, v(t) → mg/k = the terminal velocity.
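The approach to terminal velocity can be checked numerically (a sketch with invented values of m, g, and k, taking the positive direction as "down"):

```python
import math

# v(t) = mg/k + (v0 - mg/k)*e^{-k t/m}: as t grows, v approaches the
# terminal velocity mg/k.  The constants below are illustrative.
def v(t, m, g, k, v0):
    vt = m * g / k                      # terminal velocity
    return vt + (v0 - vt) * math.exp(-k * t / m)

m, g, k = 2.0, 9.8, 0.5
# dropped from rest: by t = 100 the exponential term is negligible
assert abs(v(100.0, m, g, k, 0.0) - m * g / k) < 1e-6
```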
Chapter 4: Linear Second Order Equations
Basic object of study: second order linear differential equations. Standard form:
y'' + p(t) y' + q(t) y = g(t) .   (*)
Initial value problem: we need two initial conditions
y(t_{0}) = y_{0} ,  y'(t_{0}) = y_{0}' .
Basic existence and uniqueness: if p(t), q(t), and g(t) are continuous on an interval around t_{0}, then any initial value problem has a unique solution on that interval. Our basic goal: find the solution!
(*) is called homogeneous if g(t) = 0 ; otherwise it is inhomogeneous. (*) is an equation with constant coefficients if p(t) and q(t) are constants.
Our main new technique for exploring these equations will be operator notation. We write L[y] = y'' + p(t)y' + q(t)y (this is called a linear operator); then a solution to (*) is a function y with L[y] = g(t). Some familiar linear operators: D^{n}[y] = y^{(n)} (the nth derivative operator). The operator is called linear because
L[c_{1}y_{1}+c_{2}y_{2}] = c_{1}L[y_{1}]+c_{2}L[y_{2}] for any constants c_{1}, c_{2} .
For a linear differential equation, L[c_{1}y_{1}+c_{2}y_{2}] = c_{1}L[y_{1}]+c_{2}L[y_{2}], and so if y_{1} and y_{2} are both solutions to L[y] = 0 then so is c_{1}y_{1}+c_{2}y_{2} . c_{1}y_{1}+c_{2}y_{2} is called a linear combination of y_{1} and y_{2}. This fact is called the Principle of Superposition: more generally, for a linear operator, if L[y_{1}] = g_{1}(t) and L[y_{2}] = g_{2}(t), then L[y_{1}+y_{2}] = g_{1}(t)+g_{2}(t) .
Basic idea: with (the right) two solutions y_{1}, y_{2} to a homogeneous linear equation
y'' + p(t) y' + q(t) y = 0 ,   (***)
we can solve any initial value problem, by choosing the right linear combination: we need to solve
c_{1}y_{1}(t_{0}) + c_{2}y_{2}(t_{0}) = y_{0} ,  c_{1}y_{1}'(t_{0}) + c_{2}y_{2}'(t_{0}) = y_{0}'
for the constants c_{1} and c_{2}; then y = c_{1}y_{1}+c_{2}y_{2} is our solution. This we can do directly, as a pair of linear equations, by solving one equation for one of the constants, and plugging into the other equation, or we can use the formulas
c_{1} = ( y_{0} y_{2}'(t_{0}) - y_{0}' y_{2}(t_{0}) ) / W ,  c_{2} = ( y_{0}' y_{1}(t_{0}) - y_{0} y_{1}'(t_{0}) ) / W ,
where
W = y_{1}(t_{0}) y_{2}'(t_{0}) - y_{2}(t_{0}) y_{1}'(t_{0}) .
W is called the Wronskian (determinant) of y_{1} and y_{2} at t_{0} . The Wronskian is closely related to the concept of linear independence of a collection y_{1}, ..., y_{n} of functions; such a collection is linearly independent if the only linear combination c_{1}y_{1}+ ... + c_{n}y_{n} which is equal to the 0 function is the one with c_{1} = ... = c_{n} = 0 .
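The Wronskian formulas for c_{1} and c_{2} can be sketched in code; the fundamental set y_{1} = cos t, y_{2} = sin t for y'' + y = 0 is an illustrative choice, not from this section of the notes:

```python
import math

# Solve c1*y1(t0) + c2*y2(t0) = y0 and c1*y1'(t0) + c2*y2'(t0) = yp0
# using the Wronskian (Cramer's rule) formulas.
def coefficients(y1, dy1, y2, dy2, t0, y0, yp0):
    W = y1(t0) * dy2(t0) - y2(t0) * dy1(t0)     # Wronskian at t0
    c1 = (y0 * dy2(t0) - yp0 * y2(t0)) / W
    c2 = (yp0 * y1(t0) - y0 * dy1(t0)) / W
    return c1, c2

# y'' + y = 0 with fundamental solutions y1 = cos t, y2 = sin t;
# initial conditions y(0) = 2, y'(0) = 3 give y = 2 cos t + 3 sin t.
c1, c2 = coefficients(math.cos, lambda t: -math.sin(t),
                      math.sin, math.cos, 0.0, 2.0, 3.0)
```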
Two functions y_{1} and y_{2} are linearly independent if their Wronskian is nonzero at some point; for a pair of solutions to (***), it turns out that the Wronskian is always equal to a constant multiple of
e^{-∫ p(t) dt} ,
and so is either always 0 or never 0. We call a pair of linearly independent solutions to (***) a pair of fundamental solutions. By our above discussion, we can solve any initial value problem for (***) as a linear combination of fundamental solutions y_{1} and y_{2}. By our existence and uniqueness result, this gives us:
If y_{1} and y_{2} are a fundamental set of solutions to the differential equation (***), then any solution to (***) can be expressed as a linear combination c_{1}y_{1}+c_{2}y_{2} of y_{1} and y_{2}.