Chapter 3: Second Order Linear Equations
Newton: m u'' = sum of the forces acting on the object. These include:
gravity: F_{g} = mg
the spring: F_{s} = -k(L+u) (Hooke's Law)
friction: F_{f} = -γu'
a possible external force: F_{e} = f(t)
Equilibrium: gravity and spring forces balance out; mg - kL = 0 (use this to compute k!)
So: m u'' = -ku - γu' + f(t), i.e., m u'' + γu' + ku = f(t).
Some special cases:
No friction (γ = 0, i.e., undamped) and no external force (free vibration); solutions are
u = c_{1}cos(ω_{0}t) + c_{2}sin(ω_{0}t) = C cos(ω_{0}t - δ),
where ω_{0} = √(k/m) = the natural frequency of the system, C = amplitude of the vibration, δ (= `delay') = phase angle:
C = √(c_{1}^{2}+c_{2}^{2}), tan(δ) = c_{2}/c_{1}.
T = 2π/ω_{0} = period of the vibration. Note: a stiffer spring (= larger k) gives higher frequency, shorter period. Larger m gives the opposite.
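These frequency/period relationships are easy to sanity-check numerically; the masses and spring constants below are arbitrary illustration values:

```python
import math

def natural_frequency(m, k):
    """omega_0 = sqrt(k/m), the natural frequency of the spring-mass system."""
    return math.sqrt(k / m)

def period(m, k):
    """T = 2*pi/omega_0, the period of the free vibration."""
    return 2 * math.pi / natural_frequency(m, k)

# Stiffer spring (larger k): higher frequency, shorter period.
assert period(1.0, 4.0) < period(1.0, 1.0)
# Larger mass m: the opposite.
assert period(4.0, 1.0) > period(1.0, 1.0)
```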
Damped free vibrations; solutions depend on the discriminant γ^{2} - 4km:
γ^{2} > 4km (overdamped); fundamental solutions are e^{r_{1}t}, e^{r_{2}t}, with r_{1}, r_{2} < 0
γ^{2} = 4km (critically damped); fundamental solutions are e^{rt}, te^{rt}, with r < 0
γ^{2} < 4km (underdamped); fundamental solutions are e^{rt}cos(ωt), e^{rt}sin(ωt), with r < 0, ω = √(ω_{0}^{2} - (γ/2m)^{2})
In each case, solutions tend to 0 as t → ∞. In the first two cases, the solution has at most one local max or min; in the third case, note that the frequency ω of the periodic part of the motion is smaller than the natural frequency ω_{0}. T = 2π/ω is called the quasiperiod of the vibration.
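The classification by discriminant can be sketched as a small helper (the parameter values in the checks are arbitrary):

```python
import math

def classify_damping(m, gamma, k):
    """Classify a damped free vibration by the sign of gamma^2 - 4*k*m."""
    disc = gamma**2 - 4 * k * m
    if disc > 0:
        return "overdamped"
    if disc == 0:
        return "critically damped"
    return "underdamped"

def quasi_frequency(m, gamma, k):
    """omega = sqrt(omega_0^2 - (gamma/(2m))^2) for the underdamped case."""
    w0_squared = k / m
    return math.sqrt(w0_squared - (gamma / (2 * m))**2)

assert classify_damping(1.0, 3.0, 1.0) == "overdamped"         # 9 > 4
assert classify_damping(1.0, 2.0, 1.0) == "critically damped"  # 4 = 4
# The quasifrequency always sits below the natural frequency (here w0 = 1):
assert quasi_frequency(1.0, 0.5, 1.0) < 1.0
```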
Undamped forced vibrations, with forcing term f(t) = F_{0}cos(ωt): if ω ≠ ω_{0}, then (using undetermined coefficients) the solution is
u = c_{1}cos(ω_{0}t) + c_{2}sin(ω_{0}t) + (F_{0}/(m(ω_{0}^{2} - ω^{2})))cos(ωt).
This is the sum of two vibrations with different frequencies.
In the special case u(0) = 0, u'(0) = 0 (starting at rest), we can further simplify:
u = (2F_{0}/(m(ω_{0}^{2} - ω^{2}))) sin((ω_{0} - ω)t/2) sin((ω_{0} + ω)t/2).
When ω is close to ω_{0}, this illustrates the concept of beats: we have a high frequency vibration (the second sine) whose amplitude is a low frequency vibration (the first sine). The mass essentially vibrates rapidly between two sine curves.
When ω = ω_{0}, our forcing term is a solution to the homogeneous equation, so the general solution, instead, is
u = c_{1}cos(ω_{0}t) + c_{2}sin(ω_{0}t) + (F_{0}/(2mω_{0})) t sin(ω_{0}t).
In this case, as t → ∞, the amplitude of the second oscillation goes to ∞; the solution, essentially, resonates with the forcing term. (Basically, you are `feeding' the system at its natural frequency.) This illustrates the phenomenon of resonance.
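As a sanity check on the resonance case, we can verify numerically that the particular solution (F_{0}/(2mω_{0})) t sin(ω_{0}t) really satisfies m u'' + ku = F_{0}cos(ω_{0}t); the parameter values below are arbitrary:

```python
import math

m, k, F0 = 1.0, 4.0, 1.0          # arbitrary illustration values (so w0 = 2)
w0 = math.sqrt(k / m)

def u_p(t):
    """Particular solution (F0/(2*m*w0)) * t * sin(w0*t) at resonance w = w0."""
    return F0 / (2 * m * w0) * t * math.sin(w0 * t)

def residual(t, h=1e-4):
    """m*u'' + k*u - F0*cos(w0*t); u'' approximated by a central difference."""
    upp = (u_p(t + h) - 2 * u_p(t) + u_p(t - h)) / h**2
    return m * upp + k * u_p(t) - F0 * math.cos(w0 * t)

# The residual is numerically zero, so u_p does solve the forced equation,
# even though its amplitude grows linearly in t.
assert all(abs(residual(t)) < 1e-4 for t in (0.3, 1.7, 5.0, 12.0))
```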
Finally, if we include friction (γ ≠ 0), then the solution turns out to be
u = c_{1}u_{1}(t) + c_{2}u_{2}(t) + C cos(ωt - δ),
where u_{1}, u_{2} are the fundamental solutions of the homogeneous (damped, free) equation, and
C = F_{0}/√(m^{2}(ω_{0}^{2} - ω^{2})^{2} + γ^{2}ω^{2}), tan(δ) = γω/(m(ω_{0}^{2} - ω^{2})).
But since γ > 0, the homogeneous solutions will tend to 0 as t → ∞; they are called the transient solution. (Basically, they just allow us to solve any initial value problem. We can then conclude that any energy given to the system is dissipated over time, leaving only the energy imparted by the forcing term to drive the system along.) The other term is called the forced response, or steady-state solution.
Note that the amplitude C of the forced response goes to 0 as the driving frequency ω goes to ∞. Notice also that tan(δ) can never be 0, so the forced response is always out of phase with the forcing term. When we drive the system at its natural frequency ω_{0}, the response is 90 degrees out of phase; as ω → ∞, the response approaches being 180 degrees out of phase, i.e., the motion of the mass is almost exactly opposite to the force being externally applied!
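Assuming, as above, a forcing term F_{0}cos(ωt), the amplitude and phase behavior of the forced response can be sketched numerically (parameter values arbitrary):

```python
import math

m, gamma, k, F0 = 1.0, 0.5, 4.0, 1.0   # arbitrary illustration values
w0 = math.sqrt(k / m)                   # natural frequency (= 2 here)

def amplitude(w):
    """C(w) = F0 / sqrt(m^2 (w0^2 - w^2)^2 + gamma^2 w^2)."""
    return F0 / math.sqrt(m**2 * (w0**2 - w**2)**2 + gamma**2 * w**2)

def phase(w):
    """Phase lag delta of the forced response, taken in (0, pi)."""
    return math.atan2(gamma * w, m * (w0**2 - w**2))

# The amplitude dies off as the driving frequency grows...
assert amplitude(100.0) < amplitude(10.0) < amplitude(w0)
# ...driving at the natural frequency is 90 degrees out of phase...
assert abs(phase(w0) - math.pi / 2) < 1e-12
# ...and for large w the phase approaches 180 degrees (pi radians).
assert phase(100.0) > 3.0
```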
Chapter 7: Systems of first order linear equations
Ex: multiple spring-mass system:
u_{1} = position of first mass, u_{2} = position of second mass; then we find, by analyzing the forces involved (here for the typical configuration of two masses joined by three springs, with the outer springs attached to walls), that
m_{1}u_{1}'' = -(k_{1} + k_{2})u_{1} + k_{2}u_{2} + F_{1}(t)
m_{2}u_{2}'' = k_{2}u_{1} - (k_{2} + k_{3})u_{2} + F_{2}(t),
where the symbols have similar meanings to our ordinary situation. The appropriate notion of an initial value problem would be to know the initial positions and initial velocities of both masses at a fixed time.
Ex: mixing with multiple tanks:
Tank 1 flows to tank 2 with rate r_{1}, 2 to 3 with rate r_{2}, and 3 to 1 with rate r_{3}; then if the u's are the amounts of solute in each tank, and the V's are the volumes in each, we have
u_{1}' = (r_{3}/V_{3})u_{3} - (r_{1}/V_{1})u_{1}
u_{2}' = (r_{1}/V_{1})u_{1} - (r_{2}/V_{2})u_{2}
u_{3}' = (r_{2}/V_{2})u_{2} - (r_{3}/V_{3})u_{3}
Here the appropriate notion of initial value problem is to know the values of each of u_{1}, u_{2}, u_{3} at a fixed time.
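For concreteness, here is the coefficient matrix of this three-tank system for one hypothetical choice of rates and volumes; the columns summing to zero reflects conservation of the total solute:

```python
# Coefficient matrix of the three-tank loop, with arbitrary (hypothetical)
# flow rates r_i and volumes V_i; then u' = A u for the vector u of solute
# amounts (u1, u2, u3).
r1, r2, r3 = 2.0, 3.0, 4.0
V1, V2, V3 = 10.0, 20.0, 40.0

A = [[-r1 / V1,       0.0,   r3 / V3],
     [ r1 / V1,  -r2 / V2,       0.0],
     [      0.0,  r2 / V2,  -r3 / V3]]

# Each column of A sums to zero, so (u1 + u2 + u3)' = 0: the total amount
# of solute is conserved as it circulates around the loop.
for j in range(3):
    assert abs(sum(A[i][j] for i in range(3))) < 1e-12
```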
An important class of examples: any ordinary differential equation
y^{(n)} = F(t, y, y', ..., y^{(n-1)})
can be turned into a system of first order equations by setting
x_{1} = y, x_{2} = y', ..., x_{n} = y^{(n-1)},
giving the system of equations
x_{1}' = x_{2}, x_{2}' = x_{3}, ..., x_{n}' = F(t, x_{1}, ..., x_{n}).
A system of first order linear equations with constant coefficients, such as
x_{1}' = a x_{1} + b x_{2}
x_{2}' = c x_{1} + d x_{2},
can be expressed as
u' = Au,
where u = (x_{1}, x_{2}) is the vector of unknown functions and A is the matrix of coefficients (here the 2×2 matrix with rows (a, b) and (c, d)). The product Av of a matrix and a vector is the vector whose ith entry is the dot product of the ith row of A with v.
More generally, we can multiply matrices, getting another matrix. Using the notation A = (a_{ij}), where a_{ij} is the entry in the ith row and jth column of A, we write
AB = (c_{ij}), where c_{ij} = the dot product of the ith row of A and the jth column of B .
Amazingly, this product has a lot of the properties we are used to: if we add matrices by adding their ijth entries (to get the ijth entry of the sum), and multiply by a scalar c by multiplying each entry by c, then we have:
A(BC) = (AB)C, A(B + C) = AB + AC, (A + B)C = AC + BC, c(AB) = (cA)B = A(cB).
However, what we do NOT have is AB = BA; usually, matrix multiplication does NOT commute! We will need the special matrix I = (δ_{ij}), where δ_{ij} = 1 if i = j, and 0 if i ≠ j. This matrix has the property that IA = AI = A for all matrices A, i.e., it acts like the number 1 under multiplication.
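A minimal implementation of this definition (the example matrices are arbitrary):

```python
def mat_mul(A, B):
    """(AB)_{ij} = dot product of the ith row of A with the jth column of B."""
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][t] * B[t][j] for t in range(m)) for j in range(p)]
            for i in range(n)]

A = [[1, 2],
     [3, 4]]
B = [[0, 1],
     [1, 0]]

# Matrix multiplication usually does NOT commute:
assert mat_mul(A, B) != mat_mul(B, A)

# The identity matrix acts like the number 1 under multiplication:
I = [[1, 0],
     [0, 1]]
assert mat_mul(I, A) == A and mat_mul(A, I) == A
```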
If A is a matrix whose entries are functions, A(t) = (a_{ij}(t)), then we can make sense of its derivative: A'(t) = (a_{ij}'(t)). Then we have the properties:
(A + B)'(t) = A'(t) + B'(t), (AB)'(t) = A'(t)B(t) + A(t)B'(t).
Note the order of multiplication in the second formula; it's important!
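The product rule can be checked numerically on a pair of arbitrary 2×2 matrix functions (chosen purely for illustration):

```python
import math

def A(t):
    """An arbitrary 2x2 matrix function, just for the check."""
    return [[t, t * t], [1.0, math.sin(t)]]

def B(t):
    return [[math.cos(t), 0.0], [t, 1.0]]

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def deriv(F, t, h=1e-6):
    """Entrywise central-difference derivative of a matrix function."""
    Fp, Fm = F(t + h), F(t - h)
    return [[(Fp[i][j] - Fm[i][j]) / (2 * h) for j in range(2)] for i in range(2)]

t = 0.7
lhs = deriv(lambda s: mul(A(s), B(s)), t)                  # (AB)'
rhs = add(mul(deriv(A, t), B(t)), mul(A(t), deriv(B, t)))  # A'B + AB'
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-5 for i in range(2) for j in range(2))
```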
To solve our systems of differential equations, we will want to find numbers k and nonzero vectors v with Av = kv. Such vectors are called eigenvectors, and their associated numbers k are called eigenvalues. Our approach to finding such vectors and numbers is to write the equation as
(A - kI)v = 0.     (*)
First we find the right values k. Linear algebra teaches us that (*) has a (nonzero vector) solution exactly when det(A - kI) = 0, where "det" stands for the determinant. We have already run into this concept; for a 2×2 matrix A with rows (a, b) and (c, d),
det(A) = ad - bc.
There are similar formulas for n×n matrices for any n, but we will focus on n = 2 to make our calculations simpler. If we compute det(A - kI), we find that (*) has a solution exactly when
k^{2} - (a + d)k + (ad - bc) = 0.
This is a quadratic polynomial, so it (usually) has two roots, r_{1}, r_{2}. Once we have the roots, we go back and solve (A - r_{i}I)v = 0, i.e.,
(a - r_{i})x_{1} + b x_{2} = 0
c x_{1} + (d - r_{i})x_{2} = 0.
It turns out that the second equation is always redundant (although this is really only true of 2×2 systems), and we can find v by choosing any convenient (nonzero) value for x_{1} or x_{2} and solving for the other.
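A sketch of the whole 2×2 eigenvalue/eigenvector procedure (the example matrix is arbitrary, and distinct real roots are assumed):

```python
import math

def eigen_2x2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]]: roots of k^2 - (a+d)k + (ad - bc)."""
    tr, det = a + d, a * d - b * c
    root = math.sqrt(tr**2 - 4 * det)   # assumes distinct real eigenvalues
    return (tr + root) / 2, (tr - root) / 2

def eigenvector(a, b, c, d, k):
    """Nonzero solution of (A - kI)v = 0; the second equation is redundant."""
    if b != 0:                          # (a - k)x1 + b*x2 = 0: pick x1 = 1
        return (1.0, (k - a) / b)
    if a - k != 0:                      # then x1 must be 0
        return (0.0, 1.0)
    return (1.0, 0.0)

# Example: A = [[1, 1], [4, 1]] has eigenvalues 3 and -1.
r1, r2 = eigen_2x2(1.0, 1.0, 4.0, 1.0)
v1 = eigenvector(1.0, 1.0, 4.0, 1.0, r1)
# Check A v1 = r1 v1:
assert abs(1.0 * v1[0] + 1.0 * v1[1] - r1 * v1[0]) < 1e-12
assert abs(4.0 * v1[0] + 1.0 * v1[1] - r1 * v1[1]) < 1e-12
```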
We now return to our system of differential equations
u' = Au,     (**)
where A is a matrix whose entries are all constants. Our basic procedure will be to guess that the solution is u = e^{kt}v_{0} for some vector (with constant entries) v_{0}. If we plug such a function into (**), we find that
k e^{kt}v_{0} = e^{kt}Av_{0},
and so we find that we need Av_{0} = kv_{0}. In other words, we need the eigenvalues of A, and their corresponding eigenvectors. This gives us two solutions to the system of equations, and using the Principle of Superposition:
if u_{1} and u_{2} solve u' = Au, then so does c_{1}u_{1} + c_{2}u_{2}
(which we can easily verify), we can take linear combinations of our solutions to get the general solution to the system of equations
u = c_{1}e^{r_{1}t}v_{1} + c_{2}e^{r_{2}t}v_{2},
where v_{1}, v_{2} are eigenvectors for the coefficient matrix A, with eigenvalues r_{1}, r_{2}.
To solve an initial value problem, we find the general solution, and then plug t_{0} into the solution and set the vector equal to our initial values (u_{1}(t_{0}),u_{2}(t_{0})) . This gives us a pair of linear equations to solve, which we do using our earlier techniques.
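A small worked initial value problem, using the arbitrary example matrix A with rows (1, 1) and (4, 1) (eigen-data computed by hand: r_{1} = 3 with v_{1} = (1, 2), r_{2} = -1 with v_{2} = (1, -2)); solving c_{1}v_{1} + c_{2}v_{2} = u(0) = (1, 0) gives c_{1} = c_{2} = 1/2:

```python
import math

# General solution u = c1*e^{3t}*(1, 2) + c2*e^{-t}*(1, -2) with c1 = c2 = 1/2.
def u(t):
    c1, c2 = 0.5, 0.5
    e1, e2 = math.exp(3 * t), math.exp(-t)
    return (c1 * e1 * 1 + c2 * e2 * 1,
            c1 * e1 * 2 + c2 * e2 * (-2))

# Check the initial condition...
assert u(0.0) == (1.0, 0.0)
# ...and that u' = Au (derivative by central difference):
h, t = 1e-6, 0.4
du = ((u(t + h)[0] - u(t - h)[0]) / (2 * h),
      (u(t + h)[1] - u(t - h)[1]) / (2 * h))
x, y = u(t)
assert abs(du[0] - (x + y)) < 1e-4 and abs(du[1] - (4 * x + y)) < 1e-4
```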
Each solution is a pair of functions u = (u_{1}(t), u_{2}(t)) which, if we think of them as giving x- and y-coordinates, describe a path in the plane. u' can then be interpreted as the velocity vector of this path, which is tangent to the path. Since we have a solution to our differential equation (**), u' is actually Au, which we can imagine computing for every point in the plane. Plotting each of these vectors Av with its tail at v gives us a vector field in the plane, which we call the direction field of the system of equations. The solutions to (**) are the paths whose velocity vectors are equal to this direction field (and in particular are tangent to the vector field). A picture of the direction field, together with a representative collection of solution curves, is called a phase portrait for the system of equations.
Our fundamental solutions u_{i} = e^{r_{i}t}v_{i} give very special solution curves; the y-coordinate is a (constant) multiple of the x-coordinate, so each parametrizes a straight line out from the origin. With two eigenvalues, we have two straight-line solutions to the system of equations. Every other solution will be curved (*actually, this isn't quite true; things are much different if one of the eigenvalues is 0; then every solution is a straight line (check it out!)*). We can understand the other solutions in terms of their behavior as t → ∞ and t → -∞.
We have two lines L_{1} and L_{2} coming from the eigenvalues r_{1} and r_{2}. If r_{1} > r_{2}, then as t → ∞, e^{r_{1}t}/e^{r_{2}t} = e^{(r_{1}-r_{2})t} → ∞, and so in any solution
u = c_{1}e^{r_{1}t}v_{1} + c_{2}e^{r_{2}t}v_{2} = e^{r_{1}t}(c_{1}v_{1} + c_{2}e^{(r_{2}-r_{1})t}v_{2})
the second term dies off, i.e., the solution will turn parallel to v_{1}. A similar argument shows that as t → -∞, the solutions will turn parallel to v_{2}.
The shape of the solutions also depends on the signs of the eigenvalues. If both are negative, every solution tends toward the origin as t → ∞, and heads to ∞ in the other direction. If one is positive and one negative, then the solutions (other than the straight-line ones) tend to ∞ in both directions. If both are positive, then every solution tends to ∞ as t → ∞, and heads toward the origin in the other direction.
All of this analysis rests on the assumption that the eigenvalues of our matrix A are distinct real numbers. We have yet to deal with the other two possibilities: the eigenvalues are complex (conjugates), or are equal.
If the eigenvalues of A are complex conjugates, k = a ± bi, we do what we did for second order equations: we just assume that the solution is
u = e^{(a+bi)t}v.
To find v, we solve Av = (a+bi)v as before, except that in this case the coordinates of v will be complex numbers. (As before, the second equation is redundant.) If we write
v = p + iq,
splitting v into its real and imaginary parts, and use Euler's formula e^{(a+bi)t} = e^{at}(cos(bt) + i sin(bt)), we can write this as
u = x + iy, where x = e^{at}(cos(bt)p - sin(bt)q) and y = e^{at}(sin(bt)p + cos(bt)q).
Then we use the useful fact: if u = x + iy solves the equation u' = Au, where A has real entries, then x and y also solve the system of equations. (This uses the fact that the eigenvalues come in complex conjugate pairs!) These two vector functions are our fundamental solutions. Each coordinate of these functions is a linear combination of the functions e^{at}cos(bt) and e^{at}sin(bt), and so the phase portrait of such a system of equations involves both a circular motion and an expansion from or contraction toward the origin (depending on whether a is positive or negative). The solution curves are spirals around the origin.
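A numerical illustration of this useful fact, using the arbitrary matrix A with rows (0, 1) and (-1, 0), whose eigenvalues are ±i with eigenvector v = (1, i) for k = i; here a = 0, so the "spiral" degenerates to a circle:

```python
import cmath

def complex_solution(t):
    """u = e^{it} * (1, i), the complex solution for eigenvalue i."""
    e = cmath.exp(1j * t)
    return (e * 1, e * 1j)

def real_part(t):
    z1, z2 = complex_solution(t)
    return (z1.real, z2.real)       # x(t) = (cos t, -sin t): a circle

# Check that x solves x' = Ax, i.e., x1' = x2 and x2' = -x1
# (derivatives by central difference):
h, t = 1e-6, 1.3
x, xp, xm = real_part(t), real_part(t + h), real_part(t - h)
dx = ((xp[0] - xm[0]) / (2 * h), (xp[1] - xm[1]) / (2 * h))
assert abs(dx[0] - x[1]) < 1e-6 and abs(dx[1] + x[0]) < 1e-6
```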
Finally, suppose the characteristic polynomial has a repeated root, r_{1} = r_{2} = r. One solution is, as before,
u = e^{rt}v,
where v is an eigenvector with eigenvalue r. If A has two independent eigenvectors for r (which happens only when A = rI), every solution has this form and we are done. In every other case, we might guess that the other solution is u = te^{rt}w. It turns out, however, that this won't work; what we instead need to do is to guess
u = te^{rt}v + e^{rt}w,
where v is the same eigenvector with eigenvalue r that we already found!
Carrying this expression through the equation u' = Au, we find that w must satisfy
(A - rI)w = v.
Since we know r and v, we can solve this for w. (In general, linear algebra tells us we shouldn't always expect to be able to solve such an equation, but because of the repeated root, it turns out that in fact we can.) This gives us our second fundamental solution.
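A worked sketch for one (arbitrary) matrix with a repeated eigenvalue:

```python
import math

# A = [[3, 1], [-1, 1]] has the repeated eigenvalue r = 2 with the single
# eigenvector v = (1, -1); solving (A - 2I)w = v by hand gives w = (1, 0).
r, v, w = 2.0, (1.0, -1.0), (1.0, 0.0)

def u(t):
    """Second fundamental solution u = t*e^{rt}*v + e^{rt}*w."""
    e = math.exp(r * t)
    return (e * (t * v[0] + w[0]), e * (t * v[1] + w[1]))

# Central-difference check that u' = Au:
h, t = 1e-6, 0.8
up, um, x = u(t + h), u(t - h), u(t)
du = ((up[0] - um[0]) / (2 * h), (up[1] - um[1]) / (2 * h))
assert abs(du[0] - (3 * x[0] + x[1])) < 1e-4
assert abs(du[1] - (-x[0] + x[1])) < 1e-4
```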
If we look at the phase portrait for such an equation, we find that for the solution
u = c_{1}e^{rt}v + c_{2}(te^{rt}v + e^{rt}w),
as t → ∞ the solution curves (with c_{2} > 0) run parallel to v, while as t → -∞ they run parallel to -v; that is, the term te^{rt}v will dominate. If c_{2} < 0, the roles of ±v are reversed. Notice that c_{2} = 0 gives the straight line solution(s) u = ±e^{rt}v. The solutions will tend to the origin as t goes to ∞ or -∞, depending on the sign of r.