Math 221

Topics for the third exam

Chapter 3: Second Order Linear Equations

§ 8:
Spring - Mass problems
Basic setup: an object with mass m suspended on a spring. At rest, the mass stretches the spring by a length L. The mass is then displaced from this equilibrium position and released (with some initial velocity). Position at time t is u(t) .

Newton: m u'' = sum of the forces acting on the object. These include:

gravity: Fg = mg

the spring: Fs = -k(u+L) (Hooke's Law)

friction: Ff = -γu'

a possible external force: Fe = f(t)

Equilibrium: gravity and spring forces balance out; mg-kL = 0 (use to compute k !)

So: m u'' = -ku - γu' + f(t), i.e.,

m u'' + γu' + ku = f(t)
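
For concreteness, here is a minimal numeric sketch of this setup in Python; the mass, stretch, and damping values are made up purely for illustration.

    # Hypothetical numbers: a 2 kg mass stretches the spring 0.5 m at rest.
    m = 2.0            # mass (kg)
    L = 0.5            # static stretch at equilibrium (m)
    g_accel = 9.8      # gravitational acceleration (m/s^2)
    gamma = 1.5        # assumed damping coefficient (kg/s)

    k = m * g_accel / L     # from the equilibrium condition mg - kL = 0
    print(k)                # 39.2, so the equation is 2 u'' + 1.5 u' + 39.2 u = f(t)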

Some special cases:

No friction (γ = 0, i.e., undamped) and no external force (free vibration); solutions are

u = c1 cos(ω0 t) + c2 sin(ω0 t) = C cos(ω0 t - δ)

where ω0 = √(k/m) = the natural frequency of the system, C = amplitude of the vibration, δ (= `delay') = phase angle

C = √(c1² + c2²), tan(δ) = c2/c1

T = 2π/ω0 = period of the vibration. Note: a stiffer spring (larger k) gives a higher frequency and a shorter period; a larger m gives the opposite.
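
A small sketch of these formulas; the values of m, k, c1, c2 are invented just to show the arithmetic.

    import math

    m, k = 2.0, 39.2                # hypothetical mass and spring constant
    c1, c2 = 1.0, -0.5              # hypothetical coefficients from the initial conditions

    omega0 = math.sqrt(k / m)       # natural frequency
    T = 2 * math.pi / omega0        # period of the vibration
    C = math.sqrt(c1**2 + c2**2)    # amplitude
    delta = math.atan2(c2, c1)      # phase angle; satisfies tan(delta) = c2/c1
    print(omega0, T, C, delta)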

Damped free vibrations; solutions depend on the discriminant γ² - 4km

γ² > 4km (overdamped); fundamental solutions are e^(r1 t), e^(r2 t), with r1, r2 < 0

γ² = 4km (critically damped); fundamental solutions are e^(rt), t e^(rt), with r < 0

γ² < 4km (underdamped); fundamental solutions are e^(rt)cos(ωt), e^(rt)sin(ωt), with r < 0 and ω = √(ω0² - (γ/2m)²)

In each case, solutions tend to 0 as t goes to ∞. In the first two cases, the solution has at most one local max or min; in the third case, note that the frequency of the periodic part of the motion is smaller than the natural frequency. T = 2π/ω is called the quasi-period of the vibration.
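
As a sketch, classifying the damping comes down to the sign of γ² - 4km; the numbers below are invented.

    import math

    def classify_damping(m, gamma, k):
        """Classify m u'' + gamma u' + k u = 0 by the sign of gamma^2 - 4km."""
        disc = gamma**2 - 4 * k * m
        if disc > 0:
            return "overdamped"
        if disc == 0:
            return "critically damped"
        # underdamped: quasi-frequency omega = sqrt(omega0^2 - (gamma/(2m))^2)
        omega0 = math.sqrt(k / m)
        omega = math.sqrt(omega0**2 - (gamma / (2 * m))**2)
        return "underdamped, quasi-period %.3f" % (2 * math.pi / omega)

    print(classify_damping(2.0, 1.5, 39.2))   # hypothetical values; gives "underdamped, ..."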

§ 9:
Forced vibrations
Focus on a periodic forcing term: f(t) = F0 cos(ωt).

Undamped: if ω ≠ ω0, then (using undetermined coefficients) the solution is

u = C cos(ω0 t - δ) + [F0/(m(ω0² - ω²))] cos(ωt)

This is the sum of two vibrations with different frequencies.

In the special case u(0) = 0, u'(0) = 0 (starting at rest), we can further simplify:

u = [2F0/(m(ω0² - ω²))] sin([(ω0 - ω)/2]t) sin([(ω0 + ω)/2]t)

When ω is close to ω0, this illustrates the concept of beats: we have a high-frequency vibration (the second sine) whose amplitude is modulated by a low-frequency vibration (the first sine). The mass essentially vibrates rapidly between two sine-curve envelopes.
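
The product-of-sines form above is just a trig identity applied to [F0/(m(ω0² - ω²))](cos(ωt) - cos(ω0 t)); here is a quick numeric check, with invented parameters and ω chosen close to ω0 so the beats are visible in the slow envelope.

    import numpy as np

    m, F0, omega0, omega = 1.0, 1.0, 10.0, 9.0   # hypothetical values, omega near omega0
    t = np.linspace(0.0, 20.0, 2001)

    amp = F0 / (m * (omega0**2 - omega**2))
    u_rest_start = amp * (np.cos(omega * t) - np.cos(omega0 * t))
    u_beats = 2 * amp * np.sin((omega0 - omega) * t / 2) * np.sin((omega0 + omega) * t / 2)

    print(np.allclose(u_rest_start, u_beats))    # True: the two forms agree
    print(2 * np.pi / ((omega0 - omega) / 2))    # period of the slow envelope (the beats)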

When ω = ω0, our forcing term is a solution to the homogeneous equation, so the general solution, instead, is

u = C cos(ω0 t - δ) + [F0/(2mω0)] t sin(ω0 t)

In this case, as t goes to ∞, the amplitude of the second oscillation goes to ∞; the solution, essentially, resonates with the forcing term. (Basically, you are `feeding' the system at its natural frequency.) This illustrates the phenomenon of resonance.

Finally, if we include friction (γ ≠ 0), then the solution turns out to be

u = homog. soln. + C cos(ωt - δ), where

C = F0/√(m²(ω0² - ω²)² + γ²ω²), tan(δ) = γω/(m(ω0² - ω²))

But since γ > 0, the homogeneous solutions tend to 0 as t → ∞; they are called the transient solution. (Basically, they just allow us to solve any initial value problem. We can then conclude that any energy initially given to the system is dissipated over time, leaving only the energy imparted by the forcing term to drive the system along.) The other term is called the forced response, or steady-state solution.

Note that the amplitude C of the forced response goes to 0 as the driving frequency ω goes to ∞. Notice also that tan(δ) can never be 0, so the forced response is always out of phase with the forcing term. When we drive the system at its natural frequency ω0, the response is 90 degrees out of phase; as ω → ∞, the response approaches being 180 degrees out of phase, i.e., the motion of the mass is almost exactly opposite to the force being externally applied!
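
A sketch of the steady-state amplitude and phase as functions of the driving frequency ω; the parameter values are invented, but the phase comes out to 90 degrees at ω = ω0 and approaches 180 degrees for large ω, as described above.

    import math

    def forced_response(m, gamma, k, F0, omega):
        """Amplitude C and phase delta of the steady-state solution C cos(omega t - delta)."""
        omega0_sq = k / m
        denom = math.sqrt(m**2 * (omega0_sq - omega**2)**2 + gamma**2 * omega**2)
        C = F0 / denom
        # atan2 keeps delta in the correct quadrant; tan(delta) = gamma*omega / (m(omega0^2 - omega^2))
        delta = math.atan2(gamma * omega, m * (omega0_sq - omega**2))
        return C, delta

    m, gamma, k, F0 = 2.0, 1.5, 39.2, 3.0          # hypothetical system
    for omega in (1.0, math.sqrt(k / m), 20.0):    # below, at, and above the natural frequency
        print(omega, forced_response(m, gamma, k, F0, omega))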


Chapter 7: Systems of first order linear equations

§ 1:
Introduction
Basic idea: we have two (or more) quantities, with their rates of change depending upon one another.

Ex: multiple spring - mass system:

WALL - spring - mass - spring - mass - spring - WALL (the walls are A units apart)

u1 = position of first mass, u2 = position of second mass; then we find, by analyzing the forces involved, that

m1 u1'' = -k1 u1 + k2 (u2 - u1) - γ1 u1' ,    m2 u2'' = -k2 (u2 - u1) + k3 (A - u2) - γ2 u2'

where the symbols have similar meanings to our ordinary situation. The appropriate notion of an initial value problem would be to know the initial positions and initial velocities of both masses at a fixed time.

Ex: mixing with multiple tanks:

Tank 1 flows to tank 2 with rate r1, tank 2 to tank 3 with rate r2, and tank 3 to tank 1 with rate r3; then if the u's are the amounts of solute in each tank, and the V's are the volumes in each, we have

u1' = (r3/V3) u3 - (r1/V1) u1 ,  u2' = (r1/V1) u1 - (r2/V2) u2 ,  u3' = (r2/V2) u2 - (r3/V3) u3

Here the appropriate notion of initial value problem is to know the values of each of u1,u2,u3 at a fixed time.
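
A minimal sketch of integrating such a three-tank system numerically; the flow rates, volumes, and initial amounts are made up (with equal flow rates, so the volumes stay constant), and scipy's solve_ivp is just one convenient solver.

    from scipy.integrate import solve_ivp

    r = [2.0, 2.0, 2.0]       # hypothetical flow rates r1, r2, r3 (gal/min)
    V = [100.0, 50.0, 80.0]   # hypothetical tank volumes V1, V2, V3 (gal)

    def rhs(t, u):
        u1, u2, u3 = u
        return [r[2]/V[2]*u3 - r[0]/V[0]*u1,
                r[0]/V[0]*u1 - r[1]/V[1]*u2,
                r[1]/V[1]*u2 - r[2]/V[2]*u3]

    sol = solve_ivp(rhs, (0.0, 60.0), [10.0, 0.0, 0.0])   # initial amounts of solute
    print(sol.y[:, -1])         # amounts in each tank after 60 minutes
    print(sol.y[:, -1].sum())   # total solute is (approximately) conserved at 10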

An important class of examples: any ordinary differential equation

y^(n) = f(t, y, y', ..., y^(n-1))

can be turned into a system of first order equations by setting

u1 = y, u2 = y', ..., un = y^(n-1)

giving the system of equations

u1' = u2, u2' = u3, ..., un-1' = un, un' = f(t, u1, u2, ..., un)
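
As a sketch, here is this reduction carried out for one second-order equation, the damped spring equation m y'' + γ y' + k y = 0, using u1 = y and u2 = y'; the coefficients and initial conditions are invented.

    from scipy.integrate import solve_ivp

    m, gamma, k = 2.0, 1.5, 39.2    # hypothetical coefficients

    def rhs(t, u):
        u1, u2 = u                         # u1 = y, u2 = y'
        du1 = u2                           # u1' = u2
        du2 = (-gamma * u2 - k * u1) / m   # u2' = y'', solved for from the equation
        return [du1, du2]

    sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0])   # y(0) = 1, y'(0) = 0
    print(sol.y[0, -1])             # approximate value of y(10)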

§ 2:
Matrices
The best way to develop techniques for solving systems of equations is with matrices. Notationally, a system of equations

u1' = 2u1 + 3u2 ,   u2' = 3u1 - u2

can be expressed as

( u1' )   ( 2   3 ) ( u1 )
( u2' ) = ( 3  -1 ) ( u2 )

or, writing u for the column vector with entries u1, u2 and using the derivative of a vector-valued function,

u' = ( 2   3 ) u = Au .
     ( 3  -1 )

The square array of numbers A is called the (coefficient) matrix. We actually want to think of Au as a real multiplication; the basic idea is that the i-th entry of Au is the i-th row of A times u. This in turn should be familiar as the dot product of the i-th row of A and u.

More generally, we can multiply matrices, getting another matrix. Using the notation A = (aij), where aij is the entry in the i-th row and j-th column of A, we write

AB = (cij), where cij = the dot product of the i-th row of A and the j-th column of B .

Amazingly, this product has a lot of the properties we are used to: if we add matrices by adding their ij-th entries (to get the ij-th entry of the sum), and multiply by a scalar c by multiplying each entry by c, then we have:

(AB)C = A(BC) , A(B+C) = AB+AC , A(cB) = c(AB) , c(A+B) = cA+cB

However, what we do NOT have is AB = BA; usually, matrix multiplication does NOT commute! We will need the special matrix I = (δij), where δij = 1 if i = j, and 0 if i ≠ j. This matrix has the property that IA = AI = A for all matrices A, i.e., it acts like the number 1 under multiplication.
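
A tiny numeric check of these facts; the matrices here are arbitrary.

    import numpy as np

    A = np.array([[2.0, 3.0], [3.0, -1.0]])
    B = np.array([[0.0, 1.0], [1.0, 2.0]])
    I = np.eye(2)

    print(A @ B)                                          # rows of A dotted with columns of B
    print(np.allclose(A @ B, B @ A))                      # False: the product does not commute
    print(np.allclose(I @ A, A), np.allclose(A @ I, A))   # True True: I acts like the number 1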

If A(t) = (aij(t)) is a matrix whose entries are functions, then we can make sense of its derivative: A'(t) = (aij'(t)) . Then we have the properties:

(A+B)'(t) = A'(t) + B'(t) ,  (AB)'(t) = A'(t)B(t) + A(t)B'(t)

Note the order of multiplication in the second formula; it's important!

§ 3:
Eigenvectors and eigenvalues
We shall see that our approach to solving u' = Au is, like in Chapter 1, to find solutions of the form u = e^(kt) v0 for a (non-zero) vector v0 and number k. These vectors and numbers will be determined by A; they will come from solving

A v0 = k v0

Such vectors are called eigenvectors, and their associated numbers k are called eigenvalues. Our approach to finding such vectors and numbers is to write the equation as

(A - kI) v0 = 0     (*)

First we find the right values k. Linear algebra teaches us that (*) has a (non-zero vector) solution exactly when det(A-kI) = 0, where "det" stands for the determinant. We have already run into this concept; for a 2-by-2 matrix

A = ( a   b )
    ( c   d ) ,    det(A) = ad - bc

There are similar formulas for n×n matrices for any n, but we will focus on n = 2 to make our calculations simpler. If we compute det(A-kI), we find that (*) has a solution exactly when

det(A - kI) = k² - (a+d)k + (ad - bc) = 0

This is a quadratic polynomial, so it (usually) has two roots, r1, r2 . Once we have the roots, we go back and solve (A - ri I) v = 0, i.e.,

ax1+bx2 = ri x1 , cx1 + dx2 = ri x2

It turns out that the second equation is always redundant (although this is really only true of 2×2 systems), and we can find [v\vec] by choosing any convenient (non-zero) value for x1 or x2, and solving for the other.
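
A quick sketch of this recipe on a made-up 2×2 matrix, checked against numpy:

    import numpy as np

    A = np.array([[1.0, 2.0], [3.0, 2.0]])     # hypothetical matrix
    a, b, c, d = A.ravel()

    # roots of det(A - kI) = k^2 - (a+d)k + (ad - bc) = 0
    tr, det = a + d, a * d - b * c
    r1 = (tr + np.sqrt(tr**2 - 4 * det)) / 2   # = 4
    r2 = (tr - np.sqrt(tr**2 - 4 * det)) / 2   # = -1

    # eigenvector for r1: set x1 = 1 and solve a*x1 + b*x2 = r1*x1 for x2
    v1 = np.array([1.0, (r1 - a) / b])
    print(np.allclose(A @ v1, r1 * v1))        # True: v1 is an eigenvector

    print(np.linalg.eig(A)[0])                 # numpy agrees: eigenvalues 4 and -1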

§ 5:
Homogeneous linear systems with constant coefficients
Now we put all of this technology to work, to solve our system of differential equations

u' = Au     (**)

where A is a matrix whose entries are all constants. Our basic procedure will be to guess that the solution is u = e^(kt) v0 for some vector (with constant entries) v0. If we plug such a function into (**), we find that

u' = k u = e^(kt) (k v0), while Au = e^(kt) (A v0)

and so we find that we need A v0 = k v0 . In other words, we need the eigenvalues for A, and their corresponding eigenvectors. This gives us two solutions to the system of equations, but using the Principle of Superposition:

If u1, u2 are solutions to the linear system of equations u' = Au, then the functions c1 u1 + c2 u2 are also solutions, for any constants c1, c2 .

(which we can easily verify), we can take linear combinations of our solutions, to get the general solution to the system of equations

u = c1 e^(r1 t) v1 + c2 e^(r2 t) v2

where v1, v2 are eigenvectors for the coefficient matrix A, with eigenvalues r1, r2.

To solve an initial value problem, we find the general solution, and then plug t0 into the solution and set the vector equal to our initial values (u1(t0),u2(t0)) . This gives us a pair of linear equations to solve, which we do using our earlier techniques.
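
A sketch of the whole procedure on an invented initial value problem (with t0 = 0), letting numpy supply the eigen-data and solve the 2×2 system for c1, c2:

    import numpy as np

    A = np.array([[1.0, 2.0], [3.0, 2.0]])   # hypothetical coefficient matrix
    u0 = np.array([2.0, -1.0])               # initial values (u1(0), u2(0))

    evals, evecs = np.linalg.eig(A)          # columns of evecs are eigenvectors
    r1, r2 = evals
    v1, v2 = evecs[:, 0], evecs[:, 1]

    # general solution u(t) = c1 e^(r1 t) v1 + c2 e^(r2 t) v2; at t = 0 this is evecs @ c = u0
    c = np.linalg.solve(evecs, u0)

    def u(t):
        return c[0] * np.exp(r1 * t) * v1 + c[1] * np.exp(r2 * t) * v2

    print(u(0.0))                            # reproduces the initial values
    h, t = 1e-6, 0.7                         # crude numerical check that u' = Au at one time
    print((u(t + h) - u(t - h)) / (2 * h), A @ u(t))   # these should (approximately) agree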

Each solution is a pair of functions u = (u1(t), u2(t)), which, if we think of them as giving x- and y-coordinates, describes a path in the plane. u' can then be interpreted as the velocity vector of this path, which is tangent to the path. Since we have a solution to our differential equation (**), u' is actually Au, which we can imagine computing for every point in the plane. Plotting each of these vectors Av with its tail at v gives us a vector field in the plane, which we call the direction field of the system of equations. The solutions to (**) are the paths whose velocity vectors are equal to this direction field (and which in particular are tangent to the vector field). A picture of the direction field, together with a representative collection of solution curves, is called a phase portrait for the system of equations.

Our fundamental solutions ui = e^(ri t) vi give very special solution curves; the y-coordinate is a (constant) multiple of the x-coordinate, so each parametrizes a straight line out from the origin. With two eigenvalues, we have two straight-line solutions to the system of equations. Every other solution will be curved (*actually, this isn't quite true; things are much different if one of the eigenvalues is 0; then every solution is a straight line (check it out!)*). We can understand the other solutions in terms of their behavior as t → ∞ and t → -∞.

We have two lines L1 and L2 coming from the eigenvalues r1 and r2. If r1 > r2, then as t → ∞, e^(r1 t)/e^(r2 t) = e^((r1-r2)t) → ∞, and so in any solution

c1 e^(r1 t) v1 + c2 e^(r2 t) v2, the c1 e^(r1 t) v1 term will `dominate',

i.e., the solution will turn parallel to v1. A similar argument shows that as t → -∞, the solutions will turn parallel to v2.

The shape of the solutions also depends on the signs of the eigenvalues. If both are negative, every solution tends toward the origin as t → ∞, and heads to ∞ in the other direction. If one is positive and one negative, then the solutions (other than the straight-line ones) tend to ∞ in both directions. If both are positive, then every solution tends to ∞ as t → ∞, and heads to the origin in the other direction.

All of this analysis has assumed that the eigenvalues of our matrix A are distinct real numbers. We have yet to deal with the other two possibilities: the eigenvalues are complex (conjugates), or are equal.

§ 6:
Complex eigenvalues
To deal with complex eigenvalues a±bi , for example

A = ( 1  -3 )
    ( 3   1 ) ,    which has eigenvalues 1 ± 3i

we do what we did for second order equations: we just assume that the solution is

u = e^((a+bi)t) v = e^(at)(cos(bt) + i sin(bt)) v

To find v, we solve Av = (a+bi)v as before, except that in this case the coordinates of v will be complex numbers. (As before, the second equation is redundant.) If we write

v = ( v1 ) ,   so   u = ( e^(at)(cos(bt) + i sin(bt)) v1 )
    ( v2 )              ( e^(at)(cos(bt) + i sin(bt)) v2 )

we can write this as

u = x+iy , where x and y have real entries

Then we use the useful fact: if u = x + iy solves the equation u' = Au, where A has real entries, then x and y also solve the system of equations. (This uses the fact that eigenvalues come in complex conjugate pairs!) These two vector functions are our fundamental solutions. Each coordinate of these functions is a linear combination of the functions e^(at)cos(bt) and e^(at)sin(bt), and so the phase portrait of such a system of equations involves both a circular motion and an expansion from or contraction towards the origin (depending on whether a is positive or negative). The solution curves are spirals around the origin.
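
A small numeric sketch of this fact on a made-up matrix with complex eigenvalues: split the complex solution e^(rt) v into its real and imaginary parts, and check (by a crude finite difference) that each part solves u' = Au.

    import numpy as np

    A = np.array([[1.0, -2.0], [2.0, 1.0]])   # hypothetical matrix; eigenvalues 1 +/- 2i
    lam, vecs = np.linalg.eig(A)
    r, v = lam[0], vecs[:, 0]                 # one complex eigenvalue/eigenvector pair

    def u_complex(t):
        return np.exp(r * t) * v              # complex-valued solution e^(rt) v

    def x(t): return u_complex(t).real        # real part
    def y(t): return u_complex(t).imag        # imaginary part

    h, t = 1e-6, 0.3
    print(np.allclose((x(t + h) - x(t - h)) / (2 * h), A @ x(t), atol=1e-4))   # True
    print(np.allclose((y(t + h) - y(t - h)) / (2 * h), A @ y(t), atol=1e-4))   # True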

§ 8:
Repeated eigenvalues
If the coefficient matrix has only one eigenvalue r, occurring twice, then using that eigenvalue we can find an eigenvector v and a solution u = e^(rt) v. But only if A = rI (in the 2×2 case) will we be able to find two independent solutions this way; in that case we can actually take

u = c1 e^(rt) ( 1 ) + c2 e^(rt) ( 0 )  =  ( c1 e^(rt) )
              ( 0 )             ( 1 )     ( c2 e^(rt) )

In every other case, we guess that the other solution is u = t e^(rt) w . It turns out, however, that this won't work; what we instead need to do is to guess

u = t e^(rt) v + e^(rt) w , where v is the same eigenvector with eigenvalue r that we already found!

Carrying this expression through the equation u' = Au , we find that w must satisfy

(A - rI) w = v

Since we know r and v, we can solve this for w. (In general, linear algebra tells us we shouldn't always expect to be able to solve such an equation, but because of the repeated root, it turns out that in fact we can.) This gives us our second fundamental solution.

If we look at the phase portrait for such an equation, we find that for the solution

u = c1 e^(rt) v + c2 (t e^(rt) v + e^(rt) w) = e^(rt) [(c1 + c2 t) v + c2 w]

as t → ∞ the solution curves (with c2 > 0) run parallel to v, while as t → -∞, they run parallel to -v; that is, the t v term dominates. If c2 < 0, the roles of ±v are reversed. Notice that c2 = 0 gives the straight-line solution(s) u = ±e^(rt) v. The solutions will tend to the origin as t goes to ∞ or -∞ , depending on the sign of r.
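
A sketch of this construction on a made-up matrix with a repeated eigenvalue, using numpy's least-squares solver for the (singular) system (A - rI)w = v:

    import numpy as np

    A = np.array([[1.0, -1.0], [1.0, 3.0]])   # hypothetical matrix; det(A - kI) = (k - 2)^2, so r = 2
    r = 2.0
    v = np.array([1.0, -1.0])                 # eigenvector: (A - 2I) v = 0

    w, *_ = np.linalg.lstsq(A - r * np.eye(2), v, rcond=None)
    print(np.allclose((A - r * np.eye(2)) @ w, v))    # True: w is a generalized eigenvector

    def u(t):                                 # second fundamental solution t e^(rt) v + e^(rt) w
        return t * np.exp(r * t) * v + np.exp(r * t) * w

    h, t = 1e-6, 0.5                          # crude finite-difference check that u' = Au
    print(np.allclose((u(t + h) - u(t - h)) / (2 * h), A @ u(t), atol=1e-3))   # True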

