(6) Draw the vector from the point (0,0) to the point (0,4), then draw the vector from (0,4) to (5/2

(16) -3(a-c)+2(a+2b)+3(c-b) = -3a+3c+2a+4b+3c-3b (distributive law) = -3a+2a+4b-3b+3c+3c (commutative law) = -a+b+6c (distributive law).

(20) Draw a line through the points (0,0) and (-2,3). This gives the axis along the vector u. Now draw a line through the points (0,0) and (2,1). This gives the axis along the vector v. Now draw the parallelogram determined by the points (2,-3) (which comes from -u), (-4,-2) (which comes from -2v), and (0,0). The fourth point of the parallelogram is (-4+2,-2-3) = (-2,-5), which gives w.

(22) This is like (20): find the parallelogram whose sides are on the lines through u and v and whose fourth point is (2,9), which is given by w. You'll see it looks like you have to go along the u-line by a total of 2u and along the v-line a total of 3v. Thinking of u and v as row vectors (since they're easier to type than the column vectors the book uses) we check: 2u+3v = 2(-2,3)+3(2,1) = (-4+6,6+3) = (2,9) = w, so w = 2u+3v.
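As a quick sanity check of the claim w = 2u + 3v (not part of the assignment, just a numpy verification):

```python
import numpy as np

# Check (22): w = 2u + 3v should come out to (2, 9).
u = np.array([-2, 3])
v = np.array([2, 1])
w = 2 * u + 3 * v
print(w)  # [2 9]
```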

1.2:

(2) u.v=(3,-2).(4,6)=3*4-2*6=0

(8) ||u|| = (u.u)^(1/2).

(14) d(u,v) = ||u-v|| = ||(-1,-8)|| = 65^(1/2).

(18) -3 = u.v = ||u||*||v||*cos(t), so the cosine of the angle t is negative, so the angle is obtuse.

(34) proj

46(b) ||u+v|| = ||u|| - ||v|| if and only if u+v=0, or v=0, or neither is 0 but the angle between u+v and v is 180 degrees.

Proof: ||u+v|| = ||u|| - ||v|| if and only if ||u|| = ||u+v|| + ||v||. Squaring both sides gives u.u = (u+v).(u+v) + 2||u+v||*||v|| + v.v, which simplifies to 0 = v.(u+v) + ||u+v||*||v||, or to -v.(u+v) = ||u+v||*||v||. Writing t for the angle between v and u+v, this says -||u+v||*||v||*cos(t) = ||u+v||*||v||. This is true if and only if either u+v=0, or v=0, or neither is 0 but the angle between them is 180 degrees (i.e., cos(t) = -1).

Here is an alternate solution, using our answer for 46(a) from class. (Recall that in class we showed that ||u+v|| = ||u|| + ||v|| if and only if u=0, or v=0, or neither is 0 but the angle between them is 0.)

46(b) ||u+v|| = ||u|| - ||v|| if and only if u+v=0, or v=0, or neither is 0 but the angle between u+v and v is 180 degrees.

Proof: ||u+v|| = ||u|| - ||v|| if and only if ||u+v|| + ||v|| = ||u|| if and only if ||u+v|| + ||-v|| = ||u|| if and only if ||a|| + ||b|| = ||a+b||, where a=u+v, and b=-v. Using our answer to 46(a), we see ||a|| + ||b|| = ||a+b|| if and only if a=0, or b=0, or neither is 0 but the angle between them is 0. I.e., u+v=0, or -v=0, or neither is 0 but the angle between u+v and -v is 0. But -v=0 is the same thing as v=0, and the angle between u+v and -v being 0 is the same thing as the angle between u+v and v being 180 degrees.

1.3:

18(a) The direction vector d for the line l is the normal vector for the plane P, so l is perpendicular to P.

(32) Pick a point P on the line l. Let p be the vector from the origin to P. Let v be the vector from P to Q. Project v onto a direction vector d for the line l; i.e., let u be proj_d(v). Then the distance from Q to l is d(Q, l) = ||v - u||.

(34) Here pick a point P on the plane (say P = (1,0,0)). Let p be the vector from the origin to P. Let v be the vector from P to Q = (0,0,0), so v = Q-P = (-1,0,0). Project v onto a normal vector n for the plane; i.e., let u be proj_n(v). Then the distance from Q to the plane is ||u||.

(44) Just find the acute angle between the normals; i.e., the cosine of the acute angle is |(3,-1,2).(1,4,-1)|/(||(3,-1,2)||*||(1,4,-1)||) = 3/((14^(1/2))*(18^(1/2))).

48(a) Just project v = (1,0,-2) onto the normal n = (1,1,1), and subtract that from v; i.e., v - proj_n(v) = v - ((v.n)/(n.n))n = (1,0,-2) - (-1/3)(1,1,1) = (4/3, 1/3, -5/3).

2.1:

(16) The two lines have different slopes so they intersect at one point. Solving shows that point to be x=3, y=-2.

(32) Let's use v, w, x, y, and z as our variables:

v-w+3y+z=2

v+w+2x+y-z=4

w+2y+3z=0

2.2:

(14)

[-2 -4  7]
[-3 -6 10]
[ 1  2 -3]

Put the bottom row on top:

[ 1  2 -3]
[-2 -4  7]
[-3 -6 10]

Add twice the top row to the middle row, and three times the top row to the bottom row:

[ 1  2 -3]
[ 0  0  1]
[ 0  0  1]

Subtract the middle row from the bottom row:

[ 1  2 -3]
[ 0  0  1]
[ 0  0  0]

This is now in row echelon form. Add 3 times the middle row to the top row:

[ 1  2  0]
[ 0  0  1]
[ 0  0  0]

This is now in reduced row echelon form.
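As a check on the hand computation in (14) (sympy is an assumption here, not part of the assignment), the reduced row echelon form can be recomputed:

```python
from sympy import Matrix

# Check (14): the RREF of the original matrix should be
# [1 2 0; 0 0 1; 0 0 0], with pivots in columns 1 and 3.
A = Matrix([[-2, -4, 7], [-3, -6, 10], [1, 2, -3]])
R, pivots = A.rref()
print(R)       # Matrix([[1, 2, 0], [0, 0, 1], [0, 0, 0]])
print(pivots)  # (0, 2)
```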

(18) The way to do this one is to transform both to reduced row echelon form, R. Since A and B are row equivalent, they have the same reduced row echelon form. Now you know how to change A to R, and B to R. Use the operations you found to go from A to R, and use the inverses of the operations you found, in the reverse order, to go from R back to B. This shows how to go from A to B; i.e., go via R. Carrying this out, row operations take both A and B to the common form

[1 1 0]
[0 2 1]
[0 0 0]

(dividing the second row by 2 and subtracting the result from the first row would finish the reduction to reduced row echelon form, but any common form is enough here). The operations taking A to this matrix, followed by the reversed inverses of the operations taking B to it, transform A into B.

(28) The augmented matrix for the system of equations is

[ 2  3 -1  4 | 0]
[ 3 -1  0  1 | 1]
[ 3 -4  1 -1 | 2].

Row reducing gives

[ 1  0  0  1 | 1/2]
[ 0  1  0  2 | 1/2]
[ 0  0  1  4 | 5/2].

The variables are w, x, y and z, so, giving the leading variables in terms of the free variable z, the solution is y = 5/2 - 4z, x = 1/2 - 2z, and w = 1/2 - z. In vector form, the solution is

[w]   [1/2]    [-1]
[x] = [1/2] + z[-2]
[y]   [5/2]    [-4]
[z]   [ 0 ]    [ 1]
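As a sanity check (numpy is an assumption here, not part of the assignment), the parametric solution can be plugged back into the original system for several values of z:

```python
import numpy as np

# Check (28): for any z, (w, x, y, z) = (1/2, 1/2, 5/2, 0) + z*(-1, -2, -4, 1)
# should satisfy the original system.
A = np.array([[2, 3, -1, 4],
              [3, -1, 0, 1],
              [3, -4, 1, -1]], dtype=float)
b = np.array([0, 1, 2], dtype=float)
for z in (0.0, 1.0, -3.5):
    sol = np.array([0.5, 0.5, 2.5, 0.0]) + z * np.array([-1, -2, -4, 1])
    assert np.allclose(A @ sol, b)
print("solution checks out for several values of z")
```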

(42) The augmented matrix for the system of equations is

[ 1 -2  3 | 2  ]
[ 1  1  1 | k  ]
[ 2 -1  4 | k^2].

Row operations reduce this to

[ 1 -2  3 | 2           ]
[ 0  3 -2 | k-2         ]
[ 0  0  0 | k^2 - k - 2 ].

(a) Having no solution means having a leading entry in the augmented column; i.e., we need k^2 - k - 2 = (k-2)(k+1) to be nonzero, so there is no solution exactly when k is not 2 and not -1.

(b) Having a unique solution means having no free variable, but no matter what k is, z is a free variable, so there is no value of k for which the solution is unique.

(c) By (a) and (b), we see that there are infinitely many solutions exactly when k is 2 or -1.

2.3:

(8) Just see if you can solve

[ 1 2 3 | 10]
[ 4 5 6 | 11]
[ 7 8 9 | 12].

The fact is that b is in the span of the columns of A if and only if this system is consistent. It is consistent. If we call our variables x, y and z, then x = -28/3, y = 29/3, z = 0 is a solution. This even tells us how to write b in terms of the columns of A: (-28/3)u + (29/3)v + 0w = b, where u, v and w are the columns of A.
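The coefficients found for (8) can be verified numerically (numpy is an assumption here, not part of the assignment):

```python
import numpy as np

# Check (8): (-28/3)u + (29/3)v + 0*w = b, where u, v, w are the
# columns of A.
A = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=float)
b = np.array([10, 11, 12], dtype=float)
coeffs = np.array([-28/3, 29/3, 0])
assert np.allclose(A @ coeffs, b)
print("b is the stated combination of the columns of A")
```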

(12) By the theorem in class all we need to do is check that the matrix whose columns are the given vectors has rank 3, which it does, since the matrix row reduces to

[ 1 0 0]
[ 0 1 0]
[ 0 0 1].

(20) (a) To show that span(S) is contained in span(T), we have to show that every element of span(S) is an element of span(T). So let w be an element of span(S). Thus w is a linear combination of u_1, ..., u_k, the vectors of S. Since each u_i lies in span(T), and span(T) is closed under linear combinations, w is in span(T).

(b) Every vector in span(T) is in

(24) We are given three column vectors. I'll write them as row vectors; they are (2, 2, 1), (3, 1, 2) and (1, -5, 2). To check if they are linearly dependent and to find a dependence if they are, make an augmented matrix out of them with augmented column all zeros, and solve the system by row reducing:

[ 2 3  1 | 0]
[ 2 1 -5 | 0]
[ 1 2  2 | 0].

This row reduces to:

[ 1 0 -4 | 0]
[ 0 1  3 | 0]
[ 0 0  0 | 0].

This has a nonzero solution. If we call the variables x, y and z, one nonzero solution is z = 1, x = 4 and y = -3. This means that 4(2, 2, 1) - 3(3, 1, 2) + (1, -5, 2) = (0,0,0). This is a dependence relationship, which we could write this way, showing that the third vector is a linear combination of the first two: (1, -5, 2) = -4(2, 2, 1) + 3(3, 1, 2).
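The dependence relation in (24) is easy to double-check numerically (numpy is an assumption here, not part of the assignment):

```python
import numpy as np

# Check (24): 4(2,2,1) - 3(3,1,2) + (1,-5,2) = (0,0,0).
u = np.array([2, 2, 1])
v = np.array([3, 1, 2])
w = np.array([1, -5, 2])
print(4*u - 3*v + w)  # [0 0 0]
```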

3.1:

(10) Since F is a 2x1 matrix and D is a 2x2 matrix, we see DF is a 2x1 matrix, and we cannot multiply a 2x1 matrix F times another 2x1 matrix DF, so F(DF) is not defined.

(18) We want to find 2x2 matrices B and C such that AB = AC but B is not equal to C, where A is as given below. (There are many different answers!)

A = [2 1],  and take  B = [ 2 0]  and  C = [0 0].
    [6 3]                 [-4 0]          [0 0]

Then AB and AC both are the 0 matrix.

(30) Assume that A and B are matrices such that the product AB is defined. If the rows of A are linearly dependent, prove that the rows of AB are also linearly dependent.

Proof: Suppose some nontrivial linear combination of the rows of A is zero: c_1 a_1 + ... + c_m a_m = 0, with not all c_i zero. The rows of AB are a_1 B, ..., a_m B, and c_1 (a_1 B) + ... + c_m (a_m B) = (c_1 a_1 + ... + c_m a_m)B = 0B = 0, so the same nontrivial combination of the rows of AB is zero; i.e., the rows of AB are linearly dependent.

(36) We want to calculate B^2001, where

B = [c -c]  and where c = 1/2^(1/2).
    [c  c]

First note that B^2 = [0 -1; 1 0], that B^4 = (B^2)^2 = [-1 0; 0 -1] = -I_2, and that B^8 = (B^4)^2 = I_2. Thus B^2001 = B^(8*250 + 1) = (B^8)^250 B = I_2 B = B.
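The power pattern used in (36) can be confirmed with numpy (an assumption here, not part of the assignment):

```python
import numpy as np

# Check (36): B^2 = [0 -1; 1 0], B^8 = I, and hence B^2001 = B,
# where B = (1/sqrt(2)) * [[1, -1], [1, 1]].
c = 1 / np.sqrt(2)
B = np.array([[c, -c], [c, c]])
assert np.allclose(np.linalg.matrix_power(B, 2), [[0, -1], [1, 0]])
assert np.allclose(np.linalg.matrix_power(B, 8), np.eye(2))
assert np.allclose(np.linalg.matrix_power(B, 2001), B)
print("B^2001 = B, as claimed")
```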

3.2:

(6) We must solve the system of linear equations given by xA

(14) Let A

3.3:

(12) Solve the equation

[1 -1][x_1]   [1]
[2  1][x_2] = [2].

Answer: the inverse of the coefficient matrix is

[1 -1]^(-1)        [ 1 1]
[2  1]      = (1/3)[-2 1],

so

[x_1]        [ 1 1][1]   [1]
[x_2] = (1/3)[-2 1][2] = [0].

(52) Find A^(-1), where A =

[ 2  3 -3]
[-1 -2  2]
[ 4  6 -7]
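The worked answer for (52) appears to have been cut off above; as a sketch (numpy is an assumption here, not part of the assignment), the inverse can be computed numerically. Since det(A) = 1, the inverse has integer entries:

```python
import numpy as np

# Compute A^{-1} for (52) and confirm A @ A^{-1} = I.
A = np.array([[2, 3, -3], [-1, -2, 2], [4, 6, -7]], dtype=float)
Ainv = np.linalg.inv(A)
print(np.round(Ainv).astype(int))
# [[ 2  3  0]
#  [ 1 -2 -1]
#  [ 2  0 -1]]
assert np.allclose(A @ Ainv, np.eye(3))
```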

3.5:

(20) The matrix A =

[ 2 -4 0 2 1]
[-1  2 1 2 3]
[ 1 -2 1 4 4]

row reduces to R =

[ 1 -2 0 1 1/2]
[ 0  0 1 3 7/2]
[ 0  0 0 0  0 ].

Thus the nonzero rows of R give a basis for row(A), and columns 1 and 3 of A (i.e., the columns where R has leading 1's) give a basis for col(A). The vector form of the solution of Ax = 0 is

[x]    [ 2]    [-1]    [-1/2]
[y]    [ 1]    [ 0]    [ 0  ]
[z] = y[ 0] + w[-3] + u[-7/2],
[w]    [ 0]    [ 1]    [ 0  ]
[u]    [ 0]    [ 0]    [ 1  ]

so the vectors

[ 2]  [-1]  [-1/2]
[ 1]  [ 0]  [ 0  ]
[ 0], [-3], [-7/2]
[ 0]  [ 1]  [ 0  ]
[ 0]  [ 0]  [ 1  ]

form a basis for null(A).
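The claimed null space basis for (20) can be checked by multiplying each vector by A (numpy is an assumption here, not part of the assignment):

```python
import numpy as np

# Check (20): A times each claimed basis vector of null(A) is 0.
A = np.array([[2, -4, 0, 2, 1],
              [-1, 2, 1, 2, 3],
              [1, -2, 1, 4, 4]], dtype=float)
basis = [np.array([2, 1, 0, 0, 0], dtype=float),
         np.array([-1, 0, -3, 1, 0], dtype=float),
         np.array([-0.5, 0, -3.5, 0, 1])]
for v in basis:
    assert np.allclose(A @ v, 0)
print("all three vectors lie in null(A)")
```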

(28) Let A be the matrix whose columns are the given vectors, in the given order. When you row reduce A to get a row echelon matrix R, you see that the leading 1's are in columns 1, 2 and 3. Thus columns 1, 2 and 3 of A give a basis for col(A), which is just the span of the given vectors.

(34) The columns of a matrix A by definition span col(A). So if the columns of A are linearly independent, then they are a linearly independent spanning set for col(A), hence, by definition, a basis for col(A).

(40) A 4x2 matrix A has 4 row vectors in R^2. Since any set of more than 2 vectors in R^2 is linearly dependent, the rows of A must be linearly dependent.

(42) A 4x2 matrix A could row reduce to have either 0, 1 or 2 leading 1's. It's easy to write down row echelon matrices for each possibility. Thus the corresponding homogeneous system of linear equations has either 2, 1 or 0 free variables, respectively. Thus nullity(A) is either 2, 1 or 0.

(44) Our matrix A is

[ a  2 -1]
[ 3  3 -2]
[-2 -1  a].

It row reduces to

[ 1    2    a-2    ]
[ 0    3    3a-4   ]
[ 0  2a-2  (a-1)^2 ].

If a = 1, this is:

[ 1 2 -1]
[ 0 3 -1]
[ 0 0  0],

which has rank 2. If a is not 1, we can divide the bottom row by a-1 and continue row reducing, to get:

[ 1 0 -a + 2/3]
[ 0 1  a - 4/3]
[ 0 0  3a - 5 ].

This has rank 2 if a = 5/3, and rank 3 if a is not 5/3.
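The rank claims in (44) can be spot-checked with sympy (an assumption here, not part of the assignment), using exact rational arithmetic for a = 5/3:

```python
from sympy import Matrix, Rational

# Check (44): rank 2 at a = 1 and a = 5/3, rank 3 otherwise.
def A(a):
    return Matrix([[a, 2, -1], [3, 3, -2], [-2, -1, a]])

print(A(1).rank())               # 2
print(A(Rational(5, 3)).rank())  # 2
print(A(0).rank())               # 3  (a typical value other than 1 and 5/3)
```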

(48) The vectors do not form a basis since they are not linearly independent, as we can see by adding them up. The sum of the four vectors is the 0 vector. You can also make a matrix having the given vectors as columns. When you row reduce you find that there is a free variable, which again shows that the vectors are not linearly independent.

(50) Let

(58) (a) Prove that rank(AB) <= rank(A). Proof: Each column of AB is A times a column of B, hence a linear combination of the columns of A. Thus col(AB) is contained in col(A), so rank(AB) = dim col(AB) <= dim col(A) = rank(A).

(b) Say A is a nonzero matrix (so rank(A) > 0) but B is the zero matrix (so AB is also the zero matrix, hence rank(AB) = 0). Then rank(AB) < rank(A).

3.6:

(4) By definition, using row vectors because they're easier to type than column vectors, T((x, y)) = (-y, x+2y, 3x-4y). Now consider T(c(a, b) + d(x,y)). It is, by definition, T(c(a, b) + d(x,y)) = T((ca+dx,cb+dy)) = (-(cb+dy),(ca+dx)+2(cb+dy),3(ca+dx)-4(cb+dy)). Using the usual rules of vector addition we can rewrite this as: (-(cb+dy),(ca+dx)+2(cb+dy),3(ca+dx)-4(cb+dy)) = (-(cb),(ca)+2(cb),3(ca)-4(cb))+(-(dy),(dx)+2(dy),3(dx)-4(dy)) = c(-(b),(a)+2(b),3(a)-4(b))+d(-(y),(x)+2(y),3(x)-4(y)) = cT((a, b)) + dT((x, y)). This shows that T is linear.

(10) To see that T is not linear notice that T((0, 0)) = (1, -1). But T(2(0, 0)) = T((0, 0)) = (1, -1). Thus 2T((0, 0)) is not equal to T(2(0, 0)), so T is not linear.

(30)

[ST] = [ 2 -2]
       [-1 -1],

and [S] = [2 0; 0 -1] and [T] = [1 -1; 1 1], so

[S][T] = [ 2 -2]
         [-1 -1].

I.e., [ST] = [S][T].

4.2:

(10) Expand along the first column to get (I'll use t instead of theta because theta is hard to get on the web): cos(t) (cos

(12) Expand along the first column to get -b(a*0-e*0) = 0.

(14) Expand along the second column and then along the top row to get:

      |2 3 -1|
-(-1) |1 2  2| = (1)(2(-8) - 3(-7) + (-1)(-3)) = 8.
      |2 1 -3|

(26) Here det(A) = 0, since the top and bottom rows are linearly dependent.

(46) The matrix A is invertible if and only if det(A) is not zero. But det(A) here is: -k

(50) det(2A) = 2^n det(A) when A is n x n: the matrix 2A multiplies each of the n rows of A by 2, and multiplying a single row by 2 multiplies the determinant by 2.

(52) det(AA

(58)

        | 5 -1|
        |-1  3|   14
    x = ------- = -- = 2
        | 2 -1|    7
        | 1  3|

        | 2  5|
        | 1 -1|   -7
    y = ------- = -- = -1
        | 2 -1|    7
        | 1  3|
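The determinants above correspond (assuming this reconstruction of the problem) to the system 2x - y = 5, x + 3y = -1; a Cramer's-rule check with numpy (not part of the assignment):

```python
import numpy as np

# Check (58): solve the 2x2 system by Cramer's rule.
A = np.array([[2, -1], [1, 3]], dtype=float)
b = np.array([5, -1], dtype=float)
Ax = A.copy(); Ax[:, 0] = b  # replace column 1 by b
Ay = A.copy(); Ay[:, 1] = b  # replace column 2 by b
x = np.linalg.det(Ax) / np.linalg.det(A)
y = np.linalg.det(Ay) / np.linalg.det(A)
print(round(x), round(y))  # 2 -1
```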

4.1:

(4) By just multiplying Av, we see we get Av = 3v. Since v is a nonzero vector, this shows that v is an eigenvector of eigenvalue 3.

(6) Here Av = 0. Since v is a nonzero vector, it is an eigenvector of eigenvalue 0.

(12) We need to solve (A - 2I)x = 0. By row reducing, we find that all solutions are multiples of v = (0, 1, 1); this v is an eigenvector with eigenvalue 2.

4.3:

(6) (a) The characteristic polynomial is det(A-tI). By computing the determinant and factoring, we get: -(t+1)(t-3)(t+1).

(b) The eigenvalues are thus -1, -1, and 3 (so -1 has algebraic multiplicity 2, and 3 has algebraic multiplicity 1 and thus also geometric multiplicity 1).

(c) A basis for the eigenspace for 3 is given by any eigenvector with eigenvalue 3. We can get one either by solving (A-3I)x = 0 or by inspection: (2,3,2). A basis for the eigenspace for -1 consists of either 1 or 2 vectors (since the algebraic multiplicity is 2). This time we must solve (A+I)x = 0. The basis for the nullspace is the basis for our eigenspace. We get { (0,1,0), (1,0,-1) }.

(d) Part (c) showed that the geometric multiplicity of -1 is 2.

(10) The matrix A and also A-tI are upper triangular, so their determinants are given by multiplying the diagonal entries.

(a) Thus det(A-tI) = (2-t)(1-t)(3-t)(2-t).

(b) The eigenvalues are thus: 2, 1, 3, and 2. So 2 has algebraic multiplicity 2 and 1 and 3 have algebraic multiplicity 1 (and also geometric multiplicity 1).

(c) A basis for the t=1 eigenspace is given by (1, -1, 0, 0). A basis for the t=3 eigenspace is given by (3, 2, 1, 0). A basis for the t=2 eigenspace is given by {(1, 0, 0, 0), (0, 1, -1, 1)}.

(d) Part (c) shows that the geometric multiplicity of 2 is 2.

(18) Note that we can write

4.4:

(2) The matrices A and B are

[ 3 -1]      [ 2 1]
[-5  7] and [-4 6].

By Theorem 4.22, p. 299, we see that A and B are not similar, since the eigenvalues for A are 2 and 8, but for B they are 4 and 4. (A useful fact is that the determinant of a square matrix is the product of the eigenvalues, while the trace, which is the sum of the entries on the diagonal, is the sum of the eigenvalues. You can use this to check your work when you find eigenvalues. For 2x2 matrices you can even often guess the eigenvalues. For example, the trace of A is 10 and its determinant is 16. Thus the eigenvalues must be 2 and 8.)

(6) The eigenvalues for the matrix

[1 1 1]
[0 0 1]
[1 1 0]

are 2, 0 and -1. The corresponding eigenvectors are:

[3]  [ 1]  [ 0]
[1], [-1], [ 1].
[2]  [ 0]  [-1]

(8) The matrix A is

[5 2]
[2 5].

Since it is real and symmetric we know it is diagonalizable. The determinant is 21 and the trace is 10, so we can guess that the eigenvalues are 3 and 7. (We could also work it out by computing det(A-tI).) The corresponding eigenvectors are:

[ 1]  [1]
[-1], [1].

Thus P and D are:

[ 1 1]      [3 0]
[-1 1] and [0 7].

(10) The matrix

    [3 1 0]
A = [0 3 1]
    [0 0 3]

has only one eigenvalue, 3, and rank(A-3I) = 2 so nullity(A-3I) = 1, hence dim E_3 = 1. The geometric multiplicity of 3 (namely 1) is less than its algebraic multiplicity (namely 3), so A is not diagonalizable.

(20) Here we have

    [2 1 2]
A = [2 1 2].
    [2 1 2]

The eigenvalues are 0, 0 and 5 and the corresponding eigenvectors are:

[ 1]  [ 1]  [1]
[ 0], [-2], [1].
[-1]  [ 0]  [1]

Thus P, D and P^(-1) are:

[ 1  1 1]  [0 0 0]       [2  1 -3]
[ 0 -2 1], [0 0 0], (1/5)[1 -2  1].
[-1  0 1]  [0 0 5]       [2  1  2]

Now A = PDP^(-1).
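The diagonalization in (20) can be verified directly (numpy is an assumption here, not part of the assignment):

```python
import numpy as np

# Check (20): A should equal P D P^{-1}.
A = np.array([[2, 1, 2], [2, 1, 2], [2, 1, 2]], dtype=float)
P = np.array([[1, 1, 1], [0, -2, 1], [-1, 0, 1]], dtype=float)
D = np.diag([0.0, 0.0, 5.0])
assert np.allclose(P @ D @ np.linalg.inv(P), A)
print("A = P D P^{-1} verified")
```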

5.1:

(10) Since

(20) To check if the given matrix (call it A) is orthogonal, just check to see if each column is orthogonal to each of the other columns (they are), and if each column is a unit vector (they all are). Thus A is an orthogonal matrix, so A^(-1) = A^T.

(26) If Q is an orthogonal matrix and if A is obtained from Q by rearranging the rows of Q, prove that A is an orthogonal matrix.

Proof: Since Q is orthogonal, its rows form an orthonormal set of vectors. Rearranging the rows of Q does not change this set, so the rows of A form the same orthonormal set. A square matrix whose rows form an orthonormal set is orthogonal, so A is an orthogonal matrix.

(28) (a) Let A be an orthogonal 2x2 matrix. We know the first column, [a b]^T, is a unit vector, so a^2 + b^2 = 1. Thus (a, b) is a point on the unit circle, so a = cos(x) and b = sin(x) for some angle x.

(b) Since [a b]^T and [c d]^T are orthogonal unit vectors, [c d] must be either [-sin(x) cos(x)] or [sin(x) -cos(x)].

(c) The case that [c d] = [-sin(x) cos(x)] is the case that A defines a rotation (see p. 215). The matrix A defines a reflection T_A in the case that [c d] = [sin(x) -cos(x)].

(d) What we did above shows that if [c d] = [-b a], then T_A is a rotation, and if [c d] = [b -a], then T_A is a reflection.

5.2:

(2) The orthogonal complement W

(12) To find a basis for the orthogonal complement of W (note: I will denote the orthogonal complement of W by W^perp), let A be the matrix whose rows are the given vectors spanning W:

[1 -1  3 -2]
[0  1 -2  1].

Row reducing and solving Ax = 0 gives the basis {(-1, 2, 1, 0), (1, -1, 0, 1)} for W^perp.
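The two vectors found by solving Ax = 0 by hand can be checked against the rows of A (numpy is an assumption here, not part of the assignment): anything in null(A) is orthogonal to both rows, hence lies in the orthogonal complement of W.

```python
import numpy as np

# Check (12): both vectors are orthogonal to the rows of A.
A = np.array([[1, -1, 3, -2], [0, 1, -2, 1]], dtype=float)
for v in (np.array([-1, 2, 1, 0], dtype=float),
          np.array([1, -1, 0, 1], dtype=float)):
    assert np.allclose(A @ v, 0)
print("both vectors lie in the orthogonal complement of W")
```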

(16) The orthogonal projection of

(22) First find

(26) Let {

Proof: Since {

5.3:

(6) Here

(8) First find the projection of

(12) Find an orthogonal basis for

As an alternative, find an orthogonal basis for span(

5.4:

(6) The eigenvalues of the matrix are 2, 7 and -3. Note that the trace is 6, which agrees with the sum of the eigenvalues. Corresponding eigenvectors are [4 0 -3], [3 5 4], and [3 -5 4]. The results of section 5.4 tell us that these eigenvectors should be orthogonal, and a quick check shows that they are. The matrix Q is obtained by dividing each eigenvector by its length, and using it as a column of Q. The eigenvalues give the diagonal entries of D.

(8) The eigenvalues of the matrix are -1, -1 and 5. Note that the trace is 3, which agrees with the sum of the eigenvalues. Corresponding eigenvectors are [1 -1 0], [1 0 -1], and [1 1 1]. The first two eigenvectors need not be orthogonal to each other (and the ones I just gave are not), but by the results of section 5.4 both are orthogonal to the third. Before we can get Q, we need to find an orthogonal basis of E_{-1}: applying Gram-Schmidt to {[1 -1 0], [1 0 -1]} gives the orthogonal basis {[1 -1 0], [1/2 1/2 -1]}. Normalizing these two vectors and [1 1 1] and using the results as the columns of Q, the eigenvalues -1, -1 and 5 give the diagonal entries of D.

(14) If A is invertible, orthogonally diagonalizable and real, show that A^(-1) is orthogonally diagonalizable. Proof: Write A = QDQ^T with Q orthogonal and D diagonal; since A is invertible, no diagonal entry of D is 0, so A^(-1) = (QDQ^T)^(-1) = QD^(-1)Q^T, and D^(-1) is diagonal, so this is an orthogonal diagonalization of A^(-1).

(24) We are given eigenvalues 0, -4 and -4, and corresponding eigenvectors [4 5 -1], [-1 1 1] and [2 -1 3]. Let D be the diagonal matrix whose diagonal entries are 0, -4 and -4. Divide each eigenvector by its length, and make a matrix Q whose columns are these unit vectors. Then Q is orthogonal, and A = QDQ^T is a matrix with the given eigenvalues and eigenvectors.
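The construction in (24) is easy to carry out and check with numpy (an assumption here, not part of the assignment):

```python
import numpy as np

# Check (24): build Q from the normalized eigenvectors, form
# A = Q D Q^T, and confirm the prescribed eigenpairs.
vecs = [np.array([4, 5, -1.0]), np.array([-1, 1, 1.0]), np.array([2, -1, 3.0])]
Q = np.column_stack([v / np.linalg.norm(v) for v in vecs])
D = np.diag([0.0, -4.0, -4.0])
A = Q @ D @ Q.T
assert np.allclose(Q.T @ Q, np.eye(3))         # Q is orthogonal
assert np.allclose(A @ vecs[0], 0)             # eigenvalue 0
assert np.allclose(A @ vecs[1], -4 * vecs[1])  # eigenvalue -4
assert np.allclose(A @ vecs[2], -4 * vecs[2])  # eigenvalue -4
print("A = Q D Q^T has the required eigenpairs")
```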

6.1:

(36) The set W={a+bx+cx

(54) The polynomial 1+1x+1x

(56) The answer is: h(x) = cos(2x) is in span(sin^2(x), cos^2(x)), since cos(2x) = cos^2(x) - sin^2(x); i.e., cos(2x) = (-1)sin^2(x) + (1)cos^2(x).

(62) The polynomials 1+1x+2x