

Question of the Day


Key Concepts

  1. Basic Matrix Operations and Forms
  2. Eigenvalues and Eigenvectors
  3. Special Matrix Forms and Properties


Vocabulary

  1. x, a nonzero vector in R^n, is an eigenvector with eigenvalue c ∈ R if:
    Ax = cx
  2. We say A is diagonal if all of its non-diagonal entries are zero, that is, aij = 0 for i ≠ j. This does not mean each diagonal entry must be nonzero. We say A is an upper-triangular matrix if all entries below the diagonal are zero, that is, aij = 0 for i > j. Similarly, we say A is lower-triangular if all entries above the diagonal are zero, that is, aij = 0 for i < j.


Mathematical Ideas

This section was written by LT Grant and is used with permission. Pieces of this section are adapted from: Elementary Linear Algebra, Richard Hill, 1996.

Basic Matrix Ideas

A rectangular array of real numbers with m rows and n columns is an m × n matrix A. We denote the matrix and its elements as follows:

    ( a11  a12  ...  a1n )
A = ( a21  a22  ...  a2n )
    (  :    :         :  )
    ( am1  am2  ...  amn )

We will mention a few standard forms for A as an n × n square matrix, although most of these terms have parallels in non-square matrices. We say A is diagonal if all of its non-diagonal entries are zero, that is, aij = 0 for i ≠ j. This does not mean each diagonal entry must be nonzero. We say A is an upper-triangular matrix if all entries below the diagonal are zero, that is, aij = 0 for i > j. Similarly, we say A is lower-triangular if all entries above the diagonal are zero, that is, aij = 0 for i < j. The following three matrices are examples of each type respectively:

( 3  0  0 )   ( -4  16  2 )   (  9  0  0  0 )
( 0 -2  0 ) , (  0   7  8 ) , (  0  2  0  0 )
( 0  0  1 )   (  0   0  1 )   ( -1  3  1  0 )
                              (  1 -3  5  7 )

The transpose of a matrix A is denoted AT. Thus if we consider A as the m × n matrix defined above, then AT is an n × m matrix with entries (AT)ij = aji. One can think about AT as the matrix A "flipped" about the diagonal entries.
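As a quick illustration, here is a pure-Python sketch of the transpose (the helper `transpose` is ours for illustration, not from any library):

```python
def transpose(A):
    """Return the transpose of a matrix given as a list of rows:
    entry (i, j) of the result is entry (j, i) of A."""
    m, n = len(A), len(A[0])
    return [[A[i][j] for i in range(m)] for j in range(n)]

A = [[1, 2, 3],
     [4, 5, 6]]          # a 2 x 3 matrix
print(transpose(A))      # -> [[1, 4], [2, 5], [3, 6]], a 3 x 2 matrix
```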

In order to solve a linear system Ax = b, one (computationally inefficient and inaccurate, but theoretically simple) way is to take the inverse of the matrix A. However, this is only possible if A is nonsingular, or equivalently, det A ≠ 0. To calculate the inverse, one can either perform the operation by hand, or use a computer program such as MATLAB or Maple.

Computing the inverse of even small matrices by hand is tedious, and the use of the computer helps to speed this process up considerably. However, one should be aware that for large matrices computing inverses can be a very difficult, inaccurate, misleading and time-consuming task. Thus we almost always "transform" our matrix into a "nicer" form in order to save computational steps. In doing this we often gain additional results and facts about our problem that would not have been made clear by just using the computer. What makes a matrix "nicer" depends on the context, and that context determines the nature of the transformation used to obtain the "nicer" form.
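For the 2 × 2 case the inverse does have a simple closed form, which makes the solve-by-inverse idea concrete. A minimal sketch (pure Python; the helper name `inv2` is ours for illustration), including the caveat that it fails when det A = 0:

```python
def inv2(A):
    """Closed-form inverse of a 2 x 2 matrix [[a, b], [c, d]]:
    (1/det) * [[d, -b], [-c, a]], valid only when det = ad - bc != 0."""
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular; no inverse exists")
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[5, 6], [2, 1]]
b = [1, 0]
Ainv = inv2(A)
# x = A^(-1) b solves A x = b
x = [Ainv[0][0] * b[0] + Ainv[0][1] * b[1],
     Ainv[1][0] * b[0] + Ainv[1][1] * b[1]]
print(x)  # -> [-1/7, 2/7] (as floats)
```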

Row Operations

In order to transform matrices into "nicer" forms we perform row (and column) operations. In simple terms this consists of multiplying a row of the matrix by a nonzero constant, adding one row to another, or swapping rows. The same operations are available for columns.

More formally, one can represent any such row operation as left multiplication by a nonsingular matrix. For example, given a matrix A with five rows, multiplying by the matrix B below to form BA will double the first row, subtract the second row from the third, and swap the fourth and fifth rows.

    ( 2  0  0  0  0 )
    ( 0  1  0  0  0 )
B = ( 0 -1  1  0  0 )
    ( 0  0  0  0  1 )
    ( 0  0  0  1  0 )

We can also represent column operations by right multiplication. That is, AB would double the first column, and so on.
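The row-operation matrix B above can be checked directly. The sketch below (pure Python; the helper name `matmul` is ours for illustration) applies B to a 5 × 1 column so each row operation is visible:

```python
def matmul(B, A):
    """Multiply matrices given as lists of rows."""
    return [[sum(B[i][k] * A[k][j] for k in range(len(A)))
             for j in range(len(A[0]))] for i in range(len(B))]

B = [[2,  0, 0, 0, 0],   # double row 1
     [0,  1, 0, 0, 0],
     [0, -1, 1, 0, 0],   # row 3 <- row 3 - row 2
     [0,  0, 0, 0, 1],   # swap rows 4 and 5
     [0,  0, 0, 1, 0]]

A = [[r] for r in (1, 2, 3, 4, 5)]  # a 5 x 1 column, to watch the rows move
print(matmul(B, A))                 # -> [[2], [2], [1], [5], [4]]
```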

Linear Dependence

We say two vectors u, v are linearly dependent if there are two constants c1, c2, not both zero, such that c1u + c2v = 0. If no such constants exist, then we say u, v are linearly independent. This idea extends to sets of n vectors. For example, if {v1,...,vn} is a set of linearly independent vectors and

c1v1 + c2v2 + ...+ cnvn = 0

then we know that c1 = c2 = ... = cn = 0.

An equivalent interpretation is that a set of vectors is linearly dependent if one vector in the set can be written as a linear combination of the other vectors.

One might wonder how large sets of linearly independent vectors can be, and there is an upper limit. Consider any set of 3 distinct 2 × 1 vectors, say u1, u2, u3. No matter which three vectors you choose, you will always be able to write one as a linear combination of the other two. In general, when working in R^n, any set of n + 1 vectors must be linearly dependent. This however does not guarantee that sets of size n or smaller will necessarily be linearly independent; consider:

     (  1 )        ( 1 )        ( 0 )
u1 = ( -1 ) , u2 = ( 0 ) , u3 = ( 1 )
     (  0 )        ( 0 )        ( 0 )

We note that u3 = u2 -u1.

Eigenvalues & Eigenvectors

Let A be an n × n matrix. Then we say that x, a nonzero vector in R^n, is an eigenvector with eigenvalue c ∈ R if:

Ax = cx

We call (x, c) an eigenpair for the matrix A. We note that while we require the vector x to be nonzero, c may be zero. For an n × n matrix we will have n, not necessarily distinct, eigenvalues. To find the eigenvalues of a matrix we solve the equation:

det(A - cI) = 0

Example 1 Find all the eigenvalues of the following matrix:

A = ( 5  6 )
    ( 2  1 )

We begin by using the formula above:

det(A - cI) = det( ( 5  6 ) - c ( 1  0 ) ) = det( 5 - c    6   )
                  ( 2  1 )     ( 0  1 )        (   2    1 - c )

            = (5 - c)(1 - c) - (6)(2) = c^2 - 6c - 7 = (c - 7)(c + 1) = 0
            ==>  c1 = 7, c2 = -1
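For any 2 × 2 matrix [[a, b], [c, d]], the characteristic polynomial is c^2 - (a + d)c + (ad - bc), so the eigenvalues follow from the quadratic formula. A short sketch (pure Python; the helper name `eigvals2` is ours, and it assumes the eigenvalues are real):

```python
import math

def eigvals2(A):
    """Eigenvalues of a 2 x 2 matrix via the quadratic formula applied to
    c^2 - tr(A) c + det(A) = 0. Assumes real eigenvalues (nonnegative
    discriminant)."""
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

print(eigvals2([[5, 6], [2, 1]]))  # -> (7.0, -1.0), as in Example 1
```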

While we can find all eigenvalues in one set of operations, we must treat each eigenvalue individually when we look for the eigenvectors. So with a fixed eigenvalue c, we try to find a vector x such that:

(A - cI)x = 0

Note that for a given eigenvalue c the set of all eigenvectors, together with the zero vector, is the null space of the matrix A - cI, and so is a subspace of R^n called the eigenspace. We often need a basis of the eigenspace.

Example 2 Find all the eigenvectors of the following matrix:

A = ( 5  6 )
    ( 2  1 )

From above we know c1 = 7,c2 = -1 . We first find x1:

(A - c1 I) x1 = 0

( -2   6 ) ( x1,1 )   ( 0 )
(  2  -6 ) ( x1,2 ) = ( 0 )

We choose x1 = (3, 1)T; however, this choice is unique only up to a constant multiple. Similarly we look for x2:
(A - c2 I) x2 = 0

( 6  6 ) ( x2,1 )   ( 0 )
( 2  2 ) ( x2,2 ) = ( 0 )

Here we choose x2 = (1, -1)T, but one could just as well choose x2 = (-1, 1)T, or any other constant multiple.
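Any claimed eigenpair is easy to verify from the definition Ax = cx. A pure-Python check of both eigenpairs from Example 2 (the helper name `matvec` is ours for illustration):

```python
def matvec(A, x):
    """Matrix-vector product for A as a list of rows."""
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

A = [[5, 6], [2, 1]]
x1, c1 = [3, 1], 7
x2, c2 = [1, -1], -1

print(matvec(A, x1), [c1 * v for v in x1])  # -> [21, 7] [21, 7]
print(matvec(A, x2), [c2 * v for v in x2])  # -> [-1, 1] [-1, 1]
```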

A few additional useful facts relate the eigenvalues to the determinant and trace of an n × n matrix A. The trace, tr(A), is the sum of the diagonal elements of the matrix.

det A = c1 c2 ... cn
tr(A) = c1 + ... + cn
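These two identities are easy to confirm on the running 2 × 2 example, where the eigenvalues were found to be 7 and -1:

```python
# Check det(A) = c1*c2 and tr(A) = c1 + c2 for A = [[5, 6], [2, 1]].
A = [[5, 6], [2, 1]]
c1, c2 = 7, -1

det = A[0][0] * A[1][1] - A[0][1] * A[1][0]  # ad - bc for a 2 x 2 matrix
tr = A[0][0] + A[1][1]

print(det, c1 * c2)  # -> -7 -7
print(tr, c1 + c2)   # -> 6 6
```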

Special forms of matrices can make finding eigenvalues trivial. In particular, the diagonal entries of diagonal matrices are the eigenvalues of the matrix. This fact also holds for upper and lower triangular matrices.
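The reason is that for a triangular matrix A, det(A - cI) is just the product of the diagonal entries of A - cI, so the characteristic polynomial factors as (a11 - c)(a22 - c)...(ann - c). A quick check on a sample upper-triangular matrix (the matrix is ours for illustration):

```python
# An upper-triangular matrix; its eigenvalues should be the diagonal: -4, 7, 1.
A = [[-4, 16, 2],
     [ 0,  7, 8],
     [ 0,  0, 1]]

# For each diagonal entry c, det(A - cI) is the product of the diagonal of
# A - cI (since that matrix is still triangular), and it should vanish.
dets = []
for c in (-4, 7, 1):
    d = 1
    for i in range(3):
        d *= A[i][i] - c
    dets.append(d)
print(dets)  # -> [0, 0, 0]
```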

Vector and Matrix Norms

In order to compare the relative size of vectors, we introduce the notion of a norm. In the most general sense one can view the norm as describing the size of a vector, or as we will see later, a matrix. We denote the norm of the vector v by ||v||. More specifically, we say something is a norm if it satisfies the following three requirements:

  1. ||v|| ≥ 0, and ||v|| = 0 if and only if v = 0.
  2. ||kv|| = |k| ||v|| for any constant k ∈ R.
  3. ||v + u|| ≤ ||v|| + ||u||.

We note that one has multiple choices of norm, and while at first this variety may appear to be overkill, each norm has characteristics which make it more useful than the others in certain situations. We list the three most common vector norms and their common names, showing how to calculate each for a generic x = (x1,...,xn) ∈ R^n:

  1. The 1-norm: ||x||1 = |x1| + |x2| + ... + |xn|.
  2. The 2-norm (Euclidean norm): ||x||2 = (x1^2 + x2^2 + ... + xn^2)^(1/2).
  3. The infinity-norm (maximum norm): ||x||∞ = max |xi|.
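The three common vector norms (1-norm, 2-norm, and infinity-norm) can be sketched in a few lines of pure Python (the helper names are ours for illustration):

```python
import math

def norm1(x):
    """1-norm: sum of absolute values."""
    return sum(abs(v) for v in x)

def norm2(x):
    """2-norm (Euclidean): square root of the sum of squares."""
    return math.sqrt(sum(v * v for v in x))

def norminf(x):
    """infinity-norm: largest absolute value."""
    return max(abs(v) for v in x)

x = [3, -4]
print(norm1(x), norm2(x), norminf(x))  # -> 7 5.0 4
```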

One can apply any vector norm to create a norm for matrices operating on those vectors through the following definition:

||A|| = sup_{x ≠ 0} ||Ax|| / ||x||

The following is an equivalent, and often more useful definition of the matrix norm:

||A|| = max_{||x|| = 1} ||Ax||

For the norms mentioned above, the matrix norms can be calculated without the use of these definitions.
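For example, the matrix norm induced by the 1-norm is the largest absolute column sum, and the norm induced by the infinity-norm is the largest absolute row sum. A small sketch of these two formulas (pure Python; helper names are ours):

```python
def matnorm1(A):
    """Matrix 1-norm: largest absolute column sum."""
    return max(sum(abs(A[i][j]) for i in range(len(A)))
               for j in range(len(A[0])))

def matnorminf(A):
    """Matrix infinity-norm: largest absolute row sum."""
    return max(sum(abs(v) for v in row) for row in A)

A = [[5, 6], [2, 1]]
print(matnorm1(A), matnorminf(A))  # -> 7 11
```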

Special Matrix Forms

We now mention a few other types of matrices which may appear later.

Additional Matrix Facts

Properties of the determinant:
  1. det(AB) = det(A) det(B)
  2. det(AT) = det(A)
  3. If X is an invertible matrix, then det(XAX-1) = det(A).


Problems to Work for Understanding

  1. For the following vectors decide if they are linearly dependent or independent. If they are linearly dependent write one of the vectors as a linear combination of the others.
    1. (2, 1)T , (1, 0)T
    2. (1, 2, 3)T , (1, 0, 1)T , (0, 1, 1)T
    3. (1, 0, 1)T , (1, 1, 0)T , (0, 0, 1)T
    4. (1, 2, 3)T , (-3, -6, -9)T
    5. (1, 2, 3)T , (1, 0, 1)T , (0, 1, 1)T , (1, 1, 1)T

    (Problem created by LT Grant, 2006.)

  2. Find the eigenvalues and corresponding eigenvectors for the following matrices.
    1. ( 2  0 )
       ( 0 -3 )
    2. ( 5 -8 )
       ( 1 -1 )
    3. ( 2 -5  5 )
       ( 0  3 -1 )
       ( 0 -1  3 )

    (Problem created by LT Grant, 2006.)

  3. Let v = (3, -2, 1, 4)T. What is ||v||1? What is ||v||2? What is ||v||∞? (Problem created by LT Grant, 2006.)
  4. Prove that the eigenvalues of an upper triangular matrix are the diagonal entries. (Hint: Consider expanding the determinant along the rows). (Problem created by LT Grant, 2006.)
  5. Let c0 be an eigenvalue of A , with corresponding eigenvector x0 .
    1. What is an eigenvalue of A^n?
    2. What is an eigenvalue of A^(-1)?

    (Problem created by LT Grant, 2006.)

  6. Prove det(AT) = det(A). (Hint: Use induction.) (Problem created by LT Grant, 2006.)


Reading Suggestion:


Outside Readings and Links:

