Math 314

Topics for first exam


Chapter 1: Linear systems of equations

§ 1:
Some examples
Systems of linear equations:

2x-3y-z = 6

3x+2y+z = 7

Goal: find simultaneous solutions: all x,y,z satisfying both equations.

Most general type of system:

a_{11}x_1 + … + a_{1n}x_n = b_1

⋮

a_{m1}x_1 + … + a_{mn}x_n = b_m

Example: input-output models

§ 2:
Notations and a review of numbers
Set notation: A∪B, A∩B, A\B

Number systems: natural, integers, rational, reals, complex

Some complex arithmetic:

i = √(-1); pretend i behaves like a real number

complex numbers: standard form z = a+bi ; addition, subtraction, multiplication

division: complex conjugate z̄ = a-bi

conjugation respects sums and products: the conjugate of z+w is z̄+w̄, and the conjugate of zw is z̄·w̄

z·z̄ = a^2+b^2 (real!) ; z_1/z_2 = (z_1·z̄_2)/(z_2·z̄_2)
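A minimal sketch of this division trick in Python (using the built-in complex type; the particular numbers are just an illustration, not from the notes):

```python
# Divide z1/z2 by multiplying numerator and denominator by the conjugate of z2.
z1, z2 = 1 + 2j, 3 - 4j           # example values

num = z1 * z2.conjugate()          # z1 * conj(z2)
den = (z2 * z2.conjugate()).real   # z2 * conj(z2) = a^2 + b^2, a real number
quotient = num / den

print(quotient)                    # -0.2 + 0.4j
print(z1 / z2)                     # built-in division agrees
```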

Polar coordinates:

z = a+bi (complex number) = (a,b) (point in plane) =

(r, θ) (distance from origin and angle with (positive) x-axis)

z = a+bi = r(cos θ + i sin θ) = re^{iθ} , w = c+di = s(cos φ + i sin φ) = se^{iφ} , then

zw = rs(cos(θ+φ) + i sin(θ+φ)) = (rs)e^{i(θ+φ)}. Setting z = w yields

z^n = r^n e^{i(nθ)} (De Moivre's formula)

Think backwards; solve z^n = w

Need: r^n = s , cos(nθ) = cos φ , sin(nθ) = sin φ ; i.e.

r = s^{1/n} , nθ = φ + 2kπ, i.e., θ = φ/n + 2kπ/n

So z^n = w has n distinct solutions, coming from k = 0, 1, …, n-1
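A small sketch of producing the n solutions from r = s^{1/n} and θ = φ/n + 2kπ/n (Python; the choice w = 8i and n = 3 is just an example):

```python
import cmath

w = 8j           # example: solve z^3 = 8i
n = 3

s, phi = abs(w), cmath.phase(w)     # polar form w = s*e^{i*phi}
roots = [s**(1/n) * cmath.exp(1j * (phi/n + 2*cmath.pi*k/n)) for k in range(n)]

for z in roots:
    print(z, z**n)                   # each z**n comes back (numerically) as 8i
```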

§ 3:
Gaussian elimination: basic ideas
3x+5y = 2

2x+3y = 1

Idea: use the 3x in the first equation to eliminate the 2x in the second equation. How? Add a multiple of the first equation to the second. Then use the y-term in the new second equation to remove the 5y from the first!

The point: a solution to the original equations must also solve the new equations. The real point: it's much easier to figure out the solutions of the new equations!

Streamlining: keep only the essential information; throw away unneeded symbols!

[ 3  5 | 2 ]
[ 2  3 | 1 ]

We get an (augmented) matrix representing the system of equations. We carry out the same operations we used with equations, but do them to the rows of the matrix.

Three basic operations (elementary row operations):

Eij : switch ith and jth rows of the matrix

Eij(m) : add m times jth row to the ith row

Ei(m) : multiply ith row by m

Terminology: first non-zero entry of a row = leading entry; leading entry used to zero out a column = pivot.

Basic procedure (Gauss-Jordan elimination): find a non-zero entry in the first column, switch it up to the first row (E1j) (pivot in (1,1) position). Use E1(m) to make the first entry a 1, then use Ej1(m) operations to zero out the other entries of the first column. Then: find the leftmost non-zero entry in the remaining rows, switch it to the second row, and use it as a pivot to clear out the entries in the column below it. Continue (forward solving). When done, use the pivots to clear out the entries in the columns above the pivots (back-solving).
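A rough sketch of this procedure in Python (using exact fractions to avoid round-off; an illustration of the steps, not optimized code):

```python
from fractions import Fraction

def rref(rows):
    """Row reduce a matrix (list of lists) to reduced row echelon form."""
    A = [[Fraction(x) for x in row] for row in rows]
    m, n = len(A), len(A[0])
    pivot_row = 0
    for col in range(n):
        # find a row at or below pivot_row with a nonzero entry in this column
        pr = next((r for r in range(pivot_row, m) if A[r][col] != 0), None)
        if pr is None:
            continue                                      # no pivot here: free column
        A[pivot_row], A[pr] = A[pr], A[pivot_row]         # Eij: swap rows
        piv = A[pivot_row][col]
        A[pivot_row] = [x / piv for x in A[pivot_row]]    # Ei(m): make the pivot a 1
        for r in range(m):                                # Ej1(m)-type ops: clear the column
            if r != pivot_row and A[r][col] != 0:
                factor = A[r][col]
                A[r] = [x - factor * y for x, y in zip(A[r], A[pivot_row])]
        pivot_row += 1
        if pivot_row == m:
            break
    return A

# the augmented matrix of the system 3x+5y=2, 2x+3y=1
print(rref([[3, 5, 2], [2, 3, 1]]))   # [[1, 0, -1], [0, 1, 1]]  ->  x = -1, y = 1
```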

Variable in linear system corresponding to a pivot = bound variable; other variables = free variables

§ 4:
Gaussian elimination: general procedure
The big fact: After elimination, the new system of linear equations has exactly the same solutions as the old system. Because: row operations are reversible!

Reverse of Eij is Eij; reverse of Eij(m) is Eij(-m); reverse of Ei(m) is Ei(1/m)

So: you can get the old equations from the new ones; so a solution to the new equations must solve the old equations as well.

Reduced row form: apply elementary row operations to turn the matrix A into one such that

(a) each row looks like (0 0 0 … 0 * * … *); the first * = leading entry

(b) leading entry for row below is further to the right

Reduced row echelon form: in addition, have

(c) each leading entry is = 1

(d) each leading entry is the only non-zero number in its column.

RRF can be achieved by forward solving; RREF by back-solving and Ei(m) 's

Elimination: every matrix can be put into RREF by elementary row operations.

Big Fact: If a matrix A is put into RREF by two different sets of row operations, you get the same matrix.

RREF of an augmented matrix: can read off solutions to linear system.

E.g., for the system 3x+5y=2, 2x+3y=1 above, the RREF of the augmented matrix is

[ 1  0 | -1 ]
[ 0  1 |  1 ]

which reads off as x = -1, y = 1.

Inconsistent systems: row of zeros in coefficient matrix, followed by a non-zero number (e.g., 2). Translates as 0=2 ! System has no solutions.

Rank of a matrix = r(A) = number of non-zero rows in RREF = number of pivots in RREF.

Nullity of a matrix = n(A) = number of columns without a pivot = # columns - # pivots

rank = number of bound variables, nullity = number of free variables

rank ≤ number of rows, number of columns (at most one pivot per row/column!)

rank + nullity = number of columns = number of variables

A = coefficient matrix, Ã = augmented matrix (A = m×n matrix)

system is consistent if and only if r(A) = r(Ã)

For a consistent system: r(A) = n : unique solution ; r(A) < n : infinitely many solutions
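A brief illustration of reading rank, nullity, and consistency off the RREF (using sympy, which has a built-in rref; the particular system is just an example):

```python
import sympy as sp

A = sp.Matrix([[1, 2, 3],
               [2, 4, 6],
               [1, 1, 1]])           # coefficient matrix (example)
b = sp.Matrix([1, 2, 0])
A_aug = A.row_join(b)                # the augmented matrix Ã

R, pivots = A_aug.rref()             # RREF and the pivot columns
rank_A, rank_aug = A.rank(), A_aug.rank()
nullity_A = A.cols - rank_A

print(R, pivots)
print(rank_A, rank_aug, nullity_A)   # consistent iff rank_A == rank_aug;
                                     # unique solution iff rank_A == number of variables
```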


Chapter 2: Matrix algebra


§ 1:
Matrix addition and scalar multiplication
Idea: take our ideas from vectors. Add entry by entry. Constant multiple of matrix: multiply entry by entry.

0 = matrix all of whose entries are 0

Basic facts:

A+B makes sense only if A and B are the same size (m×n) matrix

A+B = B+A

(A+B)+C = A+(B+C)

A+0 = A

A+(-1)A = 0

cA has the same size as A

c(dA) = (cd)A

(c+d)A = cA + dA

c(A+B) = cA + cB

1A = A
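These rules are easy to spot-check numerically; a quick sketch with numpy (the matrices and constants are arbitrary examples):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, -1], [5, 2]])
c, d = 3, -2

# entrywise addition and scalar multiplication
print(A + B)
print(np.array_equal(c*(A + B), c*A + c*B))   # c(A+B) = cA + cB  -> True
print(np.array_equal((c + d)*A, c*A + d*A))   # (c+d)A = cA + dA  -> True
```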

§ 2:
Matrix multiplication
Idea: don't multiply entry by entry! We want matrix multiplication to allow us to write a system of linear equations as Ax=b ....

Basic step: a row of A, times x, equals an entry of Ax. (A row vector (a_1, …, a_n) times a column vector (x_1, …, x_n) is a_1x_1 + … + a_nx_n ....) This leads to:

In AB, each row of A is 'multiplied' by each column of B to obtain an entry of AB. Need: the length of the rows of A (= number of columns of A) = the length of the columns of B (= number of rows of B). I.e., in order to multiply, A must be m×n, and B must be n×k; AB is then m×k.

Formula: the (i,j)th entry of AB is Σ_{k=1}^{n} a_ik·b_kj

I = identity matrix; square matrix (n×n) with 1's on diagonal, 0's off diagonal

Basic facts:

AI = A = IA

(AB)C = A(BC)

c(AB) = (cA)B = A(cB)

(A+B)C = AC + BC

A(B+C) = AB + AC

In general, however, it is NOT true that AB and BA are the same; they are almost always different!
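A small sketch (Python with numpy; example matrices) of the entry formula Σ_k a_ik·b_kj, and of the fact that AB and BA usually differ:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])      # 2x2 examples
B = np.array([[0, 1], [1, 0]])

# (i,j) entry of AB straight from the formula: sum over k of A[i,k]*B[k,j]
def entry(A, B, i, j):
    return sum(A[i, k] * B[k, j] for k in range(A.shape[1]))

AB = np.array([[entry(A, B, i, j) for j in range(B.shape[1])] for i in range(A.shape[0])])
print(np.array_equal(AB, A @ B))    # True: the formula is what @ computes

print(A @ B)                        # [[2 1], [4 3]]
print(B @ A)                        # [[3 4], [1 2]] -- not the same as A @ B
```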

§ 3:
Applications of matrix arithmetic
Ax=b ; A an m-by-n matrix. Think: x = vector = variable (size n) , Ax = vector = image of x (size m)

i.e., A takes vectors in R^n and spits out vectors in R^m; it's a function (which we call T_A) from R^n to R^m. More than that, it's a linear function:

T_A(ax+by) = aT_A(x) + bT_A(y)

With this new notation, matrix multiplication becomes composition of functions.
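A quick sketch of this point of view (numpy; the matrices are arbitrary examples): composing the functions T_A and T_B gives the same result as multiplying by AB.

```python
import numpy as np

A = np.array([[1, 0, 2],
              [0, 1, 1]])               # 2x3: a function from R^3 to R^2
B = np.array([[1, 1], [0, 2], [3, 0]])  # 3x2: a function from R^2 to R^3

T_A = lambda x: A @ x
T_B = lambda x: B @ x

x = np.array([1, 2])
print(T_A(T_B(x)))                      # compose: first T_B, then T_A
print((A @ B) @ x)                      # same vector: composition is matrix multiplication
```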

What do we do with matrix multiplication? Solve equations!

Ax=b ; basic idea, try to find a matrix B with BA=I, so then x = Ix = (BA)x = B(Ax) = Bb solves the equation. (How to find B? Wait.....)

Another application: Markov chains

Idea: in any given month, a fixed percentage of the people using one product switch to another.

a_1 = .3a_0 + .4b_0 + .2c_0 , b_1 = .4a_0 + .5b_0 + .6c_0 , c_1 = .3a_0 + .1b_0 + .2c_0

New distribution, given initial distribution x, is Ax, where

A =
[ .3  .4  .2 ]
[ .4  .5  .6 ]
[ .3  .1  .2 ]

More generally, a Markov chain consists of an (initial) probability distribution vector (entries are ≥ 0 and add up to 1) and a transition matrix A (entries are ≥ 0 and each column adds up to 1). The distribution evolves by multiplication by A. E.g., after 20 iterations, the initial vector x evolves into A^20·x.
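A short sketch (numpy) iterating the transition matrix from the example above; the initial distribution is an arbitrary choice:

```python
import numpy as np

A = np.array([[0.3, 0.4, 0.2],
              [0.4, 0.5, 0.6],
              [0.3, 0.1, 0.2]])     # columns sum to 1 (the transition matrix above)
x = np.array([0.5, 0.3, 0.2])       # an initial distribution (entries >= 0, summing to 1)

for _ in range(20):                 # after 20 months the distribution is A^20 x
    x = A @ x
print(x, x.sum())                   # still a probability distribution (sums to 1)
```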

§ 4:
Special matrices and transposes
Elementary matrices:

A row operation (Eij , Eij(m) , Ei(m)) applied to a matrix A corresponds to multiplication (on the left) by a matrix (also denoted Eij , Eij(m) , Ei(m)). The matrices are obtained by applying the row operation to the identity matrix I_n. E.g., the 4×4 matrix E13(-2) looks like I, except it has a -2 in the (1,3) entry.

The idea: if A → B by the elementary row operation E, then B = EA.

So if A → B → C by elementary row operations, then C = E2E1A ....

Row reduction is matrix multiplication!
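A small check of this (numpy; the matrix A is an arbitrary example): left multiplication by the 4×4 elementary matrix E13(-2) described above performs the corresponding row operation.

```python
import numpy as np

E = np.eye(4)
E[0, 2] = -2                       # E13(-2): identity with a -2 in the (1,3) entry

A = np.arange(16.0).reshape(4, 4)  # an arbitrary 4x4 matrix

B = A.copy()
B[0, :] += -2 * B[2, :]            # the row operation itself: add -2*(row 3) to row 1

print(np.allclose(E @ A, B))       # True: multiplying by E on the left does the row operation
```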

A scalar matrix A has the same number c in the diagonal entries, and 0's everywhere else (the idea: AB = cB)

A diagonal matrix has all entries zero off of the (main) diagonal

An upper triangular matrix has entries = 0 below the diagonal, a lower triangular matrix is 0 above the diagonal. A triangular matrix is either upper or lower triangular.

A strictly triangular matrix is triangular, and has zeros on the diagonal, as well. They come in upper and lower flavors.

The transpose of a matrix A is the matrix A^T whose columns are the rows of A (and vice versa). A^T is A reflected across the main diagonal. (a_ij)^T = (a_ji) ; (m×n)^T = (n×m)

Basic facts:

(A+B)^T = A^T + B^T

(AB)^T = B^T·A^T

(cA)^T = c·A^T

(A^T)^T = A

Transpose of an elementary matrix is elementary:

Eij^T = Eij , Eij(m)^T = Eji(m) , Ei(m)^T = Ei(m)

A matrix A is symmetric if A^T = A

An occasionally useful fact: AE, where E is an elementary matrix, is the result of an elementary column operation on A .

The transpose and rank:

For any pair of compatible matrices, r(AB) ≤ r(A)

Consequences: r(A^T) = r(A) for any matrix A; r(AB) ≤ r(B), as well.

§ 5:
Matrix inverses
One way to solve Ax=b : find a matrix B with BA=I . When is there such a matrix?

(Think about square matrices...) A an n-by-n matrix ; n = r(I) = r(BA) ≤ r(A) ≤ n implies that r(A) = n . This is necessary, and it is also sufficient!

If r(A) = n, then the RREF of A has n pivots in n rows and columns, so it has a pivot in every row, so the RREF of A is I. But! this means we can get to I from A by row operations, which correspond to multiplication by elementary matrices. So multiply A (on the left) by the right elementary matrices and you get I; call the product of those matrices B and you get BA = I !

It turns out (by using the transpose) that AB=I as well!

A matrix B is an inverse of A if AB=I and BA=I; it turns out, the inverse of a matrix is always unique. We call it A^{-1} (and call A invertible).

Finding A^{-1} : row reduction! (of course...)

Build the "super-augmented" matrix (A|I) (the matrix A with the identity matrix next to it). Row reduce A, and carry out the operations on the entire row of the S-A matrix (i.e., carry out the identical row operations on I). Wnem done, if invertible+ the left-hand side of the S-A matrix will be I; the right-hand side will be A-1 !

I.e., if (A|I) → (I|B) by row operations, then I = BA .
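A sketch of the super-augmented-matrix method (done here with sympy so the row reduction is exact; the matrix A is just an invertible example):

```python
import sympy as sp

A = sp.Matrix([[2, 1], [5, 3]])    # an invertible example (det = 1)
I = sp.eye(2)

SA = A.row_join(I)                 # the super-augmented matrix (A | I)
R, _ = SA.rref()                   # row reduce the whole thing

B = R[:, 2:]                       # right-hand block: should be A^{-1}
print(R[:, :2])                    # left-hand block: the identity
print(B, B * A)                    # B*A = I
```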

Basic facts:

(A^{-1})^{-1} = A

if A and B are invertible, then so is AB, and (AB)^{-1} = B^{-1}·A^{-1}

(cA)^{-1} = (1/c)·A^{-1}

(A^T)^{-1} = (A^{-1})^T

If A is invertible, and AB = AC, then B = C; if BA = CA, then B = C.

Inverses of elementary matrices:

Eij^{-1} = Eij , Eij(m)^{-1} = Eij(-m) , Ei(m)^{-1} = Ei(1/m)

Highly useful formula: for a 2-by-2 matrix

A =
[ a  b ]
[ c  d ]

we have

A^{-1} = (1/D) ·
[  d  -b ]
[ -c   a ]

(Note: need D = ad - bc ≠ 0 for this to work....)
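A quick numerical check of this formula (numpy; an arbitrary 2×2 with ad - bc ≠ 0):

```python
import numpy as np

a, b, c, d = 2.0, 7.0, 1.0, 4.0              # example entries with D = ad - bc = 1
A = np.array([[a, b], [c, d]])

D = a*d - b*c
A_inv = (1/D) * np.array([[d, -b], [-c, a]])

print(A @ A_inv)                             # the 2x2 identity (up to round-off)
print(np.allclose(A_inv, np.linalg.inv(A)))  # agrees with numpy's inverse
```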

Some conditions for/consequences of invertibility: the following are all equivalent (A = n-by-n matrix).

1. A is invertible,

2. r(A) = n.

3. The RREF of A is I_n.

4. Every linear system Ax=b has a unique solution.

5. For one choice of b, Ax=b has a unique solution (i.e., if one does, they all do...).

6. The equation Ax=0 has only the solution x=0.

7. There is a matrix B with BA=I.

The equivalence of 4. and 6. is sometimes stated as Fredholm's alternative: Either every equation Ax=b has a unique solution, or the equation Ax=0 has a non-trivial solution (and only one of the alternatives can occur).

