Chapter 1: Linear systems of equations
Goal: find simultaneous solutions: all x,y,z satisfying both equations.
Most general type of system:
Example: input-output models
Number systems: natural, integers, rational, reals, complex
Some complex arithmetic:
i = √(-1) ; pretend i behaves like a real number
complex numbers: standard form z = a+bi ; addition, subtraction, multiplication
division: complex conjugate z̄ = a-bi
conjugate of z+w = z̄+w̄ ; conjugate of zw = z̄·w̄
z·z̄ = a^{2}+b^{2} (real!) ; z_{1}/z_{2} = (z_{1}·z̄_{2})/(z_{2}·z̄_{2})
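The conjugate trick for division can be checked directly (a minimal Python sketch; `cdiv` is a name chosen here, and the result is compared against Python's built-in complex division):

```python
# Division via the conjugate, as above: z1/z2 = (z1 * conj(z2)) / (z2 * conj(z2)),
# where z2 * conj(z2) = c^2 + d^2 is real (for z2 = c + di).
def cdiv(z1, z2):
    c, d = z2.real, z2.imag
    denom = c * c + d * d          # z2 * conj(z2), a real number
    num = z1 * z2.conjugate()      # z1 * conj(z2)
    return complex(num.real / denom, num.imag / denom)

z = complex(3, 2)    # 3 + 2i
w = complex(1, -1)   # 1 - i
q = cdiv(z, w)       # (3+2i)(1+i)/2 = (1+5i)/2
```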
Polar coordinates:
z = a+bi (complex number) = (a,b) (point in the plane) =
(r,θ) (distance from origin and angle with the (positive) x-axis)
z = a+bi = r(cos θ + i sin θ) = re^{iθ} , w = c+di = s(cos φ + i sin φ) = se^{iφ} , then
zw = rs(cos(θ+φ) + i sin(θ+φ)) = (rs)e^{i(θ+φ)}. Setting z = w (repeatedly) yields
z^{n} = r^{n}e^{i(nθ)} (De Moivre's formula)
Think backwards: solve z^{n} = w
Need: r^{n} = s , cos(nθ) = cos φ , sin(nθ) = sin φ ; i.e.,
r = s^{1/n} , nθ = φ + 2kπ, i.e., θ = φ/n + 2kπ/n
So z^{n} = w has n distinct solutions, coming from k = 0, 1, …, n-1
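A quick numerical check of the root formula (a sketch using Python's cmath module; `nth_roots` is our name, not from the notes):

```python
import cmath

# The n distinct solutions of z^n = w, for k = 0, 1, ..., n-1:
# z_k = s^(1/n) * e^{i(phi/n + 2*k*pi/n)}, where w = s*e^{i*phi}.
def nth_roots(w, n):
    s, phi = abs(w), cmath.phase(w)   # polar form of w
    r = s ** (1.0 / n)
    return [r * cmath.exp(1j * (phi / n + 2 * cmath.pi * k / n))
            for k in range(n)]

roots = nth_roots(complex(-8, 0), 3)  # the three cube roots of -8
```

Each root cubes back to -8, and one of them is the real cube root -2.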
Idea: use the 3x in the first equation to eliminate the 2x in the second equation. How? Add a multiple of the first equation to the second. Then use the y-term in the new second equation to remove the 5y from the first!
The point: a solution to the original equations must also solve the new equations. The real point: it's much easier to figure out the solutions of the new equations!
Streamlining: keep only the essential information; throw away unneeded symbols!
We get an (augmented) matrix representing the system of equations. We carry out the same operations we used with equations, but do them to the rows of the matrix.
Three basic operations (elementary row operations):
E_{ij} : switch ith and jth rows of the matrix
E_{ij}(m) : add m times jth row to the ith row
E_{i}(m) : multiply ith row by m
Terminology: first non-zero entry of a row = leading entry; leading entry used to zero out a column = pivot.
Basic procedure (Gauss-Jordan elimination): find a non-zero entry in the first column, switch it up to the first row (E_{1j}) (pivot in the (1,1) position). Use E_{1}(m) to make the first entry a 1, then use E_{j1}(m) operations to zero out the other entries of the first column. Then: find the leftmost non-zero entry in the remaining rows, switch it to the second row, and use it as a pivot to clear out the entries in the column below it. Continue (forward solving). When done, use the pivots to clear out the entries in the columns above the pivots (back-solving).
Variable in linear system corresponding to a pivot = bound variable; other variables = free variables
Reverse of E_{ij} is E_{ij}; reverse of E_{ij}(m) is E_{ij}(-m); reverse of E_{i}(m) is E_{i}(1/m)
So: you can get old equations from new ones; so solution to new equations must solve old equations as well.
Reduced row form: apply elementary row operations to turn the matrix A into one such that
(a) each row looks like (0 0 0 … 0 * * … *); the first * = the leading entry
(b) leading entry for row below is further to the right
Reduced row echelon form: in addition, have
(c) each leading entry is = 1
(d) each leading entry is the only non-zero number in its column.
RRF can be achieved by forward solving; RREF by back-solving and E_{i}(m) 's
Elimination: every matrix can be put into RREF by elementary row operations.
Big Fact: If a matrix A is put into RREF by two different sets of row operations, you get the same matrix.
RREF of an augmented matrix: can read off solutions to linear system.
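The whole procedure fits in a short routine (a Python sketch using exact fractions; `rref` is our name, and the example system x + 2y = 5, 3x + 4y = 6 is made up for illustration):

```python
from fractions import Fraction

# Gauss-Jordan elimination to RREF, using only the three elementary row
# operations from the notes; returns (RREF, list of pivot column indices).
def rref(rows):
    A = [[Fraction(x) for x in row] for row in rows]
    m, n = len(A), len(A[0])
    pivots, r = [], 0
    for c in range(n):
        # find a non-zero entry in column c, at or below row r
        pr = next((i for i in range(r, m) if A[i][c] != 0), None)
        if pr is None:
            continue
        A[r], A[pr] = A[pr], A[r]            # E_{r,pr}: switch rows
        A[r] = [x / A[r][c] for x in A[r]]   # E_r(1/pivot): leading entry -> 1
        for i in range(m):
            if i != r and A[i][c] != 0:      # E_{i,r}(-A[i][c]): clear column
                A[i] = [x - A[i][c] * y for x, y in zip(A[i], A[r])]
        pivots.append(c)
        r += 1
    return A, pivots

# augmented matrix for: x + 2y = 5 , 3x + 4y = 6
R, piv = rref([[1, 2, 5], [3, 4, 6]])   # reads off x = -4, y = 9/2
```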
Inconsistent systems: a row of zeros in the coefficient matrix, followed by a non-zero number (e.g., 2) in the augmented column. Translates as 0 = 2! The system has no solutions.
Rank of a matrix = r(A) = number of non-zero rows in RREF = number of pivots in RREF.
Nullity of a matrix = n(A) = number of columns without a pivot = # columns - # pivots
rank = number of bound variables, nullity = number of free variables
rank ≤ number of rows, number of columns (at most one pivot per row/column!)
rank + nullity = number of columns = number of variables
A = coefficient matrix, Ã = augmented matrix (A an m×n matrix)
system is consistent if and only if r(A) = r(Ã)
for a consistent system: r(A) = n : unique solution ; r(A) < n : infinitely many solutions
Chapter 2: Matrix algebra
0 = matrix all of whose entries are 0
Basic facts:
A+B makes sense only if A and B are the same size (m×n) matrix
A+B = B+A
(A+B)+C = A+(B+C)
A+0 = A
A+(-1)A = 0
cA has the same size as A
c(dA) = (cd)A
(c+d)A = cA + dA
c(A+B) = cA + cB
1A = A
Basic step: a row of A, times x, equals an entry of Ax. (The row vector (a_{1},…,a_{n}) times the column vector (x_{1},…,x_{n}) is a_{1}x_{1}+…+a_{n}x_{n}....) This leads to:
In AB, each row of A is `multiplied' by each column of B to obtain an entry of AB. Need: the length of the rows of A (= number of columns of A) = the length of the columns of B (= number of rows of B). I.e., in order to multiply, A must be m×n and B must be n×k; AB is then m×k.
Formula: the (i,j)th entry of AB is Σ_{k = 1}^{n} a_{ik}b_{kj}
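The entry formula translates directly into code (a minimal sketch; `matmul` is our name, with no error handling beyond the size check):

```python
# (AB)_{ij} = sum over k of a_{ik} * b_{kj}, for A m-by-n and B n-by-k.
def matmul(A, B):
    m, n, k = len(A), len(B), len(B[0])
    assert all(len(row) == n for row in A), "columns of A must equal rows of B"
    return [[sum(A[i][t] * B[t][j] for t in range(n)) for j in range(k)]
            for i in range(m)]

# A is 2x3 and B is 3x2, so AB is 2x2
AB = matmul([[1, 2, 3], [4, 5, 6]], [[7, 8], [9, 10], [11, 12]])
```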
I = identity matrix; square matrix (n×n) with 1's on diagonal, 0's off diagonal
Basic facts:
AI = A = IA
(AB)C = A(BC)
c(AB) = (cA)B = A(cB)
(A+B)C = AC + BC
A(B+C) = AB + AC
In general, however, it is **not** true that AB and BA are the same; they are almost always different!
i.e., A takes vectors in R^{n} and spits out vectors in R^{m}; it's a function (which we call T_{A}) from R^{n} to R^{m}. More than that, it's a linear function:
T_{A}(ax+by) = aT_{A}(x)+bT_{A}(y)
With this new notation, matrix multiplication becomes composition of functions.
What do we do with matrix multiplication? Solve equations!
Ax=b ; basic idea, try to find a matrix B with BA=I, so then x = Ix = (BA)x = B(Ax) = Bb solves the equation. (How to find B? Wait.....)
Another application: Markov chains
Idea: in any given month, a fixed percentage of the people using one product switch to another.
New distribution, given initial distribution x, is Ax, where A is the matrix of switching percentages. [figure omitted]
More generally, a Markov chain consists of an (initial) probability distribution vector (entries are ≥ 0 and add up to 1) and a transition matrix A (entries are ≥ 0 and each column adds up to 1). The distribution evolves by multiplication by A. E.g., after 20 iterations, the initial vector x evolves into A^{20}x.
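A sketch of such a chain in Python (the 90%/10% and 80%/20% switching percentages below are made-up numbers for illustration, not from the notes):

```python
# Two-product Markov chain: column j of A says where product j's users go.
# Entries are >= 0 and each column sums to 1; the distribution evolves as x -> Ax.
A = [[0.9, 0.2],   # 90% stay with product 1; 20% of product-2 users switch in
     [0.1, 0.8]]   # 10% leave product 1; 80% stay with product 2

def step(A, x):
    # one month of evolution: x -> Ax
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

x = [0.5, 0.5]        # initial distribution
for _ in range(20):   # after 20 iterations, x has become A^{20} x
    x = step(A, x)
```

For this transition matrix the distribution settles down near the steady state (2/3, 1/3).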
A row operation (E_{ij} , E_{ij}(m) , E_{i}(m)) applied to a matrix A corresponds to multiplication (on the left) by a matrix (also denoted E_{ij} , E_{ij}(m) , E_{i}(m)). The matrices are obtained by applying the row operation to the identity matrix I_{n}. E.g., the 4×4 matrix E_{13}(-2) looks like I, except it has a -2 in the (1,3) entry.
The idea: if A → B by the elementary row operation E, then B = EA.
So if A → B → C by elementary row operations, then C = E_{2}E_{1}A ....
Row reduction is matrix multiplication!
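A small Python check of this fact (the helper names are ours): build E_{21}(-3) by applying the operation to I_{2}, then verify that E·A equals the row operation applied to A directly.

```python
# Elementary matrices: applying a row operation to I gives E, and E*A then
# performs that same row operation on A.
def identity(n):
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def matmul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# E_{21}(-3): add -3 times row 1 to row 2, done to I_2
E = identity(2)
E[1] = [E[1][j] + (-3) * E[0][j] for j in range(2)]   # E = [[1,0],[-3,1]]

A = [[1, 2], [3, 4]]
B = matmul(E, A)   # same as adding -3*(row 1) to row 2 of A directly
```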
A scalar matrix A has the same number c in all the diagonal entries, and 0's everywhere else (the idea: A = cI, so AB = cB)
A diagonal matrix has all entries zero off of the (main) diagonal
An upper triangular matrix has all entries = 0 below the diagonal; a lower triangular matrix is 0 above the diagonal. A triangular matrix is either upper or lower triangular.
A strictly triangular matrix is triangular, and has zeros on the diagonal, as well. They come in upper and lower flavors.
The transpose of a matrix A is the matrix A^{T} whose columns are the rows of A (and vice versa). A^{T} is A reflected across the main diagonal. (a_{ij})^{T} = (a_{ji}) ; (m×n)^{T} = (n×m)
Basic facts:
(A+B)^{T} = A^{T}+B^{T}
(AB)^{T} = B^{T}A^{T}
(cA)^{T} = cA^{T}
(A^{T})^{T} = A
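Two of these facts checked on a small example (a sketch; note the reversed order in (AB)^{T} = B^{T}A^{T}):

```python
# Check (AB)^T = B^T A^T and (A^T)^T = A on concrete matrices.
def transpose(A):
    return [list(col) for col in zip(*A)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1, 2], [3, 4], [5, 6]]     # 3x2
B = [[7, 8, 9], [10, 11, 12]]    # 2x3
lhs = transpose(matmul(A, B))                 # (AB)^T
rhs = matmul(transpose(B), transpose(A))      # B^T A^T -- same matrix
```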
Transpose of an elementary matrix is elementary:
E_{ij}^{T} = E_{ij} , E_{ij}(m)^{T} = E_{ji}(m) , E_{i}(m)^{T} = E_{i}(m)
A matrix A is symmetric if A^{T} = A
An occasionally useful fact: AE, where E is an elementary matrix, is the result of an elementary column operation on A .
The transpose and rank:
For any pair of compatible matrices, r(AB) ≤ r(A)
Consequences: r(A^{T}) = r(A) for any matrix A; r(AB) ≤ r(B), as well.
(Think about square matrices...) For A an n×n matrix with BA = I: n = r(I) = r(BA) ≤ r(A) ≤ n implies that r(A) = n. So r(A) = n is necessary, and it is also sufficient!
If r(A) = n, then the RREF of A has n pivots in n rows and columns, so it has a pivot in every row, so the RREF of A is I. But this means we can get to I from A by row operations, which correspond to multiplication by elementary matrices. So multiply A (on the left) by the right elementary matrices and you get I; call the product of those matrices B and you get BA = I!
It turns out (by using the transpose) that AB=I as well!
A matrix B is an inverse of A if AB=I and BA=I; it turns out, the inverse of a matrix is always unique. We call it A^{-1} (and call A invertible).
Finding A^{-1} : row reduction! (of course...)
Build the "super-augmented" matrix (A|I) (the matrix A with the identity matrix next to it). Row reduce A, carrying out the operations on the entire rows of the super-augmented matrix (i.e., carry out the identical row operations on I). When done, if A is invertible, the left-hand side of the super-augmented matrix will be I; the right-hand side will be A^{-1}!
I.e., if (A|I) → (I|B) by row operations, then I = BA .
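The super-augmented procedure, sketched in Python with exact fractions (assumes A is invertible; `inverse` is our name, and there are no safeguards beyond finding a non-zero pivot):

```python
from fractions import Fraction

# Invert A by row-reducing the super-augmented matrix (A|I):
# if (A|I) -> (I|B) by row operations, then B = A^{-1}.
def inverse(A):
    n = len(A)
    # build (A|I): each row of A followed by the matching row of I_n
    M = [[Fraction(x) for x in A[i]] + [Fraction(int(i == j)) for j in range(n)]
         for i in range(n)]
    for c in range(n):
        pr = next(i for i in range(c, n) if M[i][c] != 0)  # pivot row
        M[c], M[pr] = M[pr], M[c]              # switch rows
        M[c] = [x / M[c][c] for x in M[c]]     # scale pivot to 1
        for i in range(n):
            if i != c:                         # clear the rest of the column
                M[i] = [x - M[i][c] * y for x, y in zip(M[i], M[c])]
    return [row[n:] for row in M]   # right-hand block is now A^{-1}

B = inverse([[1, 2], [3, 4]])
```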
Basic facts:
(A^{-1})^{-1} = A
if A and B are invertible, then so is AB, and (AB)^{-1} = B^{-1}A^{-1}
(cA)^{-1} = (1/c)A^{-1}
(A^{T})^{-1} = (A^{-1})^{T}
If A is invertible, and AB = AC, then B = C; if BA = CA, then B = C.
Inverses of elementary matrices:
E_{ij}^{-1} = E_{ij} , E_{ij}(m)^{-1} = E_{ij}(-m) , E_{i}(m)^{-1} = E_{i}(1/m)
Highly useful formula: for a 2-by-2 matrix A with rows (a b) and (c d),
A^{-1} = (1/D) [ d -b ; -c a ] , where D = ad-bc (Note: need D = ad-bc ≠ 0 for this to work....)
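The 2-by-2 formula as a tiny Python sketch (`inv2` is our name; exact fractions keep the arithmetic clean):

```python
from fractions import Fraction

# 2x2 inverse: for A = [[a, b], [c, d]] with D = ad - bc != 0,
# A^{-1} = (1/D) * [[d, -b], [-c, a]].
def inv2(a, b, c, d):
    D = Fraction(a * d - b * c)
    if D == 0:
        raise ValueError("matrix is not invertible (ad - bc = 0)")
    return [[d / D, -b / D], [-c / D, a / D]]

B = inv2(1, 2, 3, 4)   # D = -2
```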
Some conditions for/consequences of invertibility: the following are all equivalent (A = n-by-n matrix).
1. A is invertible,
2. r(A) = n.
3. The RREF of A is I_{n}.
4. Every linear system Ax=b has a unique solution.
5. For one choice of b, Ax=b has a unique solution (i.e., if one does, they all do...).
6. The equation Ax=0 has only the solution x=0.
7. There is a matrix B with BA=I.
The equivalence of 4. and 6. is sometimes stated as the Fredholm alternative: either every equation Ax=b has a unique solution, or the equation Ax=0 has a non-trivial solution (and only one of the alternatives can occur).