Don't forget the handy facts from the first exam!

**Fermat's Little Theorem.** If p is prime and (a,p) = 1, then a^{p-1} ≡ 1 (mod p).

Because: (a·1)(a·2)(a·3)…(a·(p-1)) ≡ 1·2·3…(p-1) (mod p), and (1·2·3…(p-1),p) = 1. The same idea, looking at the a's between 1 and n-1 that are relatively prime to n (and letting φ(n) be the number of them), gives

If (a,n) = 1, then a^{φ(n)} ≡ 1 (mod n).

φ(n) = n-1
__only__ when n is prime. Numbers n which are *not* prime but for which a^{n-1} ≡ 1 (mod n)
are called *a-pseudoprimes*; they are very uncommon!
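
As a quick illustration of how uncommon they are, here is a small search (a sketch, using simple trial division for primality) for 2-pseudoprimes; the smallest one is 341 = 11·31:

```python
# Sketch: search for 2-pseudoprimes -- composite n with 2^(n-1) ≡ 1 (mod n).
def is_prime(n):
    """Trial division; fine for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_2_pseudoprime(n):
    """A composite n that nevertheless passes Fermat's test for a = 2."""
    return n > 1 and not is_prime(n) and pow(2, n - 1, n) == 1

pseudo = [n for n in range(2, 2000) if is_2_pseudoprime(n)]
print(pseudo)  # only a handful below 2000: [341, 561, 645, 1105, 1387, 1729, 1905]
```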

One approach to calculating a^{k} (mod n) quickly is to start with a, and repeatedly square
the result (mod n), computing a^{1},a^{2},a^{4},a^{8},a^{16}, etc., continuing until the resulting exponent is more than
half of k. a^{k} is then the product of some subset of our list; we essentially use the powers whose
exponents are part of the base-2 expansion of k.
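
The repeated-squaring idea can be sketched as follows (Python's built-in pow(a, k, n) computes the same thing):

```python
def power_mod(a, k, n):
    """Compute a^k (mod n) by repeated squaring.

    Scan the bits of k; square the running power of a at each step,
    and multiply it into the result exactly when the bit is set -- i.e.,
    use the powers a^(2^i) whose exponents appear in the base-2 expansion of k.
    """
    result = 1
    square = a % n        # currently a^(2^0)
    while k > 0:
        if k & 1:         # this bit is in the base-2 expansion of k
            result = (result * square) % n
        square = (square * square) % n
        k >>= 1
    return result

# e.g. Fermat's Little Theorem with p = 101, a = 5:
print(power_mod(5, 100, 101))  # 1
```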

**Rings.** Basic idea: find out what makes our calculations in Z_{n} work.

A ring is a set R together with two operations +,· (which we call addition and multiplication) satisfying:

For any r,s,t ∈ R,

(0) r+s, r·s ∈ R [closure]

(1) (r+s)+t = r+(s+t) [associativity of addition]

(2) r+s = s+r [commutativity of addition]

(3) there is a 0_{R} ∈ R with r+0_{R} = r [additive identity]

(4) there is a -r ∈ R with r+(-r) = 0_{R} [additive inverse]

(5) (r·s)·t = r·(s·t) [associativity of multiplication]

(6) there is a 1_{R} ∈ R with r·1_{R} = 1_{R}·r = r [multiplicative identity]

(7) r·(s+t) = r·s + r·t and (r+s)·t = r·t + s·t [distributivity]

These are the most basic properties of the integers mod n that we used repeatedly. Some others acquire special names:

A ring (R,+,·) satisfying: for every r,s ∈ R, r·s = s·r is called *commutative*.

A commutative ring R satisfying: if r·s = 0_{R}, then r = 0_{R} or s = 0_{R}, is called an *integral domain*.

A ring R satisfying: if r ≠ 0_{R}, then r·s = s·r = 1_{R} for some s ∈ R, is called a *division ring*.

A commutative division ring is called a *field*.

An element r ∈ R satisfying r ≠ 0_{R} and r·s = 0_{R} for some s ≠ 0_{R} is called a *zero divisor*.

An element r ∈ R satisfying r·s = s·r = 1_{R} for some s ∈ R is called a *unit*.

An *idempotent* is an element r ∈ R satisfying r^{2} = r.

A *nilpotent* is an element r ∈ R satisfying r^{k} = 0_{R} for some k ≥ 1.
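
To make these definitions concrete, here is a small sketch that classifies the elements of Z_{n}, taking n = 12 as an example:

```python
from math import gcd

def classify(n):
    """Units, zero divisors, idempotents, and nilpotents of Z_n."""
    units = [r for r in range(1, n) if gcd(r, n) == 1]
    zero_divisors = [r for r in range(1, n)
                     if any((r * s) % n == 0 for s in range(1, n))]
    idempotents = [r for r in range(n) if (r * r) % n == r]
    # r is nilpotent iff some power of r is 0 (mod n); testing r^n suffices,
    # since the needed exponent never exceeds n.
    nilpotents = [r for r in range(n) if pow(r, n, n) == 0]
    return units, zero_divisors, idempotents, nilpotents

u, z, i, nil = classify(12)
print(u)    # [1, 5, 7, 11]
print(z)    # [2, 3, 4, 6, 8, 9, 10]
print(i)    # [0, 1, 4, 9]
print(nil)  # [0, 6]
```

Note that every nonzero element of Z_{12} lands in exactly one of the first two lists, previewing the unit-or-zero-divisor fact below.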

**Examples:** The integers Z, the integers mod n Z_{n}, the real numbers R, the complex numbers C.

If R is a ring, then the set of all polynomials with coefficients in R, denoted R[x], is a ring, where you add and multiply as you do with ``ordinary'' polynomials:

R[x] = {∑_{i = 0}^{n} r_{i}x^{i} : r_{i} ∈ R} and (filling in with 0_{R}'s as needed)

∑_{i = 0}^{n} r_{i}x^{i} + ∑_{i = 0}^{m} s_{i}x^{i} = ∑_{i = 0}^{max(n,m)} (r_{i}+s_{i})x^{i} and
∑_{i = 0}^{n} r_{i}x^{i} · ∑_{j = 0}^{m} s_{j}x^{j} =
∑_{k = 0}^{n+m} (∑_{i+j = k} r_{i}·s_{j})x^{k}
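
As a sketch, a polynomial can be represented by its list of coefficients, and the two formulas above become "fill in with 0's and add" and the convolution sum (here over Z, though any ring's operations would do):

```python
def poly_add(p, q):
    """Add coefficient lists, filling in with 0's as needed."""
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def poly_mul(p, q):
    """Multiply coefficient lists: coefficient k is the sum over i+j = k of p[i]*q[j]."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# (1 + x) * (1 + x) = 1 + 2x + x^2
print(poly_mul([1, 1], [1, 1]))  # [1, 2, 1]
```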

If R is a ring and n ∈ N, then the set M_{n}(R) of n×n matrices with entries in R is a ring, with entry-wise addition and ``matrix'' multiplication:

the (i,j)-th entry of (r_{ij})·(s_{ij}) is ∑_{k = 1}^{n} r_{ik}·s_{kj}

If R is commutative, then so is R[x]; if R is an integral domain, then so is R[x]. If n ≥ 2, then M_{n}(R) is not commutative.

A *subring* is a subset S ⊆ R which, using the same addition and multiplication as in R, is also a ring.

To show that S is a subring of R, we need:

(1) if s,s′ ∈ S, then s+s′, s·s′ ∈ S

(2) if s ∈ S, then -s ∈ S

(3) there is something that acts like a 1 in S (this need not __be__ 1_{R}! But 1_{R} ∈ S is enough...)

The Cartesian product of two rings R,S is the set R×S = {(r,s) : r ∈ R, s ∈ S}. It is a ring, using coordinatewise addition and multiplication: (r,s)+(r′,s′) = (r+r′,s+s′), (r,s)·(r′,s′) = (r·r′,s·s′)

**Some basic facts:**

A ring has only one ``zero'': if x+r = r for some r ∈ R, then x = 0_{R}

A ring has only one ``one'': if xr = r for every r, then x = 1_{R}

Every r ∈ R has only one additive inverse: if r+x = 0_{R}, then x = -r

-(-r) = r ; 0_{R}·r = r·0_{R} = 0_{R} ; (-1_{R})·r = r·(-1_{R}) = -r

Every finite integral domain is a field; this is because, for any a ≠ 0_{R}, the function m_{a}:R→R
given by m_{a}(r) = ar is one-to-one, and so by the Pigeonhole Principle is also onto; meaning ar = 1_{R} for some r ∈ R.

If R is finite, then every r ∈ R, r ≠ 0_{R}, is either a zero-divisor
__or__ a unit (and can't be both!). Idea:
The first time the sequence 1,r,r^{2},r^{3},… repeats, we either have r^{n} = 1 = r(r^{n-1}) or r^{n} = r^{n+k}, so
r(r^{n+k-1}-r^{n-1}) = 0.

A unit in R×S consists of a pair (r,s) where each of r,s is a unit. (The same is true for idempotents and nilpotents.)

For n ∈ N and r ∈ R, we define n·r = r+…+r (add r to itself n times) and
r^{n} = r·…·r (multiply r by itself n times). And we define (-n)·r = (-r)+…+(-r).
Then we have

(n+m)·r = n·r + m·r, (nm)·r = n·(m·r), r^{m+n} = r^{m}·r^{n}, r^{mn} = (r^{m})^{n}

**Homomorphisms and isomorphisms**

A *homomorphism* is a function φ:R→S from a ring R to a ring S satisfying:

for any r,r′ ∈ R, φ(r+r′) = φ(r) + φ(r′) and φ(r·r′) = φ(r)·φ(r′).

The basic idea is that it is a function that ``behaves well'' with respect to addition and multiplication.

An *isomorphism* is a homomorphism that is both one-to-one and onto. If there is an isomorphism from R to S, we say
that R and S are *isomorphic*, and write R ≅ S. The basic idea is that isomorphic rings are ``really the same'';
if we think of the function
φ as a way of identifying the elements of R with the elements of S, then the two notions of addition and
multiplication on the two rings are
__identical__. For example, the ring of complex numbers C is isomorphic to a
ring whose elements are the Cartesian product R×R,
__provided__ we use the multiplication
(a,b)·(c,d) = (ac-bd,ad+bc). And the main point is that anything that is true of R (which depends only on its
properties as a ring) is also true of anything isomorphic to R, e.g., if r ∈ R is a unit, and φ is an
isomorphism, then φ(r) is also a unit.

The phrase ``is isomorphic to'' is an equivalence relation: the composition of two isomorphisms is an isomorphism, and the inverse of an isomorphism is an isomorphism.

A more useful example: if (m,n) = 1, then Z_{mn} ≅ Z_{m}×Z_{n}. The isomorphism is given by

φ([x]_{mn}) = ([x]_{m},[x]_{n})
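
A quick sketch of why the hypothesis (m,n) = 1 matters: listing the pairs (x mod m, x mod n) for x = 0, …, mn-1, the map is a bijection when m = 3, n = 4, but fails to be one-to-one when m = 2, n = 4:

```python
def residue_map(m, n):
    """The map [x]_{mn} -> ([x]_m, [x]_n), tabulated as a list of pairs."""
    return [(x % m, x % n) for x in range(m * n)]

# (3,4) = 1: the 12 pairs are all distinct, so the map is a bijection
pairs = residue_map(3, 4)
print(len(set(pairs)) == len(pairs))  # True

# (2,4) = 2: not one-to-one (e.g. x = 0 and x = 4 give the same pair)
pairs = residue_map(2, 4)
print(len(set(pairs)) == len(pairs))  # False
```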

The main ingredients in the proof:

If φ: R→S and ψ: R→T are ring homomorphisms, then the function ω: R→S×T given by ω(r) = (φ(r),ψ(r)) is also a homomorphism.

If m|n, then the function φ: Z_{n}→Z_{m} given by φ([x]_{n}) = [x]_{m} is a homomorphism.

Together, these give that the function we want above is a homomorphism. The fact that (m,n) = 1 implies that φ is one-to-one; then the Pigeonhole Principle implies that it is also onto!

The above isomorphism and induction imply that if n_{1},…,n_{k} are *pairwise relatively prime* (i.e.,
if i ≠ j then (n_{i},n_{j}) = 1), then

Z_{n_1…n_k} ≅ Z_{n_1}×…×Z_{n_k}. This implies:

**The Chinese Remainder Theorem:** If n_{1},…,n_{k} are pairwise relatively prime, then for any a_{1},…,a_{k} ∈ N
the system of congruences

x ≡ a_{i} (mod n_{i}), i = 1,…,k

has a solution, and any two solutions are congruent modulo n_{1}…n_{k}.

A solution can be found by (inductively) replacing a pair of equations x ≡ a (mod n), x ≡ b (mod m) with a single equation x ≡ c (mod nm), by solving the equation a+nk = x = b+mj for k and j, using the Euclidean Algorithm.
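
This pairwise-merging procedure can be sketched as follows, with the extended Euclidean Algorithm supplying u,v with u·n + v·m = 1, so that k = u·(b-a) solves a + n·k ≡ b (mod m):

```python
def extended_gcd(a, b):
    """Return (g, u, v) with a*u + b*v = g = gcd(a, b)."""
    if b == 0:
        return (a, 1, 0)
    g, u, v = extended_gcd(b, a % b)
    return (g, v, u - (a // b) * v)

def crt_pair(a, n, b, m):
    """Merge x ≡ a (mod n) and x ≡ b (mod m) into x ≡ c (mod nm), assuming (n,m) = 1."""
    g, u, v = extended_gcd(n, m)          # u*n + v*m = 1
    assert g == 1, "moduli must be relatively prime"
    c = (a + n * u * (b - a)) % (n * m)   # a + n*k with k = u*(b - a)
    return c, n * m

def crt(residues, moduli):
    """Solve the system x ≡ a_i (mod n_i) by merging one pair of congruences at a time."""
    x, n = residues[0] % moduli[0], moduli[0]
    for a, m in zip(residues[1:], moduli[1:]):
        x, n = crt_pair(x, n, a, m)
    return x

# x ≡ 2 (mod 3), x ≡ 3 (mod 5), x ≡ 2 (mod 7)  -->  x = 23 (mod 105)
print(crt([2, 3, 2], [3, 5, 7]))  # 23
```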

**Application to units and the Euler φ-function:**

If R is a ring, we denote the units in R by R^{*}. E.g., Z_{n}^{*} = {[x]_{n} : (x,n) = 1}. From a fact above, we have (R×S)^{*} = R^{*}×S^{*}.

φ(n) = the number of units in Z_{n} = |Z_{n}^{*}|; then the CRT implies that if (m,n) = 1, then φ(mn) = φ(m)φ(n).
Induction and the fact that if p is prime, φ(p^{k}) = p^{k-1}(p-1) = p^{k}-(the number of multiples of p up to p^{k}) implies

If the prime factorization of n is p_{1}^{a_1}…p_{k}^{a_k}, then φ(n) = [p_{1}^{a_1-1}(p_{1}-1)]…[p_{k}^{a_k-1}(p_{k}-1)]
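
A sketch computing φ(n) both ways, by counting units directly and by the factorization formula, and confirming that they agree:

```python
from math import gcd

def phi_count(n):
    """phi(n) by brute force: count the units of Z_n."""
    return sum(1 for x in range(1, n + 1) if gcd(x, n) == 1)

def phi_formula(n):
    """phi(n) via the prime factorization: product of p^(a-1) * (p - 1)."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            a = 0
            while n % p == 0:
                n //= p
                a += 1
            result *= p ** (a - 1) * (p - 1)
        p += 1
    if n > 1:                  # one leftover prime factor
        result *= n - 1
    return result

print(phi_formula(12))  # 4: the units of Z_12 are 1, 5, 7, 11
print(all(phi_count(n) == phi_formula(n) for n in range(1, 200)))  # True
```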

**Groups:** Three important properties of the set R^{*} of units of a ring R:

1_{R} ∈ R^{*}

if x,y ∈ R^{*}, then xy ∈ R^{*}

if x ∈ R^{*} then (x, by definition, has a multiplicative inverse x^{-1} and) x^{-1} ∈ R^{*}

Together, these three properties (together with associativity of multiplication) describe what is called a *group*.

A *group* is a set G together with a single operation (denoted *) satisfying:

For any g,h,k ∈ G,

(0) g*h ∈ G [closure]

(1) g*(h*k) = (g*h)*k [associativity]

(2) there is a 1_{G} ∈ G satisfying 1_{G}*g = g*1_{G} = g [identity]

(3) there is a g^{-1} ∈ G satisfying g^{-1}*g = g*g^{-1} = 1_{G} [inverses]

A group (G,*) which, in addition, satisfies g*h = h*g for every g,h ∈ G is called *abelian*.
Since this is something we always expect out of addition, if we know that a group is abelian, we often write the group
operation as ``+'' to help remind ourselves that the operation commutes.

**Examples:** Any ring (R,+,·), if we just forget about the multiplication, is an (abelian) group (R,+) .

For any ring R, the set of units (R^{*},·) is a group using the multiplication from the ring. [[Unsolved
(I think!) question: is every group the group of units for some ring R?]]

Function composition is always associative, so one way to build many groups is to think of the
elements as functions. But to have an inverse under function composition, a function needs to be
both one-to-one
__and__ onto. [One-to-one is sometimes also called *injective*, and onto is called *surjective*;
a function that is both injective and surjective is called *bijective*.] So if, for any set S, we set

G = P(S) = {f:S→S : f is one-to-one and onto},

then G is a group under function composition; it is called the group of *permutations* of S. If S is the
finite set {1,2,…,n}, then we denote the group by S_{n}, the *symmetric group on n letters*. By
counting the number of bijections from a set with n elements to itself, we find that S_{n} has n! elements.
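
As a sketch, a permutation of {0,1,…,n-1} can be stored as a tuple (sigma[i] is the image of i), composition is just lookup, and itertools.permutations lists all n! elements of S_{n}:

```python
from itertools import permutations
from math import factorial

def compose(sigma, tau):
    """The composition sigma ∘ tau: first apply tau, then sigma."""
    return tuple(sigma[tau[i]] for i in range(len(tau)))

def inverse(sigma):
    """The inverse permutation: send sigma[i] back to i."""
    inv = [0] * len(sigma)
    for i, j in enumerate(sigma):
        inv[j] = i
    return tuple(inv)

S3 = list(permutations(range(3)))
print(len(S3) == factorial(3))         # True: S_3 has 3! = 6 elements

sigma = (1, 0, 2)                      # the transposition swapping 0 and 1
print(compose(sigma, sigma))           # (0, 1, 2) -- the identity
print(compose(sigma, inverse(sigma)))  # (0, 1, 2)
```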

The set of rigid motions of the plane, that is, the functions f:R^{2}→R^{2} satisfying
dist(f(x),f(y)) = dist(x,y) for every x,y ∈ R^{2}, is a group under function composition, since
the composition or inverse of rigid motions are rigid motions. More generally, for any geometric object T
(like a triangle, or square, or regular pentagon, or...), the set of rigid motions f which take T to itself
form a group, the group Symm(T) of *symmetries* of T. For example, for T = a square, Symm(T)
consists of the identity, three rotations about the center of the square (with rotation angles π/2, π, and 3π/2),
and four reflections (through two lines which go through opposite corners of the square, and two lines which go through
the centers of opposite sides).
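
Anticipating the vertex-permutation point of view below, here is a sketch that recovers all 8 symmetries of the square as permutations of its corners, starting from just a rotation and one reflection:

```python
# Number the square's corners 0,1,2,3 counterclockwise.
# A symmetry is then a permutation tuple: sigma[i] is where corner i goes.
rot90 = (1, 2, 3, 0)      # rotation by pi/2
flip = (1, 0, 3, 2)       # reflection through the line joining two edge midpoints

def compose(sigma, tau):
    """First apply tau, then sigma."""
    return tuple(sigma[tau[i]] for i in range(4))

# Generate everything reachable from the two generators.
symmetries = {(0, 1, 2, 3)}
frontier = [(0, 1, 2, 3)]
while frontier:
    sigma = frontier.pop()
    for tau in (compose(rot90, sigma), compose(flip, sigma)):
        if tau not in symmetries:
            symmetries.add(tau)
            frontier.append(tau)

print(len(symmetries))  # 8: the identity, 3 rotations, and 4 reflections
```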

G = Aff(R) = {f(x) = ax+b : a ≠ 0}, the set of linear, non-constant functions from R to R, forms a
group under function composition, since the composition of two linear functions is linear, and the inverse
of a linear function is linear. It is called the *affine group of* R. This is an example of a *subgroup* of P(R):

A *subgroup* H of G is a subset H ⊆ G which, using the same group operation as G, is
a group in its own right. As with subrings, this basically means that:

(1) If h,k ∈ H, then h*k ∈ H

(2) If h ∈ H, then h^{-1} ∈ H

(3) 1_{G} ∈ H.

Condition (3) really need not be checked (so long as H ≠ ∅), since, for any
h ∈ H, (2) guarantees that h^{-1} ∈ H, and so (1) implies that h*h^{-1} = 1_{G} ∈ H.

For example, for the symmetries of a polygon T in the plane, since a symmetry must take the
corners of T (called its *vertices*) to the corners, each symmetry can be thought of as a permutation
of the vertices. So Symm(T) can be thought of (this can be made precise, using the notion of
isomorphism below) as a subgroup of the group of symmetries of the set of vertices of T.

As with rings, some basic facts about groups are true:

There is only one ``one'' in a group; if x ∈ G satisfies x*y = y for some y ∈ G, then x = 1_{G}

Every g ∈ G has only one inverse: if g*h = 1_{G}, then h = g^{-1}

(g^{-1})^{-1} = g for every g ∈ G

(gh)^{-1} = h^{-1}g^{-1}

**Homomorphisms and isomorphisms:** Just as with rings, again, we have the notion of functions
between groups which ``respect'' the group operations:

A *homomorphism* is a function φ:G→H from groups G to H which satisfies:

for every g_{1},g_{2} ∈ G, φ(g_{1}*g_{2}) = φ(g_{1})*φ(g_{2})

No other condition is required, since this
__implies__ that

φ(1_{G}) = 1_{H}, as well as φ(g^{-1}) = (φ(g))^{-1}.

An *isomorphism* is a homomorphism that is also one-to-one and onto. If there is an isomorphism
from G to H, we say that G and H are *isomorphic*. As with rings, the idea is that isomorphic
groups are really the ``same''; the function is a way of identifying elements so that the two groups
are identical (as groups!). For example, the group Aff(R) can be thought of as the set of pairs (a,b) with a ≠ 0
(i.e., the pair of coefficients of the linear function), but with the group multiplication given by (by working out
what the coefficients of the composition are!) (a,b)*(c,d) = (ac,ad+b).
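
Working out those coefficients is a one-line computation: composing f(x) = ax+b with g(x) = cx+d gives a(cx+d)+b = (ac)x + (ad+b). A sketch checking that the pair operation matches function composition:

```python
def affine(a, b):
    """The function f(x) = a*x + b (with a nonzero)."""
    return lambda x: a * x + b

def star(p, q):
    """The group operation on coefficient pairs: (a,b)*(c,d) = (ac, ad+b)."""
    a, b = p
    c, d = q
    return (a * c, a * d + b)

# Composing the functions agrees with the pair operation:
p, q = (2, 3), (5, 7)                          # f(x) = 2x+3, g(x) = 5x+7
f_then_g = lambda x: affine(*p)(affine(*q)(x)) # f ∘ g
h = affine(*star(p, q))
print(star(p, q))                              # (10, 17)
print(all(f_then_g(x) == h(x) for x in range(-5, 6)))  # True
```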
