% Use % to comment


%  Save this file as plain text file with extension filename.tex and then run it with LaTeX.


\documentclass[twocolumn]{article} % Delete `twocolumn' to have one column text
\usepackage{amsmath,amssymb,color} % needed below for align*, \Bbb, and \textcolor
\newcommand{\n}{\noindent} % shorthand used throughout






%\pagestyle{plain} \addtolength{\topmargin}{-.8in}



%\addtolength{\oddsidemargin}{-1in} \addtolength{\textwidth}{2in}

\markright{Solution Keys\hfill Page }











%\baselineskip 16pt

\n {\bf \textcolor{red}{Note:}} \textcolor{blue}{The homework you
turn in must contain each problem statement in its entirety,
followed by its solution, as demonstrated in the first problem
below. The others below are sketches, outlines, or hints for you
to complete.}



\n {\bf \textcolor{red}{[\#1.3]}} Prove

$1^3+2^3+\cdots+n^3=(1+2+\cdots+n)^2$ for all natural numbers $n$.


\noindent Proof: Notice first that

$1+2+\cdots+n=\frac{n(n+1)}{2}$. So we only need to show

$1^3+2^3+\cdots+n^3=\frac{n^2(n+1)^2}{4}$. By induction, we have

$1^3=\frac{1^2(1+1)^2}{4}$ for $n=1$. So the identity holds for

$n=1$. Assume it holds for $n$. Now consider the case for $n+1$.

By the induction assumption, we have
$1^3+2^3+\cdots+n^3+(n+1)^3=\frac{n^2(n+1)^2}{4}+(n+1)^3
=(n+1)^2\frac{n^2+4n+4}{4}=\frac{(n+1)^2(n+2)^2}{4}$, which is the

case for $n+1$. This completes the proof. $\square$
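\n As a quick numerical sanity check of the identity (for illustration only, not part of the required proof), take $n=3$: $1^3+2^3+3^3=1+8+27=36=6^2=(1+2+3)^2$.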


\n {\bf \textcolor{red}{[\#1.4]}} (a) Let

$S_n=1+3+5+\cdots+(2n-1)$. Then $S_1=1,S_2=4,S_3=9$, suggesting

$S_n=n^2$. (b) Assume $S_n=n^2$. Then

$S_{n+1}=S_n+2(n+1)-1=n^2+2n+1$ by the assumption and

simplification, which equals $(n+1)^2$ by complete squaring.

Hence, by induction we have shown that $S_n=n^2$ for all

$n\in{\Bbb N}$.


\n {\bf \textcolor{red}{[\#1.6 *]}} Modelled after Example 2 page



\n {\bf \textcolor{red}{[\#1.12*]}} (a) Notice first that for all
$n\in{\Bbb N}$,
$n!=n(n-1)\cdots 2\cdot1=n(n-1)!=n(n-1)(n-2)!$, $\left(\begin{array}{c}n\\
0\end{array}\right)=\frac{n!}{0!(n-0)!}=1, \left(\begin{array}{c}n\\
1\end{array}\right)=\frac{n!}{1!(n-1)!}=n, \left(\begin{array}{c}n\\
2\end{array}\right)=\frac{n!}{2!(n-2)!}=\frac{n(n-1)}{2}$. With

those identities, you can check the cases for $n=1,2,3$ directly.

Also, notice that there are $n+1$ terms in the binomial expansion

$(a+b)^n$. (b) $\left(\begin{array}{c}n\\
k-1\end{array}\right)+\left(\begin{array}{c}n\\
k\end{array}\right)=\frac{n!}{(k-1)!(n-k+1)!}+\frac{n!}{k!(n-k)!}$.
Because $k!=k(k-1)!, (n-(k-1))!=(n+1-k)!=(n+1-k)(n-k)!$, the
common denominator is $k(k-1)!(n+1-k)(n-k)!=k!(n+1-k)!$. Simplify
the addition then as follows: $\left(\begin{array}{c}n\\
k-1\end{array}\right)+\left(\begin{array}{c}n\\
k\end{array}\right)=\frac{n!\,k+n!\,(n+1-k)}{k!(n+1-k)!}
=\frac{(n+1)!}{k!(n+1-k)!}=\left(\begin{array}{c}n+1\\
k\end{array}\right)$. (c) Use induction. More precisely, the

expansion holds for $n=1$ trivially. Assume the expansion for $n$. Then
consider the case for $n+1$:
$(a+b)^{n+1}=(a+b)^n(a+b)=\left[\left(\begin{array}{c}n\\
0\end{array}\right)a^n+\left(\begin{array}{c}n\\
1\end{array}\right)a^{n-1}b+\cdots+\left(\begin{array}{c}n\\
n\end{array}\right)b^n\right](a+b)$, by the assumption. Expand

further, collect like terms $a^{n+1}, a^nb,\dots, a^kb^{n+1-k},\dots,
b^{n+1}$. You will find, with exceptions for the first and last
terms, the coefficient for $a^kb^{n+1-k}$ is $\left(\begin{array}{c}n\\
k-1\end{array}\right)+\left(\begin{array}{c}n\\
k\end{array}\right)=\left(\begin{array}{c}n+1\\
k\end{array}\right)$ by (b) for $k=1,2,\dots, n$. For $a^{n+1},

b^{n+1}$, their

coefficients remain to be $\left(\begin{array}{c}n\\
0\end{array}\right)=1, \left(\begin{array}{c}n\\
n\end{array}\right)=1$, which agree with $\left(\begin{array}{c}n+1\\
0\end{array}\right)=\left(\begin{array}{c}n+1\\
n+1\end{array}\right)=1$. This completes the induction.
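\n As a small numerical check of the identity in (b) (illustration only), take $n=3$, $k=2$: $\left(\begin{array}{c}3\\
1\end{array}\right)+\left(\begin{array}{c}3\\
2\end{array}\right)=3+3=6=\left(\begin{array}{c}4\\
2\end{array}\right)$.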

\n {\bf \textcolor{red}{[\#2.2, 2.4, 2.5]}} They are all similar.

Follow Examples 2--6 of \S 2.


\n {\bf \textcolor{red}{[\#3.3]}}
\begin{align*}
(-a)(-b) &=(-a)(-b)+0\  (\hbox{by A3})\\
&=(-a)(-b)+(ab+(-ab))\ (\hbox{A4})\\
&=[(-a)(-b)+(-ab)]+ab\ (\hbox{A2, A1})\\
&=[(-a)(-b)+(-a)b]+ab\ (\hbox{Thm 3.1(iii)})\\
&=(-a)[(-b)+b]+ab\ (\hbox{DL})\\
&=(-a)0+ab=ab\ (\hbox{A4, A3})
\end{align*}




\n {\bf \textcolor{red}{[\#3.5]}} (a) First it is obvious that

$-|b|\le b\le |b|$ because either $|b|=b$ or $|b|=-b$ which

implies for the former case that $|b|=b\ge 0\ge -|b|$ and that

$-|b|=b\le 0\le |b|$ for the latter case. Therefore, together with

$|b|\le a$ it implies $-a\le -|b|\le b\le |b|\le a$ as required.

Conversely, if $-a\le b\le a$, then we must have $|b|=b\le a$ if

$b\ge 0$ using the right part of the inequality $b\le a$, or

$|b|=-b\le a$ if $b\le 0$ using the left part of the inequality

$-a\le b\Rightarrow -b\le a$. (b) By (a) we only need to show

$-|a-b|\le |a|-|b|\le |a-b|$. For the right part, we have

$|a|=|a-b+b|\le |a-b|+|b|$ by the triangle inequality. Hence,

$|a|-|b|\le |a-b|$. This is true for all $a,b$. Exchanging $a, b$,

we have $|b|-|a|\le |b-a|=|-(b-a)|=|a-b|$ which is the same as

$-|a-b|\le |a|-|b|$, showing the left part of the inequality.


\n {\bf \textcolor{red}{[\#3.6]}} (a) $|a+b+c|=|a+(b+c)|\le

|a|+|b+c|\le |a|+|b|+|c|$, using the triangle inequality twice in

a row. (b) Follow the hint.


\n {\bf \textcolor{red}{[\#3.7*]}} (a) Same as 3.5(a) changing

$\le$ to $<$ in the argument. Alternatively, it is a special case

of 3.5(a). More precisely, for $|b|<a$, we cannot have $b=a$ if

$b\ge 0$ nor $b=-a$ since $-a<-|b|\le b$. Thus, $|b|<a$ iff $-a\le

b\le a$ with $b\ne a, -a$ iff $-a<b<a$. (b) By (a),

$|a-b|<c\Longleftrightarrow -c<a-b<c \Longleftrightarrow

b-c<a<b+c$. (c) Same as (b), changing $<$ to $\le$ in the

argument. Alternatively, consider the two cases $|a-b|<c$ and

$|a-b|=c$ separately. For the former, $|a-b|<c\Longrightarrow

b-c<a<b+c$ by (b), $\Longrightarrow b-c\le a\le b+c$. Conversely,
if $a-b\ne c, -c$, then $b-c\le a\le
b+c$ implies  $b-c< a< b+c \Longrightarrow |a-b|<c$ by (b). For

the latter case that $|a-b|=c$, either $a-b=c$ or $-c$, implying

$a=b+c$ or $b-c$, implying $b-c\le a\le b+c$. Conversely, $b-c\le

a\le b+c\Longrightarrow -c\le a-b\le c$. Together with the case

definition $|a-b|=c$ we have the trivial conclusion $|a-b|\le c$.


\n {\bf \textcolor{red}{[\#3.8*]}} Assume instead that $a>b$. Then

$b<2^{-1}(a+b)<a$ because $2b=b(1+1)=b+b<a+b<a+a=2a$. Let

$b_1=2^{-1}(a+b)>b$. Then by the hypothesis we have $a\le
b_1=2^{-1}(a+b)\Longrightarrow 2a\le a+b\Longrightarrow
a\le b$, contradicting the assumption that $a>b$.


\n {\bf \textcolor{red}{[\#4.5]}} Since $s\le m=\sup S$ for all

$s\in S$ and $m\in S$, by definition $m=\max S$.


\n {\bf \textcolor{red}{[\#4.6]}} (a) Since $S\ne \emptyset$,

$\exists  s_0\in S$ s.t. $\inf S\le s_0\le \sup S$. (b) $S$ must

be a one-point set $S=\{a\}$ for some $a\in{\Bbb R}$.


\n {\bf \textcolor{red}{[\#4.7*]}} (a) $\forall s\in S\subset T$,

$\inf T\le s$ by a part of the definition of $\inf T$. This

implies $\inf T$ is a lower bound of $S$. Since $\inf S$ is the greatest
lower bound, we must have $\inf T\le \inf S$. You then show similarly that $\sup S\le

\sup T$. The part $\inf S\le \sup S$ is from \#4.6. (b) Since $S,

T\subset S\cup T$, by (a), $\sup S, \sup T\le \sup(S\cup T)$ and

$\max\{\sup S,\sup T\}\le \sup(S\cup T)$. On the other hand,
$\forall a\in S\cup T$, either $a\in S$ or $a\in T$, which implies

either $a\le \sup S$ or $a\le \sup T \Longleftrightarrow

a\le\max\{\sup S,\sup T\}$. Therefore, $\max\{\sup S,\sup T\}$ is

an upper bound of $S\cup T$. Because $\sup(S\cup T)$ is the least

upper bound, $\sup(S\cup T)\le \max\{\sup S,\sup T\}$. Together with
the established inequality  $\max\{\sup S,\sup T\}\le \sup(S\cup
T)$ we have the equality $\max\{\sup S,\sup T\}=\sup(S\cup T)$.


\n {\bf \textcolor{red}{[\#4.10*]}} By Archimedean Property,

$\exists k\in {\Bbb N}$ s.t. $ka>1$ since $a>0, 1>0$. Thus

$a>\frac{1}{k}$ since $k>0$. Using the property for the same pair $1,
a$, $\exists m\in{\Bbb N}$ s.t. $m=m\cdot 1>a$. Let
$n=\max\{k,m\}$; then $\frac{1}{n}\le \frac{1}{k}<a<m\le n$.
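\n A concrete instance (numbers chosen only for illustration): for $a=\frac{2}{7}$, $k=4$ works since $4a=\frac{8}{7}>1$, and $m=1>a$; then $n=\max\{4,1\}=4$ gives $\frac{1}{4}<\frac{2}{7}<4$.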


\n {\bf \textcolor{red}{[\#4.11]}} It suffices to show there is an

infinite sequence $a<a_1<a_2<\dots<a_n\dots<b$ with $a_n\in{\Bbb

Q}$. Construct the sequence by induction. By the denseness of

$\Bbb Q$, $\exists a_1\in {\Bbb Q}$ s.t. $a<a_1<b$. Assuming
$a_n$ is constructed such that $a<a_1<a_2<\dots< a_n <b$, then
apply the same denseness property to the pair $a_n<b$ to obtain
some $a_{n+1}\in {\Bbb Q}$ with $a_n<a_{n+1}<b$. This completes

the proof.


\n {\bf \textcolor{red}{[\#4.12]}} By the denseness property of

$\Bbb Q$ in $\Bbb R$, we have for this pair $a-\sqrt 2<b-\sqrt 2$

a rational $r\in {\Bbb Q}$ such that $a-\sqrt 2<r<b-\sqrt 2$,

which is $a<r+\sqrt 2<b$. Since $r\in {\Bbb Q}, \sqrt 2\in {\Bbb
I}$, we must have $x=r+\sqrt 2\in{\Bbb I}$, for otherwise $\sqrt 2=x-r\in\Bbb
Q$ would be a contradiction.


\n {\bf \textcolor{red}{[\#4.16*]}} By the way the set $A:=\{r\in{\Bbb
Q}:r<a\}$ is defined, we conclude right away that $a$ is an upper
bound of $A$: $\sup A\le a$. If $a\ne \sup A$, then we must have
$\sup A<a$. By the denseness property of $\Bbb Q$, there is an
$r\in \Bbb Q$ such that $\sup A<r<a$. By definition of $A$, this
$r\in A$, contradicting that $\sup A$ is an upper bound of $A$.


\n {\bf \textcolor{red}{[\#5.2]}}


\n {\bf \textcolor{red}{[\#5.4*]}} Consider 2 cases separately.

Case of $m=\inf S>-\infty$. Then $\forall s\in S$, $m\le s$ which

implies $-s\le -m$. Thus $-m$ is an upper bound for $-S$.

Moreover, if $A$ is an upper bound of $-S$: $-s\le A\ \forall s\in
S$, then $-A\le s\ \forall s\in S$, i.e., $-A$ is a lower bound of $S$, and
$-A\le m=\inf S$ follows. Thus $-m\le A$, and $-m=\sup(-S)$ by

definition and $\inf S=m=-(-m)=-(\sup(-S))$ follows. For the

remaining case that $\inf S=-\infty$, i.e., $S$ is not bounded
below, $-S$ cannot be bounded above because $M$ is an upper

bound of $-S$ iff $-M$ is a lower bound of $S$. Hence

$\sup(-S)=\infty$, and $\inf S=-\infty=-\sup(-S)$.
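\n A concrete instance (illustration only): for $S=(0,1]$ we have $-S=[-1,0)$, and indeed $\inf S=0=-\sup(-S)$ while $\sup S=1=-\inf(-S)$.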


\n {\bf \textcolor{red}{[\#5.5]}} The argument is identical to



\n {\bf \textcolor{red}{[\#7.2]}}


\n {\bf \textcolor{red}{[\#7.4*]}} (a) $\sqrt 2/n\to 0$. (b)

$(1+1/n)^n\to e$. Alternatively, $t_{n+1}=(t_n^2+2)/(2t_n), t_1=1$. Then $1\le
t_n\le 2$, and $(t_n)$ is decreasing for $n\ge 2$. $\lim t_n=t$ exists, and $t=\sqrt
2$ by solving $t=(t^2+2)/(2t)$.



\n {\bf \textcolor{red}{[\#8.2a,c]}} (a) $\forall \epsilon>0$ let

$N=1/\epsilon$. Then $n>N\Longrightarrow



\n {\bf \textcolor{red}{[\#8.4]}} By assumption,
$\forall\epsilon>0, \exists N>0\ s.t.\  n>N\ \Longrightarrow \
|s_n|<\epsilon/M$. Hence
$|s_nt_n|=|s_n||t_n|<(\epsilon/M)M=\epsilon$ since $|t_n|<M$ for

all $n$.


\n {\bf \textcolor{red}{[\#8.8(a)*]}} \textcolor{blue}{(This is
another example of how YOUR hand-in homework should look for
this problem: State the problem, followed by a formal declaration

this problem: State the problem, followed by a formal declaration

``Proof'' or ``Solution'' whichever applies.)}



Prove the limit $\lim(\sqrt{n^2+1}-n)=0$.


\n Proof: $\forall \epsilon>0$, let $N=1/\epsilon$. Then for $n>N$
we have
$$|\sqrt{n^2+1}-n-0|=\frac{(\sqrt{n^2+1}-n)(\sqrt{n^2+1}+n)}{\sqrt{n^2+1}+n}
=\frac{1}{\sqrt{n^2+1}+n}<\frac{1}{n}<\frac{1}{N}=\epsilon.$$
This proves $\lim(\sqrt{n^2+1}-n)=0$ by definition. $\square$


\n {\bf \textcolor{red}{[\#9.2b]}} By Theorems 9.2 and 9.3,

$\lim(3y_n-x_n)=\lim(3y_n+(-1)x_n)=3\cdot 7+(-1)\cdot 3=18$. By

Theorem 9.6, $\lim((3y_n-x_n)/y_n)=18/7$.


\n {\bf \textcolor{red}{[\#9.4]}} (b) $s_1=1,

s_{n+1}=\sqrt{s_n+1}$. Assume $\lim s_n=s$ exists. Then $\lim
s_{n+1}=\lim s_n=s$. By Example 5 of $\S$ and Theorem 9.3, $s=\lim
s_{n+1}=\lim\sqrt{s_n+1}=\sqrt{\lim s_n+1}=\sqrt{s+1}$. Solving for
$s$ with $s\ge 0$ we get $s=(1+\sqrt 5)/2$.
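\n For completeness, the algebra in the last step: squaring $s=\sqrt{s+1}$ gives $s^2-s-1=0$, so $s=(1\pm\sqrt 5)/2$; since $s_n\ge 1$ for all $n$ forces $s\ge 1$, only the root $s=(1+\sqrt 5)/2$ qualifies.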


\n {\bf \textcolor{red}{[\#9.6a,b]}} (a) Plug in the ``limit'' to

get $a=3a^2\Longrightarrow a=0 \text{ or } a=1/3$. (b) The limit

does not exist because $x_n>3^{n-1}\to\infty$ by induction.


\n {\bf \textcolor{red}{[\#9.8]}}


\n {\bf \textcolor{red}{[\#9.10*]}} (a) By assumption, $\forall

M>0, \exists N,\ s.t.\ n>N\Longrightarrow s_n>M/k>0$ since $k>0$

is a constant. Therefore $ks_n>k(M/k)=M$ for all $n>N$, showing

$ks_n\to\infty$ by definition. (b) $(\Longrightarrow)$ $\forall

M<0,\ \exists N,\ s.t. \ n>N\Longrightarrow s_n>-M$ since $\lim
s_n=+\infty$. Hence we have $-s_n<M$, showing $\lim(-s_n)=-\infty$ by

definition. A similar argument applies to $(\Longleftarrow)$, and also
to (c).


\n {\bf \textcolor{red}{[\#9.12*]}} (a) Let $\epsilon_0=(1-L)/2>0$

as $L<1$ . By assumption $\exists N_0>0$ such that $\forall n\ge

N_0, ||s_{n+1}|/|s_n|-L|<\epsilon_0 \Longleftrightarrow

L-\epsilon_0<|s_{n+1}|/|s_n|<L+\epsilon_0=(1+L)/2$. Let

$a=(1+L)/2$. Then $L<a<1$, and $|s_{n+1}|/|s_n|<a$ for $n\ge N_0$.

Repeatedly using this inequality for $n, n-1, \dots,

n-(n-N_0)=N_0\ge N_0$, we have
$|s_n|<a|s_{n-1}|<a^2|s_{n-2}|<\cdots
<a^{n-N_0}|s_{n-(n-N_0)}|=a^{n-N_0}|s_{N_0}|$. Because $N_0$ is

fixed and $a^n\to 0$ as $n\to \infty$ since $0<a<1$, we have

$\forall \epsilon>0, \exists N$ s.t. $n>N\Longrightarrow

a^n<\epsilon a^{N_0}/|s_{N_0}|\Longrightarrow

|s_n|<a^{n-N_0}|s_{N_0}|<\epsilon$. (b) Let $t_n=1/|s_n|$. Then
$t_{n+1}/t_n=1/(|s_{n+1}|/|s_{n}|)\to 1/L<1$. By (a) $\lim t_n=0$,
which is equivalent to $|s_n|\to\infty$ by Theorem 9.10.


\n {\bf \textcolor{red}{[\#9.14]}} Follow the hints.


\n {\bf \textcolor{red}{[\#9.16]}} Follow the instruction.


\n {\bf \textcolor{red}{[\#10.6*]}} (a) $\forall \epsilon>0$, let

$N=-\frac{\ln\epsilon}{\ln 2}$, then $n\ge m> N$ implies
\begin{align*}
|s_n-s_m|&\le |s_n-s_{n-1}|+|s_{n-1}-s_{n-2}|+\cdots+|s_{m+1}-s_m|\\
&<2^{-(n-1)}+2^{-(n-2)}+\cdots+2^{-m}\\
&<2^{-(m-1)}\le 2^{-N}=\epsilon.
\end{align*}


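\n For the record, the choice of $N$ in (a) indeed gives $2^{-N}=\epsilon$: $2^{-N}=e^{-N\ln 2}=e^{\ln \epsilon}=\epsilon$.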

(b) No. Counterexample: $s_n=\sum_{k=1}^n1/k\to\infty$ as
$n\to\infty$; hence it cannot be Cauchy, for every Cauchy sequence
must be bounded. However, $s_{n+1}-s_n=1/(n+1)<1/n$ is satisfied.


\n {\bf \textcolor{red}{[\#10.10*]}} (a, b) are straightforward.

(c) Assume $(s_n)$ is not nonincreasing, then there is an $n$ such

that $s_{n+1}>s_n\Longleftrightarrow (s_n+1)/3>s_n

\Longleftrightarrow s_n<1/2$ contradicting $s_n\ge 1/2$ for all

$n$. (d) Since $(s_n)$ is nonincreasing and bounded below by $1/2$,

$\lim s_n=s_0\in{\Bbb R}$ exists. Using a limit theorem on

$s_{n+1}=(s_n+1)/3$ we have $s_0=\lim s_{n+1}=\lim (s_n+1)/3=(\lim

s_n+1)/3=(s_0+1)/3\Longleftrightarrow s_0=1/2$.


\n {\bf \textcolor{red}{[\#11.8*]}} (a) Let $S_N=\{s_n:n>N\}$.

Then by Ex.5.4, $\inf

\{s_n:n>N\}=-\sup(-\{s_n:n>N\})=-\sup\{-s_n:n>N\}$. Taking limit

in $N\to\infty$, then by definition and a limit theorem we have
$\liminf s_n=\lim_{N\to\infty}\inf\{s_n:n>N\}
=-\lim_{N\to\infty}\sup\{-s_n:n>N\}=-\limsup (-s_n)$. (b)

Obviously $(-t_k)$ is monotone iff $(t_k)$ is monotone. Then by a
limit theorem and (a) we have
$\lim t_k=-\lim(-t_k)=-(\limsup
(-s_n))=\liminf s_n$.


\n {\bf \textcolor{red}{[\#11.10*]}} (a) $S=\{1/n:n\in{\Bbb

N}\}\cup\{0\}$. In fact, the $n$th column subsequence converges to

$1/n$, and every row subsequence converges to 0. So

$S\supset\{1/n:n\in{\Bbb N}\}\cup\{0\}$. Moreover, for any number

$a\notin S$, there is a small $\epsilon_0>0$ such that the

interval $(a-\epsilon_0,a+\epsilon_0)$ contains no points of the

sequence $(s_n)$, and therefore $a$ cannot be a subsequential

limit of $(s_n)$, and $S=\{1/n:n\in{\Bbb N}\}\cup\{0\}$ follows.

(b) By inspection, $\limsup s_n=1=\sup S,\liminf s_n=0=\inf S$.


\n {\bf \textcolor{red}{[\#12.2]}} Since $0\le \liminf

|s_n|\le\limsup |s_n|$ for any sequence, then $\limsup|s_n|=0$ iff

$\liminf |s_n|=\limsup |s_n|=0$ iff $\lim|s_n|=0$ iff $\lim
s_n=0$.


\n {\bf \textcolor{red}{[\#12.4]}} Follow the hint.


\n {\bf \textcolor{red}{[\#12.6]}} Follow the hint.


\n {\bf \textcolor{red}{[\#12.8*]}} Because every sequence has a

subsequence converging to its limsup (Rmk: state known results

rather than theorem, corollary, or lemma numbers from text, such

as Corollary 11.4 in this case. Follow this convention when you

take exams), there is a subsequence $r_{n_k}=s_{n_k}t_{n_k}$ of

the product sequence $r_n:=s_nt_n$ such that

$\lim_{k\to\infty}s_{n_k}t_{n_k}=\limsup s_nt_n$. (This does not

imply $s_{n_k}$ or $t_{n_k}$ convergent!). Because every bounded

sequence has a converging subsequence, and $(s_n),(t_n)$, hence

$(s_{n_k}), (t_{n_k})$ automatically, are bounded, $(s_{n_k})$ has

a converging subsequence $(s_{n_{k_l}})$ to $s\in{\Bbb R}$ (This

does not imply $(t_{n_{k_l}})$ converges). By the same result,

$(t_{n_{k_l}})$ has a converging subsequence $(t_{n_{k_{l_m}}})$

to $t\in{\Bbb R}$. Now we have found convergent subsequences

$(t_{n_{k_{l_m}}})$, $(s_{n_{k_{l_m}}})$. Because all subsequences
of a convergent sequence converge to the same limit, we have
$\lim_{m\to\infty}s_{n_{k_{l_m}}}=s$, $\lim_{m\to\infty}t_{n_{k_{l_m}}}=t$, and
$\lim_{m\to\infty}s_{n_{k_{l_m}}}t_{n_{k_{l_m}}}=\lim_{k\to\infty}s_{n_k}t_{n_k}=\limsup s_nt_n$.

On the other hand, by the limit product theorem we have
\begin{align*}
\limsup s_nt_n&=\lim_{m\to\infty}s_{n_{k_{l_m}}}t_{n_{k_{l_m}}}
=\lim_{m\to\infty}s_{n_{k_{l_m}}}\lim_{m\to\infty}t_{n_{k_{l_m}}}=st\\
&\le \limsup s_n\limsup t_n.
\end{align*}



The last inequality holds because $\limsup$ of every sequence is

the least upper bound of all the sequential limits of the

sequence, and the fact that both $s_n, t_n$ are nonnegative.


\n {\bf \textcolor{blue}{A simpler, alternative proof by Kirsty:}}

For any $n>N$ we have $S_N=\sup\{s_n:n>N\}\ge s_n\ge

0,T_N=\sup\{t_n:n>N\}\ge t_n\ge 0$ $\Longrightarrow s_nt_n\le

S_NT_N\Longrightarrow \sup\{s_nt_n:n>N\}\le S_NT_N$. By definition

of $\limsup$ and the product limit theorem, we have



\begin{align*}
\limsup s_nt_n&=\lim_{N\to\infty}\sup\{s_nt_n:n>N\}\le \lim_{N\to\infty}S_NT_N\\
&=\lim_{N\to\infty}S_N\lim_{N\to\infty}T_N=\limsup s_n\limsup t_n.\qquad\qquad \square
\end{align*}




\n {\bf \textcolor{red}{[\#12.12*]}} Following the hint, we have

for any $n>M>N$,



\begin{align*}
\sigma_n&=\frac{s_1+\cdots +s_n}{n}=\frac{s_1+\cdots
+s_N}{n}+\frac{s_{N+1}+\cdots +s_n}{n}\\
&\le\frac{s_1+\cdots +s_N}{M}+\frac{n-N}{n}\sup\{s_n:n>N\}\\
&\hbox{(for $s_n\ge 0, n>M$)}\\
&<\frac{s_1+\cdots +s_N}{M}+\sup\{s_n:n>N\}\\
&\hbox{(for $\frac{n-N}{n}<1,n>N$ and $s_n\ge 0$)}
\end{align*}



Since it holds for all $n>M$, it holds for $\sup\{\sigma_n:n>M\}$:
$$
\sup\{\sigma_n:n>M\}\le \frac{s_1+\cdots +s_N}{M}+\sup\{s_n:n>N\}.
$$


By definition and the fact that limits preserve inequality

relations we have $\limsup
\sigma_n=\lim_{M\to\infty}\sup\{\sigma_n:n>M\}\le\lim_{M\to\infty}[\frac{s_1+\cdots+
s_N}{M}+\sup\{s_n:n>N\}]=\sup\{s_n:n>N\}$. Since this inequality

holds for all $N$,$\limsup

\sigma_n\le\lim_{N\to\infty}\sup\{s_n:n>N\}=\limsup s_n$ follows.

To show $\liminf s_n\le\liminf \sigma_n$, we argue similarly as
above, as follows:



\begin{align*}
\sigma_n&=\frac{s_1+\cdots +s_n}{n}=\frac{s_1+\cdots
+s_N}{n}+\frac{s_{N+1}+\cdots +s_n}{n}\\
&\ge\frac{n-N}{n}\inf\{s_n:n>N\}\quad\hbox{(for $s_n\ge 0$)}\\
&\ge\frac{M-N}{M}\inf\{s_n:n>N\}\quad\hbox{(for $s_n\ge 0,n>M$)}
\end{align*}



Taking the limits in the order of $M\to\infty$ first and

$N\to\infty$ afterwards gives rise to the required result.


\n {\bf \textcolor{red}{[\#14.4]}} (a) Use Comparison Test.

$1/[n+(-1)^n]^2\le 1/(n-1)^2$ and $\sum_{n=2}^\infty

1/(n-1)^2=\sum_{n=1}^\infty 1/n^2$ converges. (b) The partial sum

$s_n=(\sqrt 2-\sqrt 1)+(\sqrt 3-\sqrt 2)+\cdots
+(\sqrt{n+1}-\sqrt n)=\sqrt{n+1}-1\to\infty$
diverges. (c) By Ratio Test, $|a_{n+1}/a_n|=1/(1+1/n)^n\to
1/e<1\Longrightarrow$ the series converges.


\n {\bf \textcolor{red}{[\#14.6]}} Let $B$ be an upper bound of

$(|b_n|)$: $|b_n|\le B \forall n$. By Cauchy criterion and the

assumption that $\sum|a_n|<\infty$, we have $\forall \epsilon>0,

\exists N$ s.t. $\forall m>n>N \Longrightarrow

\sum_{k=n+1}^m|a_k|<\epsilon/B$. Hence $\sum_{k=n+1}^m|a_kb_k|\le

B\sum_{k=n+1}^m|a_k|<B\epsilon/B=\epsilon$. This proves by Cauchy

criterion that $\sum a_nb_n$ converges absolutely.


\n {\bf \textcolor{red}{[\#14.8]}} Use this inequality:

$(a+b)^2=a^2+2ab+b^2\ge ab$, i.e., $\sqrt{ab}\le a+b$ for $a,b\ge 0$, and the Comparison Test.


\n {\bf \textcolor{red}{[\#14.12*]}} Since every sequence has a
subsequence converging to its liminf, we have in this case a

subsequence $a_{n_k}$ such that $|a_{n_k}|\to \liminf|a_n|=0$.

Thus, w.l.o.g., we assume $|a_n|\to 0$ as $n\to\infty$. We next

construct a subsequence $a_{n_k}$ such that $|a_{n_k}|\le

\frac{1}{k^2}$ (implicitly, with
$n_1<n_2<\cdots<n_k<\cdots$). We do this by induction using the

assumption that $a_n\to 0$. By definition, for $\epsilon=1/1^2=1$,

$\exists N$ s.t. $n>N\Longrightarrow

|a_n|=|a_n-0|<\epsilon=1/1^2$. Define $n_1=N+1$. Assume

$a_{n_i},i=1,2,\dots,k$ are found. Then to construct $a_{n_{k+1}}$

we use again the assumption that $a_n\to 0$. To this end, let

$\epsilon=1/(k+1)^2$. Then $\exists N$ s.t. $n>N\Longrightarrow

|a_n|<\epsilon=1/(k+1)^2$. Define $n_{k+1}=\max\{N+1,n_k+1\}$ then

we have $n_{k+1}>n_k$ and $|a_{n_{k+1}}|<1/(k+1)^2$ as required.

Hence by induction $(a_{n_k})$ can be constructed with

$|a_{n_k}|<1/k^2$. Since $\sum \frac{1}{k^2}<\infty$ converges, by

the Comparison Test, $\sum_{k=1}^\infty a_{n_k}$ converges
absolutely, hence converges.


\n {\bf \textcolor{red}{[\#14.14*]}} Let $s_n$ be the $n$th

partial sum of this series $\sum

a_n=\frac{1}{2}+\frac{1}{4}+\frac{1}{4}+\cdots$. Then $s_n$ is a

monotone increasing sequence. Notice that there are exactly
$2^{k-1}$ terms of the form $\frac{1}{2^k}$; altogether there
are $1+2+2^2+\cdots+2^{k-1}=2^k-1$ terms having
the form $\frac{1}{2^i}$ with $i=1,2,\dots, k$. Hence the
$(2^k-1)$st partial sum is
$$
s_{2^k-1}=\frac{1}{2}+2\cdot\frac{1}{4}+2^2\cdot\frac{1}{8}+\cdots+2^{k-1}\cdot\frac{1}{2^k}
=\underbrace{\frac{1}{2}+\cdots+\frac{1}{2}}_{k\ \rm terms}=\frac{k}{2}.
$$
Hence $s_{2^k-1}=k/2\to\infty$, and $s_n\to\infty$ follows. It is

obvious that $a_n<\frac{1}{n}$, and then by the Comparison Test we

conclude that $\sum \frac{1}{n}$ diverges as well.
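\n A quick check of the formula $s_{2^k-1}=k/2$ for small $k$ (illustration only): $s_1=\frac{1}{2}$, $s_3=\frac{1}{2}+\frac{1}{4}+\frac{1}{4}=1$, and $s_7=s_3+4\cdot\frac{1}{8}=\frac{3}{2}$.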



\n {\bf \textcolor{red}{[\#15.4*]}} (a) Either by the
Comparison/Integral Test or by the Comparison Test alone. By the Comparison/Integral
Test, we start off by noticing $\frac{1}{\sqrt{n} \log n}\ge

\frac{1}{n\log n}$. $f(x)=\frac{1}{x\log x}$ is monotone

decreasing for $x\ge 2$. $\sum_{n=2}^\infty\frac{1}{n\log n}\ge

\int_2^\infty \frac{1}{x\log x}dx=\infty$ because $\int

\frac{1}{x\log x}dx =\log\log x$. Therefore by Integral Test,

$\sum_{n=2}^\infty\frac{1}{n\log n}$ diverges, and by Comparison
Test $\sum\frac{1}{\sqrt{n} \log n}\ge \sum\frac{1}{n\log n} =\infty$

diverges as well. By Comparison Test alone, we notice $\log

n<\sqrt n$ for $n\ge 1$, and $\frac{1}{\sqrt{n}\log n}\ge

\frac{1}{n}$. Since $\sum \frac{1}{n}$ diverges, $\sum

\frac{1}{\sqrt{n}\log n}$ diverges. (b) Either by Comparison or

Integral Test. Using the Comparison Test, we have $\frac{\log n}{n}\ge
\frac{1}{n}$ for $n\ge 3$ (assuming $\log$ is the natural
logarithm, or $n> 10$ if the base-10 logarithm) and the
divergence follows from the divergence of $\sum \frac{1}{n}$. Using the
Integral Test, we check first that $f(x)=\frac{\log x}{x}$ is

monotone decreasing which is the case for $x\ge 3$ since

$f'(x)=\frac{1-\log x}{x^2}<0$. Because $\int_3^\infty\frac{\log

x}{x}dx=\frac{(\log x)^2}{2}|_3^\infty =\infty$, the series $\sum

\frac{\log n}{n}$ diverges as well. (c) Use Integral Test on

$f(x)=\frac{1}{x\log x(\log \log x)}$. (d) Use Integral Test or

Comparison Test. By Integral Test, we use $f(x)=\frac{\log

x}{x^2}$ which is monotone decreasing since $f'(x)=\frac{1-2\log

x}{x^3}<0$ for $x\ge 2$. Also $\int\frac{\log

x}{x^2}dx=-\frac{\log x}{x}+\int \frac{1}{x^2}dx=-\frac{\log

x}{x}-\frac{1}{x}$ using integration by parts. Hence

$\int_2^\infty\frac{\log x}{x^2}dx=\frac{1+\log 2}{2}$ converges.

By Integral Test, the series converges. Using the Comparison Test, we
note that $\log n<n^q$ for any fixed $0<q<1$ and sufficiently

large $n>N$. So $\frac{\log n}{n^2}<\frac{n^q}{n^2}=\frac{1}{n^p}$

with $p=2-q>1$. Since $\sum \frac{1}{n^p}$ converges for any

$p>1$, we conclude by the Comparison Test that $\sum \frac{\log

n}{n^2}<\sum \frac{1}{n^p}<\infty$ converges.


\n {\bf \textcolor{red}{[\#15.6*]}} (a) $a_n=1/n$. (b) $\sum

a_n<+\infty, a_n\ge 0$ implies $a_n^2\le a_n$ for all large $n$
since $a_n\to 0$. Then apply the Comparison Test. (c) $a_n=(-1)^n/\sqrt{n}$.



\n {\bf \textcolor{red}{[\#17.13*]}} (a) For any $x\in{\Bbb R}$

and $n\in{\Bbb N}$, there is a rational $r_n\in{\Bbb Q}$ such that

$x<r_n<x+1/n$ by the Archimedean Property. Hence the rational

number sequence $r_n\to x$. Also the irrational number sequence

$t_n=r_n+\sqrt{2}/n\to x$. Therefore either $\lim f(r_n)=1\ne

f(x)$ if $x\in{\Bbb R}-{\Bbb Q}$ or $\lim f(t_n)=0\ne f(x)$ if

$x\in{\Bbb Q}$. $f$ is not continuous in either case. (b) Similar

to (a) when $x\ne 0$. For $x=0$, we always have

$|h(x)-h(0)|=|h(x)|\le |x|$, to which an $\epsilon$--$\delta$ argument
can easily be fashioned.


\n {\bf \textcolor{red}{[\#17.14*]}} If $x\in{\Bbb Q}$, construct

an irrational sequence $t_n\in{\Bbb R}-{\Bbb Q}$ in the same way

as in \#17.13 above so that $t_n\to x$ and $\lim f(t_n)=0\ne

f(x)=1/q$ if $x\ne 0$. If $x\in{\Bbb R}-{\Bbb Q}$, $f(x)=0$ and we

claim for every sequence $x_n\to x$, $\lim f(x_n)\to 0$.

Otherwise, there is a sequence $x_n\to x$ but $f(x_n)\nrightarrow

0$, and w.l.o.g we assume $|f(x_n)|\ge \epsilon_0>0$ for some

constant $\epsilon_0$. This implies then that $x_n\in{\Bbb Q}$ and

$f(x_n)=1/q_n\ge \epsilon_0$, which in turn implies $0<q_n\le

A=1/\epsilon_0$. Since convergent sequences are bounded,

$|p_n/q_n|=|x_n|\le B$ for some constant $B$. Hence $|p_n|\le

B|q_n|\le BA$. Therefore there are only finitely many pairings of
$p_n,q_n$ with $|p_n|\le BA, 0<q_n\le A$. Therefore the sequence

$x_n$ can only take on finitely many values. Since sequence

$(x_n)$ converges, $x_n$ must take on a fixed number for all large
$n$, and that fixed number is one of the rationals $p_n/q_n$ with
$|p_n|\le BA$ and $0<q_n\le A$. This contradicts the fact that
$x_n$ converges to an irrational number.


\n {\bf \textcolor{red}{[\#17.17*]}} It is obvious that the

condition is necessary since it is a special case of the

definition that $x_n\to x_0$ implies $f(x_n)\to f(x_0)$.

Conversely, assume the contrary that there is a sequence $x_n\to

x_0$ with $x_n\in{\rm dom}(f)$ but $\lim f(x_n)\nrightarrow

f(x_0)$. Then $\exists \epsilon_0>0$ so that $\forall N, \exists

n\ge N$ with $|f(x_n)-f(x_0)|\ge \epsilon_0$. That is, a
subsequence can be found so that $|f(x_n)-f(x_0)|\ge \epsilon_0$.
Therefore w.l.o.g., we assume $x_n$ is such a subsequence. Then we
conclude right away that $x_n\ne x_0$ but $x_n\to x_0$
nonetheless. This contradicts the assumption that we must have
$f(x_n)\to f(x_0)$ whenever $x_n\to x_0$ and $x_n\ne x_0$ for all
$n$.

\n {\bf \textcolor{red}{[\#18.4*]}} $f(x)=1/(x-x_0)$.


\n {\bf \textcolor{red}{[\#18.5*]}} (a) Let $h=f-g$. Then $h$ is

continuous as both $f$ and $g$ are continuous. Also

$h(a)=f(a)-g(a)\le 0$ and $h(b)=f(b)-g(b)\ge 0$ by assumption.

Then by the Intermediate Value Theorem $h(x_0)=0$ for some

$x_0\in[a,b]$, implying $f(x_0)=g(x_0)$ as required. (b) Let

$g(x)=x$. Then $f(0)\ge 0=g(0)$ and $f(1)\le 1=g(1)$ by the

assumption that $f$ maps $[0,1]$ into $[0,1]$. Hence the
conditions of (a) are satisfied for the given functions $f,g$, and
the conclusion $f(x_0)=x_0$ for some $x_0\in[0,1]$ follows.

\n {\bf \textcolor{red}{[\#18.10*]}} Let $g(x)=f(x+1)-f(x), x\in

[0,1]$. Then $g$ is continuous in $[0,1]$ as $f$ is continuous in

$[0,2]$. Also, $g(0)=f(1)-f(0), g(1)=f(2)-f(1)=f(0)-f(1)$ by the

assumption that $f(2)=f(0)$. Therefore $g(0)=-(f(1)-f(0))=-g(1)$,

implying either $g(0)=g(1)=0$ or $g(0)$ and $g(1)$ have opposite

signs. In the latter case there exists a number $x_0\in [0,1]$

such that $g(x_0)=0$ by IVT. In either case the same result
holds. Therefore with $y_0=x_0+1$, $f(y_0)=f(x_0)$ follows.


\n {\bf \textcolor{red}{[\#19.2*]}} (c) Only. $\forall

\epsilon>0$, let $\delta =\epsilon/4$ s.t. $|x-y|<\delta, x,y\ge

1/2$ implies




\n {\bf \textcolor{red}{[\#19.7*]}} (a) Note first that the

continuity of $f$ on $[0,\infty)$ implies the continuity of $f$ on

any subset, including $[0,k+1]$. Since $[0,k+1]$ is a bounded and
closed interval of ${\Bbb R}$, $f$ is uniformly continuous on

$[0,k+1]$. We now show that $f$ is uniformly continuous on

$[0,\infty)$ by definition. $\forall \epsilon>0$, $\exists

\delta_1>0$ s.t. $|x-y|<\delta_1,x,y\in [k,\infty)$ implies

$|f(x)-f(y)|<\epsilon$ by the assumption that $f$ is uniformly

continuous on $[k,\infty)$. Since $f$ is uniformly continuous on

$[0,k+1]$, $\exists \delta_2>0$ s.t. $|x-y|<\delta_2, x,y\in

[0,k+1]$ implies $|f(x)-f(y)|<\epsilon$. Let

$\delta=\min\{1,\delta_1,\delta_2\}$, we claim that $|x-y|<\delta,

x,y\in[0,\infty)$ implies $|f(x)-f(y)|<\epsilon$. To this end all

we need to show is that the condition  $|x-y|<\delta,

x,y\in[0,\infty)$ implies either $x,y\in[0,k+1]$ or $x,y\in

[k,\infty)$. Suppose $x,y\in[0,k+1]$ does not hold. Then either

both $x,y\ge k+1>k$ or one of $x,y$ is in $[0,k+1]$ while the

other is not. In the former case we have $x,y\in [k,\infty)$. In

the latter case, suppose w.l.o.g. that $x<k+1\le y$. Then the

condition that $|x-y|<\delta\le 1$ implies that $x=y-(y-x)\ge

y-|y-x|\ge y-\delta> k+1-1=k$. That is, $k<x<y$ and $x,y\in

[k,\infty)$ holds as well. (b) Obviously $f(x)=\sqrt x$ is

continuous in $[0,\infty)$. Because

$f'(x)=\frac{1}{2}\frac{1}{\sqrt x}\le \frac{1}{2}$ for
$x\ge 1$, $f$ is uniformly continuous on $[1,\infty)$. By
(a), $f$ is uniformly continuous on $[0,\infty)$.
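\n For (b), the uniform continuity on $[1,\infty)$ can also be seen directly, without the derivative bound: for $x,y\ge 1$, $|\sqrt x-\sqrt y|=\frac{|x-y|}{\sqrt x+\sqrt y}\le\frac{|x-y|}{2}$, so $\delta=2\epsilon$ works.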


\n {\bf \textcolor{red}{[\#19.9*]}} (a) $f(x)=x\sin(\frac{1}{x})$

for $x\ne 0$ and $f(0)=0$ is continuous on ${\Bbb R}$. This

follows from the product and composition rules for continuous

functions when $x\ne 0$ and the estimate $|f(x)-f(0)|\le |x|$ at

the point $0$. (b) Let $S$ be any bounded subset of ${\Bbb R}$.

Then $a=\inf S$ and $b=\sup S$ are all finite numbers. Therefore

$f$ is uniformly continuous in the bounded, closed interval

$[a,b]$ since $f$ is continuous there. Hence $f$ is uniformly

continuous on any subset of $[a,b]$ which includes $S$. (c)

Because $f'(x)=\sin(\frac{1}{x})-\frac{1}{x}\cos(\frac{1}{x})$ for

$x\ne 0$ we have $|f'(x)|\le 1+1 =2 $ for $|x|\ge 1$. Hence $f$ is

uniformly continuous in $(-\infty, -1]$ and $[1,\infty)$. Because

$f$ is also uniformly continuous in, say, $[-2,2]$, exactly the
same argument as for \#19.7(a) can be used to show $f$ is uniformly
continuous in ${\Bbb R}$.


\n {\bf \textcolor{blue}{[Notes Supplement On Riemann Integral]}}

Let $U(\{x_i\})=\sum \sup_{I_i}f\Delta x_i$ be the upper sum of

any partition $a=x_0<x_1<\dots<x_n=b, \Delta x_i=x_i-x_{i-1},

i=1,2,\dots, n, I_i=[x_{i-1},x_i]$. We claim that $\lim_{\Delta

x\to 0}U(\{x_i\})=\ell$ exists where $\Delta x =\max\{\Delta

x_i,i=1,2,\dots, n\}$.


Since $f$ is continuous in $[a,b]$, $\exists \bar x_i\in

[x_{i-1},x_i]$ such that $f(\bar x_i)=\sup_{I_i}f$. Since $f$ is

uniformly continuous in $[a,b]$, then $\forall \epsilon>0$,
$\exists \delta>0$ s.t. $|x-y|<\delta, x,y\in [a,b]$ implies
$|f(x)-f(y)|<\epsilon$.



We now proceed to prove the claim by first developing a background

result. A partition $a=y_0<y_1<\dots<y_m=b$ is said to be a {\em

refinement} of a given partition $a=x_0<x_1<\dots<x_n=b$ if

$\{x_i\}$ is just a subset of $\{y_j\}$, i.e., $x_i=y_{j_i}$ for

some $j_i$ and for all $i=0,1,2,\dots, n$. Then the difference

between the corresponding upper sums $U(\{x_i\})-U(\{y_j\})$ has

the following properties.


Either a subinterval $[x_{i-1},x_i]$ contains no new refinement points,
i.e., $x_{i}=y_{j_i}$ for some $j_i$ and $x_{i-1}=y_{j_i-1}$.

In this case the corresponding summands $\sup_{I_i}f\Delta x_i$

and $\sup_{J_{j_i}}f\Delta y_{j_i}$ are identical and cancel out

each other in the difference $U(\{x_i\})-U(\{y_j\})$.


Or a subinterval $[x_{i-1},x_i]$ contains some refinement points

$x_{i-1}=y_j<y_{j+1}<\cdots<y_{j+k}=x_i$ for some $k>1$. In this

case the summand $\sup_{I_i}f\Delta x_i$ corresponds to the subsum

$\sum_{l=1}^k\sup_{J_{j+l}}f\Delta y_{j+l}$ with

$J_{j+l}=[y_{j+l-1},y_{j+l}]$. Breaking up $[x_{i-1},x_i]$

according to its refinement

$x_{i-1}=y_j<y_{j+1}<\cdots<y_{j+k}=x_i$, the corresponding

difference in absolute value $|\sup_{I_i}f\Delta

x_i-\sum_{l=1}^k\sup_{J_{j+l}}f\Delta y_{j+l}|$ becomes
$$\left|\sum_{l=1}^k(\sup_{I_i}f-\sup_{J_{j+l}}f)\Delta y_{j+l}\right|
\le\sum_{l=1}^k|\sup_{I_i}f-\sup_{J_{j+l}}f|\Delta
y_{j+l}<\sum_{l=1}^k\epsilon\Delta y_{j+l}\le \epsilon \Delta x_i$$


if $\Delta x_i=x_i-x_{i-1}<\delta$ by the uniform continuity

since $\sup_{I_i}f=f(\bar x_i)$ and $\sup_{J_{j+l}}f=f(\bar

y_{j+l})$ with $\bar x_i,\bar y_{j+l}\in

[x_{i-1},x_i]\Longrightarrow |\bar x_i-\bar y_{j+l}|\le

x_i-x_{i-1}<\delta$. Hence the upper sum difference in absolute
value $|U(\{x_i\})-U(\{y_j\})|$ as a whole is bounded above by

$\epsilon\sum_{i=1}^n\Delta x_i=\epsilon (b-a)$ for any refinement

of partition $\{x_i\}$ and $\Delta x\le \delta$.


We are now ready to prove the claim $\lim_{\Delta x\to

0}U(\{x_i\})=\ell$. As we did in class we first show that the

sequence $U_n=U(\{x_i\})$ in regular partition $x_i=a+i\Delta

x,\Delta x=(b-a)/n$ has a limit. We do this by showing that

$\{U_n\}$ is a Cauchy sequence. In fact, for $(b-a)/N<\delta$ or

$N>(b-a)/\delta$ and any $m,n>N$, the partition for $U_{mn}$ is a

refinement for both partitions of $U_n$ and $U_m$ because the

partition points of $U_n$ satisfy

$x_i=a+i\frac{b-a}{n}=a+im\frac{b-a}{nm}$ which is a partition

point of $U_{mn}$ for each $i$ and similarly for $U_m$. By what we

have just proved above, $|U_n-U_m|=|U_n-U_{mn}+U_{mn}-U_m|\le

|U_n-U_{mn}|+|U_{mn}-U_m|<2\epsilon (b-a)$ since $\Delta

x=(b-a)/n$ for $U_n$ and $\Delta x=(b-a)/m$ for $U_m$ are both

less than $(b-a)/N<\delta$ for $m,n>N$. This shows $U_n$ is Cauchy

and $\lim U_n=\ell$ follows.


Finally we prove $\lim_{\Delta x\to 0}U(\{x_i\})=\ell$ for all

partition. Assume on the contrary that it is false, then a

sequence of upper Riemann sums $U(\{x_i^k\}), k=1,2,\dots$ can be

found such that $|U(\{x_i^k\})-\ell|>\epsilon_0$ for some fixed

number $\epsilon_0$ even though $\Delta x^k=\max\{\Delta x_i^k,

i=1,2,\dots n_k\}\to 0$ as $k\to \infty$. Because $U_n\to \ell$, we
have $N_0>0$ such that $n>N_0$ implies $|U_n-\ell|<\epsilon_0/2$ and
thus $|U(\{x_i^k\})-U_n|=|U(\{x_i^k\})-\ell-(U_n-\ell)|\ge
|U(\{x_i^k\})-\ell|-|U_n-\ell|
>\epsilon_0/2$ for $n>N_0$ and all $k$. This has to be a

contradiction for the following reasons. For each regular $n$th
partition and any given one in $\{x_i^k\}$, put all these
points together to form a refinement partition for both $U_n$ and
$U(\{x_i^k\})$, and denote the refinement upper sum by $U_n^k$.

Then when $\Delta x^k,(b-a)/n<\delta$, we have $|U(\{x_i^k\})-U_n|\le
|U(\{x_i^k\})-U_n^k|+|U_n^k-U_n|<2\epsilon (b-a)<\epsilon_0/2$
since $\epsilon$ is arbitrary, a contradiction. This completes the proof.