5

I was rather confused about this proof I came across in Rudin, Chapter 7. The premise is: if $K$ is a compact metric space, if $f_n \in C(K)$ for $n = 1, 2, 3, \ldots$ ($C(K)$ being the set of complex-valued, continuous, bounded functions on $K$), and if $\{f_n\}$ converges uniformly on $K$, then $\{f_n\}$ is equicontinuous on $K$.

Proof: Let $\epsilon > 0$ be given. Since $\{f_n\}$ converges uniformly, there is an integer $N$ such that $$\|f_n - f_N\| < \epsilon \qquad (n > N).$$ (See Definition 7.14.) Since continuous functions are uniformly continuous on compact sets, there is a $\delta > 0$ such that $$|f_i(x) - f_i(y)| < \epsilon$$ if $1 \leq i \leq N$ and $d(x,y) < \delta$.*

If $n < N$ and $d(x,y) < \delta$, it follows that $$|f_n(x) - f_n(y)| \leq |f_n(x) - f_N(x)| + |f_N(x) - f_N(y)| + |f_N(y) - f_n(y)| < 3\epsilon.$$

$QED$.


What I don't understand is the line ending with *. I understand that when a family of functions is uniformly continuous, for every $\epsilon > 0$ there exists an integer $N$ such that $n \geq N$ implies $|f_n(x) - f_n(y)| \leq \epsilon$, but it doesn't say anything about the case when $n \leq N$. Perhaps it has something to do with the finiteness of the number of functions?

It might be something really obvious I'm overlooking.

  • 1
    Please recheck the proof carefully, and keep us posted after you do. If your version is like mine, your confusion comes from the fact that you skipped a couple of lines.2010-12-14
  • 0
    There are two lines of text and one formula (number 43 in the book) missing from your quotation of the proof! Maybe if you read those lines too, things will become clearer...2010-12-14
  • 0
    Hey sorry I forgot a few lines, and I still don't understand it :\2010-12-14
  • 0
    There is a typo in *if $n < N$ and $d(x,y)$...*, should be *if $n > N$ and $d(x,y)$*2014-11-27

2 Answers

8

Now that you have updated with the filled in lines, it seems the confusion is mixing up the conditions for continuity and convergence. Uniform continuity has nothing to do with taking $n≥N$; that would be convergence. And the line ending with an asterisk has nothing to do with the convergence. What you can do is take $N$ different $\delta$s corresponding to the uniform continuity of each of the first $N$ functions, then take their minimum. As you said, it is possible because there are only finitely many.

Notice that for convergence, you are talking about functions getting closer together at each point (uniformly in this case) as $n$ gets larger, whereas for continuity you are talking about each function taking close values at nearby points. Equicontinuity says that you can choose your definition of "nearby" the same for all functions in the family simultaneously. The step you are asking about is basically saying that a finite family of uniformly continuous functions is equicontinuous, as spelled out below.
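In symbols, that step is just the finite-minimum argument (the names $\delta_1, \dots, \delta_N$ are introduced here for illustration; Rudin does not name them): uniform continuity of each $f_i$ gives a $\delta_i > 0$ with
$$ d(x,y) < \delta_i \quad \text{implies} \quad |f_i(x) - f_i(y)| < \epsilon, $$
and setting
$$ \delta = \min\{\delta_1, \dots, \delta_N\} $$
keeps $\delta > 0$ precisely because the minimum is taken over finitely many positive numbers. Then $d(x,y) < \delta$ forces $|f_i(x) - f_i(y)| < \epsilon$ for every $i \leq N$ simultaneously, which is what the line ending with * asserts.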

  • 0
Basically, the proof does not end at $|f_i(x) - f_i(y)| < \epsilon$ because $\min(\delta_i)$ might not exist (might be $0$ as an infimum) over infinitely many $f_i$, right?2014-11-27
  • 0
    @Oleg: Yes, it would not follow for an arbitrary sequence of continuous functions. The hypothesis of uniform convergence must be used.2014-11-30
  • 0
    Why is this proof not trivial? If every $f_n$ is continuous, then they are all uniformly continuous on a compact set and therefore they form an equicontinuous family right?2017-02-04
  • 0
    @GuachoPerez: For a finite family of functions on a compact set, yes that will do it. And sure it is trivial after you understand it.2017-02-04
0

Warm-up

What we are trying to prove is equicontinuity of a collection of functions. Equicontinuity is a property of a collection of functions, not of a single function (Definition 7.22).

For any $\epsilon > 0$ we can find a single $\delta > 0$ that works for every function $f$ in our collection (= set) of functions, i.e.

$$ d(x,y) < \delta \qquad \text{implies} \qquad |f(x) - f(y)| < \epsilon. $$

This is a direct proof by construction, and what we need to construct is a specific $\delta$.

Metric space

We are working in the metric space $\mathcal{C}(K)$ of functions that map elements of a compact metric space $K$ into the complex plane $\mathbb{C}$. The elements of this metric space $\mathcal{C}(K)$, namely the functions $f_n$, are all bounded, continuous and complex-valued.

The metric on this space is the one induced by the sup norm: the distance between two functions is the supremum, over all points of $K$, of the absolute value of their difference.
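In symbols, using the norm of Definition 7.14 that already appears in (42),
$$ \|f\| = \sup_{x \in K} |f(x)|, \qquad d(f, g) = \|f - g\| = \sup_{x \in K} |f(x) - g(x)|. $$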

Proof

By assumption $\{f_n\}$ (elements of $\mathcal{C}(K)$ arranged as a sequence) converges uniformly to some function $f$ (although the name $f$ is not used in this proof); see the Uniform convergence paragraph 4 on page 151. That paragraph is stated for $\mathcal{C}(X)$, but we are in $\mathcal{C}(K)$ and the same applies.

In any metric space a convergent sequence is a Cauchy sequence (Theorem 3.11a). Our $\{f_n\}$ converges uniformly, i.e. it converges in the sup-norm metric, and therefore it is a Cauchy sequence in $\mathcal{C}(K)$.
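The step from convergence to the Cauchy criterion is just the triangle inequality in the sup norm (with $f$ the uniform limit): for $n, m$ large enough,
$$ \|f_n - f_m\| \le \|f_n - f\| + \|f - f_m\| < \tfrac{\epsilon}{2} + \tfrac{\epsilon}{2} = \epsilon. $$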

Therefore what you see in (42) is the Cauchy criterion: beyond some index $N$, the distance between any two elements of the sequence becomes arbitrarily small. Here one of the two elements is $f_N$, and the other is any $f_n$ with index strictly greater than $N$.

Expression (42) is thus not the definition of uniform convergence, as Rudin's wording might suggest, but the Cauchy criterion: in the definition of uniform convergence, one of the terms in the difference would be the limit function $f$ to which the $f_n$ converge.
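For comparison, the Cauchy criterion in the sup norm says: for every $\epsilon > 0$ there is an $N$ such that
$$ \|f_n - f_m\| < \epsilon \qquad \text{for all } n, m \ge N; $$
fixing $m = N$ gives exactly (42).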

By assumption $K$ is compact. Continuous functions are uniformly continuous on compact sets (Theorem 4.19), so each $f_n$ in our collection is uniformly continuous. This means that for each $f_n$ there is a certain $\delta_n > 0$ such that

$$ d(x,y) < \delta_n \quad \text{implies} \quad |f_n(x) - f_n(y)| < \epsilon \qquad (43)$$

Finalizing

Above we identified the number $N$. Since the $f_n$ with indices up to $N$ form a finite collection, the minimum of the corresponding $\delta_i$ from (43) exists and is positive, so for our $\epsilon > 0$ we can take

$$ \delta = \min \{\delta_i : i \le N\} \qquad (A)$$

For functions with indices greater than $N$ we use the triangle inequality: for $n > N$ and $d(x,y) < \delta$,

$$ \left| f_n(x) - f_n(y) \right| \le \underbrace{|f_n(x) - f_N(x)|}_\text{Cauchy criterion of (42)} + \underbrace{|f_N(x) - f_N(y)|}_\text{uniform continuity of $f_N$, via (43) and (A)} + \underbrace{|f_N(y) - f_n(y)|}_\text{Cauchy criterion of (42)} < 3\epsilon \qquad (B) $$

This shows that with $\delta$ as in (A), every $f_n$ satisfies $|f_n(x) - f_n(y)| < 3\epsilon$ whenever $d(x,y) < \delta$: (43) together with (A) handles $n \le N$, and (B) handles $n > N$. Since $\epsilon$ was arbitrary, the family $\{f_n\}$ is equicontinuous.
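Explicitly, if one wants the bound in Definition 7.22 to be $\epsilon$ itself rather than $3\epsilon$, just run the whole argument with $\epsilon/3$ in place of $\epsilon$; then for $d(x,y) < \delta$ and every $n$,
$$ |f_n(x) - f_n(y)| < 3 \cdot \frac{\epsilon}{3} = \epsilon. $$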