
This is a problem from Mathematical Analysis by Zorich, from the chapter on continuous functions.

Definition: Let $P_n$ be a polynomial of degree $n$. For a function $f:[a,b]\to\mathbb{R}$, let $\Delta(P_n) = \sup_{x\in[a,b]} |f(x)-P_n(x)|$ and $E_n(f) = \inf_{P_n} \Delta(P_n)$. A polynomial $P_n$ is a polynomial of best approximation of degree $n$ of $f$ if $\Delta(P_n) = E_n(f)$.

I have already proved the following:

There exists a polynomial $P_0(x) = a$ of best approximation of degree 0.
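(For concreteness, the degree-zero case even has a standard closed form, not part of the problem statement: for continuous $f$ the optimal constant is the midrange of $f$,

$$P_0(x) \equiv \frac{1}{2}\Big(\max_{[a,b]} f + \min_{[a,b]} f\Big), \qquad E_0(f) = \frac{1}{2}\Big(\max_{[a,b]} f - \min_{[a,b]} f\Big),$$

since moving the constant away from the midrange increases the larger of the two one-sided deviations.)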

If $Q_\lambda(x) = \lambda P_n(x)$ for some fixed polynomial $P_n$, then there exists a polynomial $Q_{\lambda_0}$ such that $\Delta(Q_{\lambda_0}) = \min_{\lambda\in \mathbb{R}} \Delta(Q_\lambda)$.

I'm stuck on proving the following:

If there exists a polynomial of best approximation of degree $n$, there also exists a polynomial of best approximation of degree $n+1$.

My intuition is to prove that $\lambda_0 x^{n+1}+P_n$ is the best approximation of degree $n+1$ for $f$, where $\Delta(\lambda_0 x^{n+1}) = \min_{\lambda\in \mathbb{R}} \Delta(\lambda x^{n+1})$ and $\Delta(P_n) = E_n(f(x) - \lambda_0 x^{n+1})$.

I don't know if this approach is right. I'm stuck on how to proceed.

4 Answers

Answer (score 3)

Determining the best $n$th-degree approximation explicitly is a very difficult problem and cannot be done by a simple induction (see the chapter on Tschebyscheff approximation in your favorite textbook of numerical analysis). But existence is easy; it follows from general principles: you have to set up a certain continuous function $\Phi$ on a certain compact set $K$ in a high-dimensional space.
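A minimal numerical sketch of this principle, assuming numpy/scipy are available (the target $f$, the interval, and the degree below are illustrative placeholders, not part of the answer):

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder problem data: f, [a, b], and the degree n are illustrative.
f = np.cos
a, b, n = 0.0, 3.0, 3
xs = np.linspace(a, b, 2001)  # dense grid standing in for [a, b]

def Delta(c):
    """Phi(c): sup over the grid of |f(x) - P_c(x)|, with coefficients c
    given highest degree first, as np.polyval expects."""
    return np.max(np.abs(f(xs) - np.polyval(c, xs)))

# Minimize the sup-norm error over coefficient vectors.  Restricting to a
# large ball of coefficients would make the domain compact -- the role of
# K in the argument above.  Nelder-Mead tolerates the non-smooth max.
res = minimize(Delta, x0=np.zeros(n + 1), method="Nelder-Mead")
print("approximate E_n(f):", res.fun)
print("approximate best coefficients:", res.x)
```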

  • For the benefit of search engines: this is the Chebyshev minimax approximation problem: finding the polynomial approximant to a function with "equi-ripple" error. (2010-10-26)
  • Thanks. I found counterexamples to what I suggested. The proof of the second statement generalizes easily to $n$ dimensions and leads to the result, but the book hasn't covered continuous functions in higher dimensions yet. Is there another way to solve this problem? (2010-10-26)
Answer (score 1)

Here is the original problem from the book:

Let $P_n$ be a polynomial of degree $n$. For a function $f:[a,b]\to\mathbb{R}$, let $\Delta(P_n) = \sup_{x\in[a,b]} |f(x)-P_n(x)|$ and $E_n(f) = \inf_{P_n} \Delta(P_n)$; that is, the infimum is taken over all polynomials of degree $n$. A polynomial $P_n$ is called a polynomial of best approximation of degree $n$ if $\Delta(P_n) = E_n(f)$. Show that:

(a) There exists a polynomial of best approximation of degree zero.

(b) Among polynomials $Q_\lambda(x)$ of the form $\lambda P(x)$, where $P(x)$ is a fixed polynomial, there is a polynomial $Q_{\lambda_0}$ such that $\Delta(Q_{\lambda_0}) = \min_{\lambda\in \mathbb{R}} \Delta(Q_\lambda)$.

(c) If there exists a polynomial of best approximation of degree $n$, there also exists a polynomial of best approximation of degree $n+1$.

(d) For any $n$, there exists a polynomial of best approximation of degree $n$.

Like the OP, I have no idea how to prove (c), but I can prove (d) directly. Formally I didn't use continuity in higher dimensions, but in effect I did.

Proof. Let $j : \mathbb{R}^{n+1} \to \mathbb{R}$ be given by $j(c_0, c_1, \dots, c_n) = \sup_{x \in [a,b]} |f(x) - \left( c_0 + c_1 x + \dots + c_n x^n \right)|$. Since we are looking for $c_0, \dots, c_n$ at which $j$ attains its minimum, we may restrict ourselves to tuples $c_0, \dots, c_n$ such that $|c_0 + c_1 x + \dots + c_n x^n| \le M$ for all $x \in [a,b]$, for some $M$ (any polynomial competing with the infimum satisfies such a bound, with $M = \sup_{x\in[a,b]}|f(x)| + E_n(f) + 1$, say). Using, for example, Lagrange's interpolation formula, this condition implies that $|c_i| \le M_1$ for all $i$. Thus we consider the function $k$, the restriction of $j$ to tuples $(c_0, \dots, c_n)$ satisfying $|c_i| \le M_1$.
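In more detail (a sketch of this step, with the interpolation nodes chosen for illustration): fix $n+1$ distinct points $x_0 < x_1 < \dots < x_n$ in $[a,b]$ and write $P(x) = c_0 + c_1 x + \dots + c_n x^n$. By Lagrange's formula,

$$P(x) = \sum_{j=0}^{n} P(x_j)\,\ell_j(x), \qquad \ell_j(x) = \prod_{i \neq j} \frac{x - x_i}{x_j - x_i},$$

so each $c_i$ is a fixed linear combination of the values $P(x_0), \dots, P(x_n)$, each of absolute value at most $M$; hence $|c_i| \le M_1$ for a constant $M_1$ depending only on $M$, $n$, and the chosen nodes.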

By the definition of infimum, there is a sequence $(c_{0m}, c_{1m}, \dots, c_{nm})$, $m \in \mathbb N$, such that $k(c_{0m}, c_{1m}, \dots, c_{nm})$ tends to $t := \inf k$. Because the $c_i$ are bounded, we may extract from that sequence a subsequence $(c_{0p}, c_{1p}, \dots, c_{np})$ such that $c_{ip} \rightarrow t_i$ for each $i$ as $p$ tends to infinity. Thus when $p$ is large enough we have $|c_{ip} - t_i| \le \delta$ for all $i$, and so, for each $x$ in $[a,b]$:

$\left| \left|f(x) - (c_{0p} + c_{1p} x + \dots + c_{np} x^n) \right| - \left|f(x) - (t_0 + t_1 x + \dots + t_n x^n) \right| \right|$ $\le \left| \left( f(x) - (c_{0p} + c_{1p} x + \dots + c_{np} x^n) \right) - \left( f(x) - \left(t_0 + t_1 x + \dots + t_n x^n \right) \right) \right|$ $= \left| (c_{0p} - t_0) + (c_{1p} - t_1)x + \dots + (c_{np} - t_n)x^n \right|$ $\le \delta \left( 1 + |x| + \dots + |x|^n \right)$ $\le \delta M_2$, where $M_2$ is a bound for $1 + |x| + \dots + |x|^n$ on $[a,b]$.

Therefore $|k(c_{0p}, c_{1p}, \dots, c_{np}) - k(t_0, t_1, \dots, t_n)|$ $= \left| \sup_{x\in[a,b]} \left|f(x) - (c_{0p} + c_{1p} x + \dots + c_{np} x^n) \right| - \sup_{x\in[a,b]} \left|f(x) - (t_0 + t_1 x + \dots + t_n x^n) \right| \right|$ $\le \delta M_2$, because if each element of a family oscillates by no more than $\delta M_2$, then its supremum oscillates by no more than $\delta M_2$ as well. If $p$ is large enough, then $\delta$ is small enough that $\delta M_2$ is smaller than any desired $\epsilon$.
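The fact about suprema used here is the standard inequality, valid for bounded $g, h$ on $[a,b]$:

$$\Big| \sup_{x \in [a,b]} g(x) - \sup_{x \in [a,b]} h(x) \Big| \le \sup_{x \in [a,b]} |g(x) - h(x)|,$$

which follows by taking suprema in $g(x) \le h(x) + \sup |g - h|$ and then swapping the roles of $g$ and $h$.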

What we have done so far is to show that $k(c_{0p}, c_{1p}, \dots, c_{np}) \rightarrow k(t_0, t_1, \dots, t_n)$. But we already know that $k(c_{0p}, c_{1p}, \dots, c_{np}) \rightarrow \inf k$. Thus $\inf k = k(t_0, t_1, \dots, t_n)$, and we are done.

Answer (score 1)

This is not a complete proof, but rather a funny way of thinking about this problem.

In the question you mentioned that you have successfully solved this problem:

If $Q_\lambda(x) = \lambda P(x)$ for some fixed polynomial $P$, then there exists a polynomial $Q_{\lambda_0}$ such that $\Delta(Q_{\lambda_0}) = \min_{\lambda\in\mathbb{R}} \Delta(Q_\lambda)$.

We will use a weaker version of this statement, with $\lambda > 0$, and introduce new notation:

1) best distance($P(x)$) := the $\Delta(Q_{\lambda_0})$ corresponding to this $P(x)$.

2) best polynomial($P(x)$) := one of the (in general, non-unique) polynomials $Q_{\lambda_i}$ that achieve the best distance with respect to the maximum norm.

(It is easy to check that both exist for any nontrivial polynomial.)


Now let us consider the following equivalence relation on the set of polynomials of degree $\le n$ having at least one nonzero coefficient:
$P(x) \sim Q(x) \iff P(x) = \lambda Q(x)$ for some $\lambda \in \mathbb{R}^{>0}$.

On the one hand, polynomials are considered equivalent if the vectors of their coefficients lie on the same ray with initial point zero.

On the other hand, polynomials are considered equivalent if their $Q_{\lambda_0}$ from the previous problem (and also their best polynomials) are the same.

Now if we consider the point zero in $\mathbb{R}^{n+1}$, in each direction there is a unique equivalence class, corresponding to a unique best distance and a unique best polynomial.

Let $Bp$ be the set of best polynomials (a subset of $\mathbb{R}^{n+1}$) and $Bd$ the set of best distances (a subset of $\mathbb{R}^1$).

It would be natural to consider these maps: $$(P_n(x) / \sim ) \xrightarrow{F} S^{n} \xrightarrow{G} Bp \xrightarrow{H} Bd$$

Note that $S^{n}$ is compact. So if $G$ and $H$ are continuous, then $Bd$ is compact, and we have proved the initial theorem about the polynomial of best approximation for all $n \in \mathbb{N}$.
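A minimal numerical sketch of the maps $G$ and $H$, assuming numpy/scipy (the target $f$, the interval, the grid, and the chosen direction are illustrative placeholders):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Placeholder data: f and [a, b] are illustrative choices.
f = np.cos
a, b = 0.0, 3.0
xs = np.linspace(a, b, 2001)  # dense grid standing in for [a, b]

def Delta(coeffs):
    """Delta(P): sup over the grid of |f(x) - P(x)|, coefficients given
    highest degree first, as np.polyval expects."""
    return np.max(np.abs(f(xs) - np.polyval(coeffs, xs)))

def best_on_ray(direction):
    """For a unit coefficient vector (a point of S^n), compute the best
    distance H(G(direction)) = min over lambda > 0 of Delta(lambda * P),
    together with the best polynomial lambda_0 * P itself (the map G)."""
    direction = np.asarray(direction)
    res = minimize_scalar(lambda lam: Delta(lam * direction),
                          bounds=(1e-9, 1e3), method="bounded")
    return res.fun, res.x * direction

# Example: the ray through the monomial x^2 (coefficients highest first).
dist, poly = best_on_ray([1.0, 0.0, 0.0])
print("best distance on this ray:", dist)
print("best polynomial coefficients:", poly)
```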

Answer (score -1)

I also do not see how to prove the induction step, but it seems easier to instead prove the existence of a polynomial of best approximation for all natural degrees directly. One possible way to follow the principles mentioned by @ChristianBlatter is below.


$\alpha, \beta \in \mathbb{R}^{n+1}$

$P(\alpha) = \alpha_0 + \alpha_1 x + \dots + \alpha_n x^n$

We can introduce a metric $d$ on the bounded functions on $[a,b]$: $$d(f,g) = \sup_{a \le x \le b} |f(x) - g(x)|$$

Denote by $U^{\delta}_{d}(\alpha)$ the $\delta$-neighborhood ($\delta \in \mathbb{R}^{>0}$) of a point $\alpha \in \mathbb{R}^{n+1}$ with respect to the metric $d$.

Consider the triangle inequality for $d$: $$d(f, P(\alpha)) \le d(f, P(\beta)) + d(P(\alpha), P(\beta)) \tag{1}$$ $$\Rightarrow |d(f, P(\alpha)) - d(f, P(\beta))| \le d(P(\alpha), P(\beta)) \tag{2}$$

Assume that we have chosen some fixed $\alpha$, and set $\Delta(\alpha) := \sup_{x \in [a,b]} |f(x) - P(\alpha)(x)| = d(f, P(\alpha))$. The left-hand side of the last inequality is bounded above by the oscillation of the function $\Delta$ on the $d(\alpha, \beta)$-neighborhood of the point $\alpha$, or shortly: it is $\le \omega_{\Delta}\big( U^{d(\alpha, \beta)}_{d}(\alpha) \big)$. In other words, given $P(\beta)$ close enough to $P(\alpha)$, their images under $\Delta$ become as close as we want.
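One can make this quantitative (a small added step, not in the original answer): for coefficient vectors $\alpha, \beta$,

$$d\big(P(\alpha), P(\beta)\big) = \sup_{a \le x \le b} \Big| \sum_{i=0}^{n} (\alpha_i - \beta_i)\, x^i \Big| \le \Big( \sum_{i=0}^{n} \max(|a|,|b|)^i \Big) \max_{i} |\alpha_i - \beta_i|,$$

so $\Delta$ is in fact Lipschitz as a function of the coefficient vector.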

That is why $\Delta$ is a continuous map from $\mathbb{R}^{n+1}$ to $\mathbb{R}$. A continuous function on a set is continuous on every subset, so our next goal is to construct a compact subset $S \subset \mathbb{R}^{n+1}$ (each point representing a polynomial of degree $\le n$) such that for every $x \in \mathbb{R}^{n+1} \setminus S$ there exists $y \in S$ with $\Delta(y) \le \Delta(x)$ (this condition means that if the minimum of $\Delta$ exists, it is certainly attained in $S$).

Let $S$ be the closed zero-centered ball in $\mathbb{R}^{n+1}$, bounded by the sphere of radius $r_0 > 2\sup_{x \in [a,b]} |f(x)|$ in the metric $d$. Clearly, for every polynomial $P$ outside $S$ we have $d(f, P) > \sup_{x \in [a,b]} |f(x)| = d(f, 0)$, as spelled out below. In other words, the zero polynomial is closer to $f$ than any polynomial outside $S$. $\implies$ If the minimum of $\Delta$ exists, it is attained in $S$!
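Spelling out the step via the reverse triangle inequality: for any polynomial $P$ with $d(0, P) > r_0$,

$$d(f, P) \ge d(0, P) - d(0, f) > r_0 - \sup_{x \in [a,b]} |f(x)| > \sup_{x \in [a,b]} |f(x)| = d(f, 0),$$

using $r_0 > 2\sup_{x \in [a,b]} |f(x)|$ in the last strict inequality.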

$S$ is compact (as a set of coefficient vectors it is closed and bounded, since all norms on the finite-dimensional coefficient space are equivalent), and $\Delta$ is continuous $\implies$ the minimum exists.

q.e.d.