2

The infinite-length sequence $x_1[n]$ defined by $$x_1[n]= \begin{cases} \dfrac{1}{n} & \text{if } n \geq 1,\\ 0 & \text{if } n \leq 0, \end{cases}$$ has an energy equal to
$$\mathcal{E}_{x_1} = \sum^\infty_{n=1}\left(\dfrac{1}{n}\right)^{2},$$
which converges to $\pi^2/6$, indicating that $x_1[n]$ has finite energy.
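
To convince myself numerically, I also ran a small Python/NumPy sketch of the partial sums (my own check, not from the text), and they do creep toward $\approx 1.6449$:

```python
# Partial sums of the energy sum_{n>=1} (1/n)^2, compared against pi^2/6.
import numpy as np

target = np.pi**2 / 6  # ~1.6449340668

for N in (10, 100, 10_000, 1_000_000):
    partial = np.sum(1.0 / np.arange(1, N + 1, dtype=float)**2)
    print(f"N = {N:>9}: partial energy = {partial:.10f}   (pi^2/6 = {target:.10f})")
```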

I don't see where the value $\pi^2/6$ comes from. It would be great if anyone could help me out.

  • (2) http://math.stackexchange.com/q/8337/1242 (2010-12-17)
  • (0) @Qiaochu: I see that you edited the tags. You don't agree that it is a dupe? (2010-12-18)
  • (1) @Moron: no. I think the wording of this question in terms of signal processing invites the Fourier-theoretic answer in a way that the question "how do I compute zeta(2)" does not (see in particular the end of George S.'s answer). (2010-12-18)
  • (0) @Qiaochu: I disagree that that is reason enough to keep it open. We could always add an answer with the new point of view to the original. Besides, all the answers so far have already appeared in the original. Anyway... (2010-12-18)
  • (0) btw, the question specifically asks how the $\pi^2/6$ value comes about. (2010-12-18)

2 Answers

4

The sum of the series $\displaystyle\sum^\infty_{n=1}\left(\dfrac{1}{n}\right)^{2}=\dfrac{\pi^2}{6}$ is a classical result due to Euler. Several proofs are given in the answers to this question.


PS. Here Robin Chapman collects 14 proofs.


PPS. The improper double integral $$\int_{0}^{1}\int_{0}^{1}\left(\dfrac{1}{1-xy}\right) \mathrm{d}x\mathrm{d}y=\int_{0}^{1}\int_{0}^{1}\left(\sum_{n=1}^{\infty }\left( xy\right)^{n-1}\right) \mathrm{d}x\mathrm{d}y=\sum^\infty_{n=1}\dfrac{1}{n^2} =\dfrac{\pi^2}{6}=\zeta(2)$$ is finite, as pointed out in Proofs from THE BOOK by Martin Aigner and Günter Ziegler. The original article by Tom Apostol is here.
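
If it helps to see this numerically, here is a small sketch (assuming Python with SciPy; truncating the upper limits just below $1$ is my own device to sidestep the integrable singularity at $(1,1)$, in the spirit of the limiting argument in the comments below):

```python
# Truncated double integral of 1/(1 - x*y) over [0, 1-eps]^2, approaching pi^2/6 as eps -> 0.
import numpy as np
from scipy.integrate import dblquad

target = np.pi**2 / 6

for eps in (1e-1, 1e-2, 1e-3, 1e-4):
    upper = 1.0 - eps
    value, _abserr = dblquad(
        lambda y, x: 1.0 / (1.0 - x * y),  # integrand, called as f(y, x)
        0.0, upper,                        # outer (x) limits
        lambda x: 0.0, lambda x: upper,    # inner (y) limits
    )
    print(f"eps = {eps:.0e}: integral = {value:.6f}   (pi^2/6 = {target:.6f})")
```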

  • (0) In one of the proofs I've got $\sum^\infty_{n=1}(xy)^{n-1}=1/(1-xy)$; can you explain this to me? (2010-12-17)
  • (0) @Fulwig: there are many ways to understand that identity, but I think the easiest is to note that the LHS is an infinite geometric series. Wikipedia has more: http://en.wikipedia.org/wiki/Geometric_series (2010-12-17)
  • (0) Well, I do know that $\sum^\infty_{k=0}r^k =\dfrac{1}{1-r}$, but this is valid for $|r| < 1$. Does it mean that $xy < 1$, or am I missing something? (2010-12-17)
  • (0) @Fulwig: For $x<1$ and $y<1$ the series $\sum_{n=1}^{\infty}(xy)^{n-1}$ is convergent. For $x=y=1$, my understanding is that $\underset{\varepsilon \rightarrow 0}{\lim }\underset{\delta \rightarrow 0}{\lim }\ \int_{0}^{1-\varepsilon }\int_{0}^{1-\delta }\left( \sum_{n=1}^{\infty}\left( xy\right)^{n-1}\right) \mathrm{d}x\mathrm{d}y$ is a convergent improper double integral. (2010-12-17)
  • (0) Thanks a lot for all your help! (2010-12-17)
  • (0) @Fulwig: Not at all! (2010-12-17)
  • (0) A proof using the integral from the Apostol article appeared earlier as an exercise in a number theory text by LeVeque: http://books.google.com/books?id=ocAySqjVLeEC&lpg=PR1&pg=PA122#v=onepage&q&f=false (2010-12-18)
  • (0) Thanks for the information. In the article Tom Apostol says: "This evaluation has been presented by the author for a number of years in elementary calculus courses, but does not seem to be recorded in the literature." (2010-12-18)
3

Since you are into signal processing, you might like the proof using Parseval's theorem. I have paraphrased proof #4 from Robin Chapman's collection of proofs into signal-processing language.

Let $e_n = e^{2\pi inx}$, where $n \in \mathbb Z$. Let $f(x) = x$ on the interval $[0,1]$, and compute its Fourier series

$$ f (x) = \sum_n a_n e^{2\pi inx}.$$

Now, in your terms, Parseval's theorem means that the energy computed in the time domain is identical to the energy computed in the frequency domain. To compute the energy in the time domain, we integrate the square of the absolute value of the function; to compute the energy in the frequency domain, we sum the squares of the absolute values of the Fourier coefficients. So,

$$ \int_0^1 x^2 \,\mathrm{d}x = \sum_{n \in \mathbb Z} |a_n|^2.$$

As R. Chapman remarks, the left-hand side is $1/3$, and we have $a_0 = 1/2$ and $|a_n| = \dfrac{1}{2\pi |n|}$ for $n \neq 0$. So the above simplifies to

$$ \frac{1}{3} = \frac{1}{4} + \sum_{n\in \mathbb Z,\, n \neq 0} \frac{1}{4\pi^2 n^2},$$

from which $\sum_{n \geq 1} \dfrac{1}{n^2} = 2\pi^2\left(\dfrac{1}{3}-\dfrac{1}{4}\right) = \dfrac{\pi^2}{6}$ follows.
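
As a numerical cross-check (a small Python/NumPy sketch of my own, truncating the coefficient sum at a large $N$):

```python
# Parseval check for f(x) = x on [0, 1]:
#   time-domain energy:      integral_0^1 x^2 dx = 1/3
#   frequency-domain energy: |a_0|^2 + sum_{n != 0} |a_n|^2,  with a_0 = 1/2, |a_n| = 1/(2*pi*|n|)
import numpy as np

N = 1_000_000                      # truncation of the symmetric coefficient sum
n = np.arange(1, N + 1, dtype=float)

time_energy = 1.0 / 3.0
freq_energy = 0.25 + 2.0 * np.sum(1.0 / (4.0 * np.pi**2 * n**2))

print(f"time-domain energy      = {time_energy:.8f}")
print(f"frequency-domain energy = {freq_energy:.8f}")

# Rearranging 1/3 = 1/4 + (1/(2*pi^2)) * sum_{n>=1} 1/n^2 isolates the series:
print(f"2*pi^2*(1/3 - 1/4) = {2.0 * np.pi**2 * (1.0/3.0 - 0.25):.8f}")
print(f"pi^2/6             = {np.pi**2 / 6:.8f}")
```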


Incidentally, note that the explicit calculation was not necessary to prove finite energy. A much simpler way is to note that your signal is bounded by a constant multiple of the Fourier coefficients of some other signal, and then observe that the energy of that signal, as seen in the time domain, is finite.
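
Spelling that bound out (my own filling-in of the step, using the coefficients computed above): since $x_1[n] = \dfrac{1}{n} = 2\pi |a_n|$ for $n \geq 1$,

$$\mathcal{E}_{x_1} = \sum_{n\geq 1}\frac{1}{n^{2}} = 4\pi^{2}\sum_{n\geq 1}|a_n|^{2} \leq 4\pi^{2}\sum_{n\in\mathbb Z}|a_n|^{2} = 4\pi^{2}\int_{0}^{1}x^{2}\,\mathrm{d}x = \frac{4\pi^{2}}{3} < \infty.$$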

  • (0) @Américo Tavares: Your pat on the back makes me very happy, seeing that you are an electrical engineer. (2010-12-17)
  • (0) That's right! This ( http://math.stackexchange.com/questions/7924/how-can-i-interpret-energy-in-signals/7943#7943 ) is an answer of mine to a question on "energy" in signals. (2010-12-17)