
I found a contradiction I couldn't resolve by myself. It's about "uniform white noise".

Let ${x}_{t}$ be a white-noise i.i.d. random process:

$ \forall t \in \mathbb{R}, \ {x}_{t} \sim U[-1, \ 1] $

If we go by the PSD definition of white noise (constant over all frequencies), we get:

$ {R}_{xx}( \tau ) = var({x}_{t}) \delta ( \tau ) $
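
For reference, the delta comes from the Wiener–Khinchin relation: if the PSD is some constant level over all frequencies, call it $\sigma^{2}$, then

$ {S}_{xx}(f) = \int_{-\infty}^{\infty} {R}_{xx}(\tau) {e}^{-j 2 \pi f \tau} \, d\tau = \sigma^{2} \quad \Longleftrightarrow \quad {R}_{xx}(\tau) = \int_{-\infty}^{\infty} \sigma^{2} {e}^{j 2 \pi f \tau} \, df = \sigma^{2} \delta(\tau), $

so ${R}_{xx}(0)$ is not finite.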

Yet, clearly:

$ E[{x}_{t} {x}_{t + \tau}] \big|_{\tau = 0} = E[{x}_{t}^{2}] = \frac{1}{3} $
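
Here the $1/3$ is just the variance of ${\rm U}[-1,1]$:

$ E[{x}_{t}] = 0, \qquad E[{x}_{t}^{2}] = var({x}_{t}) = \frac{(1 - (-1))^{2}}{12} = \frac{1}{3} $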

Intuitively, a process with bounded values and bounded variance can't be white noise.
Please note this is a continuous-time random process; we don't have this problem in the discrete-time case.

What am I missing here? Either there is no such white noise (why?), or there is a good explanation (could someone derive it mathematically?) of how the delta shows up in the variance.

Thanks.

1 Answer


Apparently, saying that $(x_t)$ is a continuous white noise process simply means that $(x_t)$ is a continuous-time process, that is, $(x_t)$ is indexed by a continuous parameter set. The sample paths of $(x_t)$ are not assumed to be continuous, and in fact should be expected to be discontinuous at every fixed point (almost surely). Indeed, consider a continuous-time Gaussian white noise process. Then $E[x_s x_t]=0$ for $s \neq t$ implies that $x_s$ and $x_t$ are independent, and hence the sample paths must be discontinuous at every point (almost surely). The case of "uniform white noise" is essentially the same.
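
To spell out the independence step used above: for zero-mean, jointly Gaussian $x_s, x_t$ with standard deviations $\sigma_s, \sigma_t$ and correlation coefficient $\rho$, the joint density factorizes exactly when $\rho = 0$,

$ f_{{x}_{s},{x}_{t}}(u,v) = \frac{1}{2 \pi {\sigma}_{s} {\sigma}_{t} \sqrt{1 - {\rho}^{2}}} \exp\!\left( - \frac{1}{2(1 - {\rho}^{2})} \left[ \frac{{u}^{2}}{{\sigma}_{s}^{2}} - \frac{2 \rho u v}{{\sigma}_{s} {\sigma}_{t}} + \frac{{v}^{2}}{{\sigma}_{t}^{2}} \right] \right) \overset{\rho = 0}{=} f_{{x}_{s}}(u) \, f_{{x}_{t}}(v), $

so for a Gaussian process, uncorrelated samples are independent.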

  • Hello Shai. I don't care about how smooth the paths are. "Continuous" as I meant it means $t$ can take any real value. I want to know how to explain the contradiction between calculating the variance using the PSD and calculating it using the distribution of the variable at a given time. (2010-12-14)
  • By the PSD definition (constant over all frequencies), $E[X_t X_t] = \text{variance} \cdot \delta(0) \to$ infinite energy. Yet using the uniform distribution you get $1/3$, with no delta: finite energy. (2010-12-14)
  • No, there's a big difference. Using the delta notation means it has an infinite amount of variance. A function $f(x)$ which equals $1$ at $x = 0$ and zero everywhere else isn't a delta. A delta means that at zero its value is infinite. You are using the delta like an indicator function, where it means something else. (2010-12-14)
  • I now see your point. (2010-12-14)
  • So probably you cannot assume $X_t \sim U[-1,1]$. Maybe the following example can help, in the setting of Gaussian white noise. If $Z_1,\ldots,Z_n$ are i.i.d. ${\rm N}(0,1)$, then their sum is ${\rm N}(0,n)$, which has mean zero but variance which tends to $\infty$. (2010-12-14)
  • What I mean is that probably $X_t$ can theoretically be viewed as some limit of $X_t^n$ as $n \to \infty$, where each $X_t^n$ has zero mean, but variance tending to $\infty$ (see the numerical sketch after this thread). (2010-12-14)
  • That's what I'm looking for: someone who could explain it rigorously. (2010-12-14)
  • At least you should now realize that there is no contradiction, since $X_t$ is not ${\rm U}[-1,1]$. (2010-12-14)
  • That's what I'm looking for: someone to show me why, if I started from a r.v. which is $U[-1,1]$, we get an $X_t$ which is not. (2010-12-15)
  • I think there is a mistake in this answer. $E[x_s x_t]=0$ for $s \neq t$ is a much weaker property than independence. Independence requires that $P(x_t) = P(x_t \mid x_s)$, doesn't it? (2014-10-28)