
What exactly is the variation of a function? Is it a distance or an element of some space? The $p$-th variation of a real-valued function $f\colon [0,t] \to \mathbb{R}$ is defined as follows: given a partition $\pi = \{0=t_0,t_1,\cdots , t_n=t\}$ of $[0,t]$, $$ V^p(f)=\lim_{\Vert \pi\Vert \to 0}\sum_{i=1}^n \vert f(t_i)-f(t_{i-1})\vert^p. $$

Now say we have $g\colon [0,t] \to \mathcal{X}$, where $\mathcal{X}$ is some complete normed linear space. It seems natural to define the variation of $g$ over $[0,t]$ via the sums $$ Z_n=\sum_{i=1}^n \Vert g(t_i)-g(t_{i-1})\Vert^p, $$ where $\Vert\cdot\Vert$ is the norm of $\mathcal{X}$.
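To make the definition concrete, here is a minimal numerical sketch (the helper `p_variation_sum` and the example curve are illustrative names of my own, not standard): it evaluates the sums $Z_n$ for a smooth curve $g\colon[0,1]\to\mathbb{R}^2$ under the Euclidean norm, so you can watch the $p=1$ sums stabilise at the arclength while the $p=2$ sums shrink to $0$.

```python
import numpy as np

def p_variation_sum(g, partition, p, norm=np.linalg.norm):
    """Z_n = sum_i ||g(t_i) - g(t_{i-1})||^p over the given partition."""
    values = [g(t) for t in partition]
    return sum(norm(values[i] - values[i - 1]) ** p
               for i in range(1, len(values)))

# Example: the smooth curve g(t) = (t, t^2) in R^2 (an arbitrary choice).
g = lambda t: np.array([t, t ** 2])

for n in (10, 100, 1000):
    partition = np.linspace(0.0, 1.0, n + 1)      # uniform mesh 1/n
    print(n,
          p_variation_sum(g, partition, p=1),     # -> arclength of g
          p_variation_sum(g, partition, p=2))     # -> 0 for smooth g
```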

If it exists, what is the total variation in this case? Is it $\lim_{\Vert\pi\Vert \to 0} Z_n$, i.e. a value in $[0,\infty]$, or is it some $x \in \mathcal{X}$ such that $\Vert Z_n - x \Vert \to 0$? For example, the quadratic variation of the Wiener process $W(s)$ over $[0,t]$ is $t$. Does this mean that

$$ V^2(W)=\lim_{\Vert \pi\Vert \to 0}\sum_{i=1}^n \vert W(t_i)-W(t_{i-1})\vert^2 = t $$ with the limit taken in the $L^2(\Omega)$ norm, or does it mean that the sums converge to some element $X \in L^2(\Omega)$?

1 Answer


Consider the case where $W$ is a standard Brownian motion. Given a partition $\pi = \{ 0 = t_0 < t_1 < \cdots < t_n = t\}$ of the interval $[0,t]$, define $$ [W]^\pi = \sum\limits_{i = 1}^n {[W(t_i ) - W(t_{i - 1} )]^2 }. $$ Let $|\pi | = \max _i (t_i - t_{i - 1} )$ denote the mesh of the partition. As is well known, given a sequence of partitions $\pi_n$ with $|\pi_n| \to 0$ as $n \to \infty$, $[W]^{\pi _n } \to t$ in the $L^2$ norm, that is, ${\rm E}\big[([W]^{\pi _n } - t)^2 \big] \to 0$. Indeed, since the increments $W(t_i) - W(t_{i-1})$ are independent centered Gaussians with variance $t_i - t_{i-1}$, a direct computation gives ${\rm E}\big[([W]^{\pi} - t)^2\big] = 2\sum_{i=1}^n (t_i - t_{i-1})^2 \le 2|\pi| t$. Hence, in particular, $[W]^{\pi _n } \to t$ in probability, and thus, by definition, the quadratic variation of $W$ is given by $[W]_t = t$. The key point here is that the limit $t$ on the right-hand side of $[W]^{\pi _n } \to t$ plays the role of a random variable, rather than of a constant. Of course, in this particular example, $t$ is also a constant, but in general the limit is a random variable, and it is defined using convergence in probability.
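As a rough Monte Carlo check on the $L^2$ convergence (a sketch only; the horizon, sample size, and seed are arbitrary choices of mine), one can simulate the Brownian increments directly and compare the empirical $L^2$ error with the theoretical value $\sqrt{2}\, t/\sqrt{n}$ for a uniform partition of $[0,t]$:

```python
import numpy as np

rng = np.random.default_rng(0)
t, n_paths = 1.0, 10_000                  # horizon and number of sample paths

for n in (10, 100, 1000):                 # partition sizes; mesh = t/n
    dt = t / n
    # Increments W(t_i) - W(t_{i-1}) ~ N(0, dt), one row per sample path.
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n))
    qv = (dW ** 2).sum(axis=1)            # [W]^pi along each path
    l2_error = np.sqrt(np.mean((qv - t) ** 2))
    # Theory: E[([W]^pi - t)^2] = 2 * sum_i dt^2 = 2 t^2 / n, so the
    # printed error should be close to sqrt(2) * t / sqrt(n).
    print(f"n={n:5d}  empirical L2 error = {l2_error:.4f}")
```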

EDIT: A much simpler yet instructive example is given by the Poisson process $N$. It is readily seen that the variation of $N$ is given by $$ \mathop {\lim }\limits_{|\pi _n | \to 0} \sum\limits_{i = 1}^n {|N(t_{i} ) - N(t_{i-1} )|} = N(t), $$ and the quadratic variation of $N$ is given by $$ \mathop {\lim }\limits_{|\pi _n | \to 0} \sum\limits_{i = 1}^n {[N(t_{i} ) - N(t_{i-1} )]^2} = N(t). $$ You may wish to consider different modes of convergence, as an exercise.
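A quick simulation of the Poisson example (again a sketch; the rate and seed are arbitrary) makes the mechanism visible: the first-variation sum telescopes to $N(t)$ for every partition, since $N$ is nondecreasing, while the quadratic sum exceeds $N(t)$ until the mesh is finer than the smallest gap between jump times, after which every increment is $0$ or $1$ and both sums equal $N(t)$ exactly.

```python
import numpy as np

rng = np.random.default_rng(1)
t, lam = 1.0, 5.0                                # horizon and jump intensity

# Jump times of a rate-lam Poisson process on [0, t].
n_jumps = rng.poisson(lam * t)
jumps = np.sort(rng.uniform(0.0, t, size=n_jumps))
N = lambda s: np.searchsorted(jumps, s, side="right")  # N(s) = #{jumps <= s}

for n in (10, 100, 10_000):                      # refine the partition
    grid = np.linspace(0.0, t, n + 1)
    dN = np.diff(N(grid))                        # increments N(t_i) - N(t_{i-1})
    print(n, np.abs(dN).sum(), (dN ** 2).sum(), "target:", n_jumps)
```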

  • Thanks, this clears things up. I wonder why convergence in probability is chosen rather than convergence in $L^2(\Omega)$; I thought it would seem natural to use $L^2$ convergence, since this is the norm of the value space. (2010-12-30)
  • I'll think about that. One reason may be that it is simpler to prove convergence in probability. (2010-12-30)
  • I realised we can't be sure that $(X(t_{i}) - X(t_{i-1}))^2 \in L^2(\Omega)$, so the quadratic variation may not always exist in $L^2(\Omega)$. For the Wiener process the quadratic variation exists in $L^2$, but there may be other processes where it doesn't. Thinking about it, if we have a sequence converging in probability, then we can find a subsequence converging a.s., so defining quadratic variation as a limit in probability allows us to consider a larger class of processes. (2010-12-31)
  • Indeed, you indicated a good reason. (2010-12-31)