
So suppose I want a path from 0 to $c>0$ on the real line, and I am going to use the function $S(t)$ to get there in (discrete) time $T$. That is, my position at time 0 is 0, my position at time $T$ is $c$, and my position $P_t$ changes in the following way:

$$ P_t = (1-\gamma) P_{t-1} + \gamma S(t), $$ where $\gamma<1$ limits how fast I can move. This can be rewritten as $$ \frac{P_t - P_{t-1}}{\gamma} = - P_{t-1} + S(t). $$
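To make the dynamics concrete, here is the recursion as a few lines of Python (just a sketch; the push schedule and $\gamma = 0.5$ below are arbitrary illustrative values):

```python
# Sketch of the position recursion P_t = (1 - gamma) * P_{t-1} + gamma * S(t).
# The push schedule and gamma = 0.5 are arbitrary illustrative choices.

def simulate(pushes, gamma, p0=0.0):
    """Return positions P_0, P_1, ..., P_T for a given push schedule S(1), ..., S(T)."""
    positions = [p0]
    for s in pushes:
        positions.append((1 - gamma) * positions[-1] + gamma * s)
    return positions

print(simulate([2.0, 0.0, 0.0], gamma=0.5))  # [0.0, 1.0, 0.5, 0.25] -- the drift pulls back towards 0
```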

Sadly, I face a drift of $-P_{t-1}$ pulling me towards zero, and $S$ has to drive me away from zero, towards $c$. So $S(t)$ is the "size" of my push towards $c$ at date $t$. The cost of $S$ is given by

$$ C(T,S(\cdot)) = \sum_{t=1}^{T} S(t)^2, $$ so I am penalized for big pushes towards $c$ (and hence for large movements in my position). I would like to calculate $$ \min_{T,\,S(\cdot)} C(T,S(\cdot)) $$ subject to $P_0 = 0$, $P_T = c$.

That is, if I am free to choose the number of steps it takes to get there, and the path can be whatever I want, what is the cost-minimizing route from 0 to $c$?
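To make the objective concrete, the cost of a schedule is just the sum of squared pushes; e.g. in Python (illustrative only; the two schedules shown are the one- and two-step examples worked out below, for $\gamma = 0.5$, $c = 1$):

```python
# Cost of a push schedule: the sum of squared pushes (illustrative helper).
def cost(pushes):
    return sum(s ** 2 for s in pushes)

print(cost([2.0]))        # 4.0 -- the one-step schedule for gamma = 0.5, c = 1
print(cost([0.8, 1.6]))   # 3.2 -- the optimal two-step schedule worked out below
```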

Clearly, I could set $S(1) = c / \gamma$, so I arrive immediately, at cost $( c/\gamma )^2$. So that is an upper bound on the cost. I could also do it in two steps: if I step $x$ in the first period, I will need to step $$\frac{c-(1-\gamma) \gamma x}{\gamma}$$ in the second period, for a total cost of $$ x^2 + \left(\frac{c-(1-\gamma) \gamma x}{\gamma}\right)^2, $$ which is minimized at $x=\frac{c (1-\gamma)}{\gamma (2-2 \gamma+\gamma^2 )}$ and therefore costs $$ \frac{c^2 (1-\gamma)^2}{\gamma^2 (2-2 \gamma+\gamma^2)^2}+\frac{1}{\gamma^2}\left(c+\frac{c (1-\gamma) (-1+\gamma)}{2-2 \gamma+\gamma^2}\right)^2 = \frac{c^2}{\gamma^2 (2-2\gamma+\gamma^2)}, $$ which is lower than the one-step cost $c^2/\gamma^2$, since $2-2\gamma+\gamma^2 > 1$ for $\gamma<1$. It is not clear to me how to extend this analysis to larger $T$ -- is there a simple recursive form for the cost-minimizing $k$-step path, so that the problem reduces to choosing the best $k$?
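A quick numerical check of the one- and two-step costs above (illustrative values $\gamma = 0.5$, $c = 1$):

```python
# Check the one-step and two-step costs derived above, for gamma = 0.5, c = 1.
gamma, c = 0.5, 1.0

cost_1 = (c / gamma) ** 2                                     # one step: push c / gamma

x = c * (1 - gamma) / (gamma * (2 - 2 * gamma + gamma ** 2))  # optimal first push
s2 = (c - (1 - gamma) * gamma * x) / gamma                    # second push needed to land on c
cost_2 = x ** 2 + s2 ** 2

print(cost_1)                                                 # 4.0
print(cost_2)                                                 # 3.2
print(c ** 2 / (gamma ** 2 * (2 - 2 * gamma + gamma ** 2)))   # 3.2 -- the simplified two-step cost
```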

1 Answer


For fixed $T$ (or $k$, which is the same thing), it turns out that you are just minimizing a quadratic function subject to a linear constraint, and a simple closed-form solution exists. However, there is no best finite $k$: the cost keeps decreasing as you allow more steps, approaching (but never reaching) a positive limit.

Let $s$ be the vector with components $s_i = S(i)$ for $i = 1, \ldots, k$. Expanding out the recursion in your first equation, $P_k = \gamma s_k + (1-\gamma) P_{k-1}$, you will find that $$\begin{align} P_k &= \sum_{i=1}^k \gamma(1-\gamma)^{k-i} s_i + (1-\gamma)^k P_0 \\ &= b^Ts + 0, \end{align}$$ where $b$ is the vector with components $b_i = \gamma(1-\gamma)^{k-i}$. So your problem is that of minimizing $C(s) = \sum_{i=1}^k s_i^2 = s^Ts$ subject to the constraint $b^Ts = c$. Using Lagrange multipliers, the minimizer is $s = \frac{c}{b^Tb}\, b$, with cost $$ \frac{c^2}{b^Tb} = \frac{c^2}{\gamma^2 \sum_{j=0}^{k-1}(1-\gamma)^{2j}} = \frac{c^2 (2-\gamma)}{\gamma \left(1-(1-\gamma)^{2k}\right)}. $$ This decreases monotonically in $k$ and tends to $c^2(2-\gamma)/\gamma$ as $k \to \infty$, so every extra step lowers the cost a little and no finite $k$ is best.
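Here is a small numerical sketch of that closed-form solution (illustrative code, with $\gamma = 0.5$ and $c = 1$):

```python
# Minimal k-step cost: minimize s^T s subject to b^T s = c, which gives
# s* = c * b / (b^T b) and cost c^2 / (b^T b).  gamma = 0.5, c = 1 are illustrative.
gamma, c = 0.5, 1.0

def min_cost(k):
    b = [gamma * (1 - gamma) ** (k - i) for i in range(1, k + 1)]
    return c ** 2 / sum(bi * bi for bi in b)

for k in (1, 2, 5, 20):
    print(k, min_cost(k))            # 4.0, 3.2, 3.0029..., then essentially 3.0

print(c ** 2 * (2 - gamma) / gamma)  # 3.0 -- the k -> infinity limit
```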

  • Great answer, I didn't see the essential linearity of the constraint. (2010-11-02)