So suppose I want a path from 0 to $c>0$ on the real line, and I am going to use the function $S(t)$ to get there in (discrete) time $T$. That is, my position at time 0 is 0, my position at time $T$ is $c$, and my position $P_t$ changes in the following way:
$$ P_t = (1-\gamma) P_{t-1} + \gamma S(t) $$ where $\gamma<1$ limits how fast I can move. This can be rewritten as $$ \frac{P_t - P_{t-1}}{\gamma} = - P_{t-1} + S(t). $$
Sadly, the $-P_{t-1}$ term gives me a drift towards zero, and $S$ has to drive me away from zero, towards $c$. So $S(t)$ is the "size" of my push towards $c$ at date $t$. The cost of $S$ is given by
$$ C(T,S(\cdot)) = \sum_{t=1}^{T} S(t)^2. $$ So I am penalized for big pushes towards $c$, and hence for large movements in my position. I would like to calculate $$ \min_{T,S(\cdot)} C(T,S(\cdot)) $$ subject to $P_0 = 0$, $P_T = c$.
That is, if I am free to choose the number of steps it takes to get there, and the path can be whatever I want, what is the cost-minimizing route from $0$ to $c$?
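For concreteness, here is a minimal Python sketch of the setup (the function names and the choices $c=1$, $\gamma=0.5$ are my own, not part of the problem):

```python
def terminal_position(pushes, gamma):
    """Apply P_t = (1 - gamma) * P_{t-1} + gamma * S_t from P_0 = 0 and return P_T."""
    P = 0.0
    for S in pushes:
        P = (1.0 - gamma) * P + gamma * S
    return P

def cost(pushes):
    """C(T, S) = sum of squared pushes, with T = len(pushes)."""
    return sum(S * S for S in pushes)

# Made-up example: a three-step plan that happens to fall short of c = 1.
c, gamma = 1.0, 0.5
pushes = [1.5, 0.7, 0.4]
print(terminal_position(pushes, gamma), cost(pushes))  # ~0.56 and ~2.9
```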
Clearly, I could set $S(1) = c / \gamma$ and arrive immediately, at cost $( c/\gamma )^2$, so that is an upper bound on the cost. I could also do it in two steps: if I push $x$ in the first period, I will need to push $$\frac{c-(1-\gamma) \gamma x}{\gamma}$$ in the second period, for a total cost of $$ x^2 + \left(\frac{c-(1-\gamma) \gamma x}{\gamma}\right)^2, $$ which is minimized at $x=\frac{c (1-\gamma)}{\gamma (2-2 \gamma+\gamma^2 )}$ and therefore costs $$ \frac{c^2 (1-\gamma)^2}{\gamma^2 (2-2 \gamma+\gamma^2)^2}+\frac{c^2}{\gamma^2 (2-2 \gamma+\gamma^2)^2} = \frac{c^2}{\gamma^2 (2-2 \gamma+\gamma^2)}, $$ a lower cost than the one-step option, since $2-2 \gamma+\gamma^2 = 1+(1-\gamma)^2 > 1$ when $\gamma<1$. It is not clear to me how to extend this analysis to larger $T$: is there a simple recursive form for the cost-minimizing $k$-step path, so that the problem reduces to choosing the best $k$?
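For what it is worth, here is a numerical sanity check one could run (Python with NumPy/SciPy; the helper names and the choices $c=1$, $\gamma=0.5$ are mine). For a fixed $T$ it minimizes $\sum_t S(t)^2$ subject to $P_T=c$ with an off-the-shelf constrained optimizer, and it reproduces the one-step and two-step costs above:

```python
import numpy as np
from scipy.optimize import minimize

def best_cost(T, c, gamma):
    """Minimize sum(S_t^2) over T pushes subject to P_T = c, starting from P_0 = 0."""
    def terminal(S):
        P = 0.0
        for s in S:
            P = (1.0 - gamma) * P + gamma * s
        return P
    res = minimize(lambda S: np.sum(S ** 2),
                   x0=np.full(T, c / (T * gamma)),   # arbitrary starting guess
                   method="SLSQP",
                   constraints=[{"type": "eq", "fun": lambda S: terminal(S) - c}])
    return res.fun

c, gamma = 1.0, 0.5
print(best_cost(1, c, gamma))                        # one-step cost (c/gamma)^2 = 4
print(best_cost(2, c, gamma))                        # ~3.2
print(c**2 / (gamma**2 * (2 - 2*gamma + gamma**2)))  # two-step formula above, also 3.2
print([round(best_cost(T, c, gamma), 3) for T in range(1, 8)])
```

Numerically, the minimal cost appears to keep falling as $T$ grows, consistent with the two-step plan beating the one-step one above.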