When solving questions like these:
Let $f(x)$ be a real function. Approximate $f(0.1)$ using a Taylor polynomial of $f$ so that the error is less than $10^{-3}$, and find the lowest degree of Taylor polynomial for which this is guaranteed.
(If someone can rephrase the question so it's clearer, that would be great; I didn't quite nail the translation to English.)
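
For concreteness, here is the setup I have in mind (I'm assuming the expansion is about $0$ and the Lagrange form of the remainder, since the problem doesn't say; correct me if a different convention is intended):
$$f(x) = \sum_{j=0}^{n} \frac{f^{(j)}(0)}{j!}\,x^{j} + R_n(x), \qquad R_n(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!}\,x^{n+1} \ \text{ for some } \xi \text{ between } 0 \text{ and } x,$$
and the task is to find the smallest $n$ for which we can guarantee $|R_n(0.1)| < 10^{-3}$.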
We were taught that the usual process is to find some degree that works, and then, depending on how far the bound is from the allowed error, to try lower degrees. When some degree no longer works, you declare the previous one (one degree higher) to be the minimal degree.
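
As an illustration only (taking $f(x) = e^{x}$, which is not the function from the original problem), the Lagrange bound gives
$$|R_n(0.1)| \le \frac{e^{0.1}}{(n+1)!}\,(0.1)^{n+1} \le \frac{3\,(0.1)^{n+1}}{(n+1)!}.$$
For $n = 2$ this is at most $\frac{3 \cdot 10^{-3}}{6} = 5 \times 10^{-4} < 10^{-3}$, while for $n = 1$ it is only $\frac{3 \cdot 10^{-2}}{2} = 1.5 \times 10^{-2}$, which no longer certifies the required accuracy, so you stop and call degree $2$ minimal.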
What I was wondering, though, is why there is an implicit assumption that the remainder (bound) is decreasing in the degree, i.e. that if degree $k$ doesn't suffice, then degrees $1, 2, \ldots, k-2, k-1$ won't suffice either. Our professor said we can rely on this because most functions we deal with have this property. Why is that?
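
In symbols, the assumption seems to be that the usual bound
$$B_n = \frac{M_{n+1}}{(n+1)!}\,(0.1)^{n+1}, \qquad M_{n+1} = \max_{[0,\,0.1]} \bigl|f^{(n+1)}\bigr|,$$
is decreasing in $n$, which would hold whenever $\frac{B_{n+1}}{B_n} = \frac{M_{n+2}}{M_{n+1}} \cdot \frac{0.1}{n+2} < 1$, i.e. whenever the derivative maxima $M_{n+1}$ don't grow too quickly. Is that the right way to state the assumption, and why do the "usual" functions satisfy it?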