Motivation: Example. To solve a problem of maximizing the product of $n$ positive real variables subject to an equality constraint on their sum $S$ ($=100$), I used the method of Lagrange multipliers (which can be strengthened by checking the appropriate second-order conditions). Doing so I obtained a "solution" $x^*=(x_1,x_2,\dots ,x_n)\in \mathbb{R}^{n}$ with $x_1=x_2=\dots=x_n=100/n$. I then maximized the single-variable function $u(t)=(100/t)^t$: since $\frac{d}{dt}\,t\ln(100/t)=\ln(100/t)-1$ vanishes at $t^*=100/e\approx 36.8$, the maximum lies at $36\lt t^*\lt 37$. Finally I computed $u(36)\lt u(37)$. Hence the optimum occurs at $n=37$.
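For concreteness, here is a minimal numeric check of the computation above (Python; the names `S` and `u` are mine, and I assume the variables are positive reals so that the product is maximized at the symmetric point):

```python
import math

S = 100.0  # the fixed sum

def u(t):
    # Product of t equal parts of size S/t, which is what the
    # Lagrange-multiplier solution x_1 = ... = x_n = S/n gives for n = t.
    return (S / t) ** t

t_star = S / math.e                     # continuous maximizer of t*ln(S/t)
print(t_star)                           # 36.7879..., so 36 < t* < 37
print(u(36), u(37), u(36) < u(37))      # confirms u(36) < u(37)
print(max((36, 37), key=u))             # 37, the integer optimum
```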
Although this approach led to the correct answer, I was told that it does not guarantee the correct solution in the general case.
Question: To see, with greater generality, the limitations of treating optimization problems as in the example above, I would appreciate a few examples in which this method fails.
Remark: I hope this edited text improves the question.
EDIT: The case I have in mind is that of finding the maximum/minimum of an objective function $f(x_1,x_2,\dots,x_n)$ subject to at least one constraint $g(x_1,x_2,\dots,x_n)=0$. This is a kind of generalization of the problem solved by the linear-programming simplex method, two differences being that the optimization is nonlinear and that one of the variables is an integer.
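To make that recipe concrete, here is a minimal sketch of the relax-then-round workflow I used (the function `best_product` and the use of `scipy.optimize` are my own illustration, not part of the original problem): solve the continuous inner problem for each integer candidate near the relaxed optimum, then compare. Nothing in this sketch certifies that rounding the relaxed optimum is correct in general, which is exactly what my question is about.

```python
import math
import numpy as np
from scipy.optimize import minimize

S = 100.0

def best_product(n):
    """Maximize prod(x_i) s.t. sum(x_i) = S for a fixed integer n,
    by minimizing -sum(log x_i), which is equivalent for x_i > 0."""
    x0 = np.linspace(1.0, 2.0, n)
    x0 *= S / x0.sum()                    # feasible but non-optimal start
    res = minimize(lambda x: -np.sum(np.log(x)), x0,
                   method="SLSQP",
                   bounds=[(1e-9, None)] * n,
                   constraints=[{"type": "eq",
                                 "fun": lambda x: x.sum() - S}])
    return math.exp(-res.fun)             # the maximized product

t_star = S / math.e                       # relaxed (continuous) optimum for n
for n in (math.floor(t_star), math.ceil(t_star)):
    print(n, best_product(n))             # n = 37 wins, matching the hand computation
```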