4

Motivation (example). To find the maximum of a product of $n$ positive real variables subject to an equality constraint on their sum $S$ ($S=100$), I used the method of Lagrange multipliers (which can be strengthened by checking the appropriate second-order conditions). This gave a "solution" $x^*=(x_1,x_2,\dots ,x_n)\in \mathbb{R}^{n}$ with $x_1=x_2=\dots=x_n=100/n$. I then maximized the single-variable function $u(t)=(100/t)^t$: its maximum occurs at $t^*=100/e$, with $36\lt t^*\lt 37$. Finally I computed $u(36)\lt u(37)$; hence the optimum occurs at $n=37$.
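
As a quick numerical check of this computation (a Python sketch; the function name $u$ is taken from the text above):

    import math

    def u(t):
        # u(t) = (100/t)^t: the value of the product when all t factors equal 100/t
        return (100.0 / t) ** t

    print(100 / math.e)   # ~36.79, the continuous maximizer t*
    print(u(36) < u(37))  # True, so the integer optimum is n = 37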

Although I arrived at the correct solution to the problem, I was told that such an approach does not guarantee the correct solution in the general case.

Question: To see some limitations of treating optimization problems as in the example above, but with greater generality, I would appreciate a few examples in which this method fails.

Remark: I hope this edited text improves the question.


EDIT: The case I have in mind is that of finding the maximum/minimum of an objective function $f(x_1,x_2,\dots,x_n)$ subject to at least one constraint $g(x_1,x_2,\dots,x_n)=0$.

This is a kind of generalization of the linear programming (simplex) problem, two differences being that the optimization is nonlinear and that one variable is restricted to integer values.

  • 0
Use unrelated functions for each value of n. (At least, I think this will work. It is not clear to me what the last sentence of the first paragraph means.) 2010-08-29
  • 0
The way I'm interpreting what you seem to be looking for, I'd say that using Lagrange multipliers may give a *local optimizer*, but cannot be guaranteed to give a *global optimizer*. Anyway, what is your objective function? If it can be shown to be *unimodal*, then the optimizer you get is indeed the global one. 2010-08-29
  • 0
It appears that this is an integral program, not a real program. The optimum of a function of a set of *integers* is being found by obtaining it over a *real* extension and then rounding to nearby integral coordinates. The question concerns why this might be a bad approach. 2010-08-29
  • 0
@whuber: I think that you explained very well, in a few words ("The question concerns why this might be a bad approach"), what my question is. 2010-08-29
  • 0
@Qiaochu Yuan: I edited the text extensively, trying to make the question clear. 2010-08-29

2 Answers

3

The problem, if I understand it correctly, is best appreciated geometrically by drawing a picture of the level sets of the objective function and the lattice points satisfying the constraints: the optimum (over the reals) can occur at one location, but the nearby lattice points will necessarily lie on inferior contours. Extending those contours identifies a border of almost-optimal locations where the objective is greater than at the lattice point you chose. All you need for a spectacular failure to occur is for there to exist a distant lattice point somewhere within this marginal area: this would correspond to a dramatically different solution satisfying all the constraints and having a better objective value than yours. (Your solution can have arbitrarily bad objective values, too, depending on the gradient of the objective function between the Lagrange multiplier solution and your rounded approximation to it.)
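
For concreteness, here is a small contrived instance of this failure mode (a Python sketch of my own, not taken from the answer): the objective is linear, the feasible region is a thin slanted strip, every lattice point adjacent to the continuous optimum is infeasible, and the true integer optimum is far away with a much worse objective value than the continuous one.

    # Feasible region: thin slanted strip  x/7 <= y <= x/7 + 0.1,  0 <= x <= 20.
    # Objective: maximize the linear function f(x, y) = x + y.

    def feasible(x, y):
        return 0 <= x <= 20 and x / 7 <= y <= x / 7 + 0.1

    # Continuous optimum: the corner (20, 20/7 + 0.1) ~ (20, 2.957), f ~ 22.96.
    # All four lattice points adjacent to it are infeasible, so rounding
    # produces no usable solution at all:
    print(any(feasible(x, y) for x in (19, 20) for y in (2, 3)))  # False

    # The only feasible lattice points are (0,0), (7,1), (14,2); the integer
    # optimum (14, 2) lies far from the continuous one and has f = 16.
    best = max(
        ((x, y) for x in range(21) for y in range(4) if feasible(x, y)),
        key=lambda p: p[0] + p[1],
    )
    print(best)  # (14, 2)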

I have encountered exactly this difficulty with certain problems in statistics where the variables count things (and therefore must be integral) and the objective measures the precision of an estimate (and is to be minimized, but the change from maximization to minimization changes nothing essential).

  • 0
Aha... yes, the problem with rounding to satisfy integer constraints in a Procrustean manner is that the gradient vector at the real point and the gradient vector at the nearest integer point do not even have to point in the same direction! It really depends on how well-behaved the objective function is. 2010-08-29
  • 1
@J. Mangaldan, the objective function can even be linear (and very few things are nicer than that!) and the approach can still fail. 2010-08-29
3

Here is a contrived example of the failure of the method.

Say you are given the function $f$ defined for $0 \leq x \leq 50$ as follows:

$$f(x) = \begin{cases} 100 \sin \pi x, & 0 \le x \le 1, \\ x, & 1 < x \le 50. \end{cases}$$

Your task is to find the integer $n$, $0 \le n \le 50$, for which $f(n)$ is maximized. Your method gives $n=0$ or $n=1$ (the maximum of $f$ occurs at $x = 1/2$), while the correct answer is $n=50$.
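
A quick brute-force check of this example (a Python sketch, assuming the piecewise definition above):

    import math

    def f(x):
        # the contrived objective from the example above
        return 100 * math.sin(math.pi * x) if 0 <= x <= 1 else x

    # Rounding the continuous maximizer x* = 1/2 gives n = 0 or n = 1,
    # where f is (essentially) zero; brute force finds the true optimum.
    print(f(0), f(1))             # 0.0 and ~0.0
    print(max(range(51), key=f))  # 50, with f(50) = 50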

The reason your method works for the example you have is that the function $S(t) = (100/t)^t$ is strictly increasing for $0 < t < 100/e$ and strictly decreasing for $t > 100/e$; since it is unimodal, the integer maximizer must be one of the two integers adjacent to $100/e$.

  • 0
Please see my new edit after you have answered. What is the equality constraint in your example? 2010-08-29
  • 0
@Américo, use $1=1$ as the equality constraint, if you must have one! 2010-08-29
  • 0
@Americo: What Mariano said. Or if you want, $\prod_{k=0}^{50}(x-k) = 0$. 2010-08-30
  • 0
@Mariano Suárez-Alvarez: That is good! I've never thought of such a general case. 2010-08-30
  • 0
@Mariano Suárez-Alvarez: true, 1 = 1 is technically an "equality constraint," but the mathematical interest in constraints is due in part to the fact that a collection of constraints can confine the feasible region to a *compact* space and in part to the existence of boundaries and even corners in that space (it is a manifold-with-corners); those corners can play an important role in optimization algorithms. 2010-08-30
  • 1
@whuber, I am only suggesting "1=1" so as to turn Moron's example into the shape Américo wanted, in order to underline the fact that the presence or not of an "equality constraint" is orthogonal to the issue at hand. 2010-08-30
  • 0
@Mariano: I see. Thanks. 2010-08-30