[Coin-ipopt] How to guide the search in IPOPT

Andreas Waechter andreasw at watson.ibm.com
Wed Feb 8 13:23:05 EST 2006


Hello,

> I am using IPOPT-C (2.2.1e) with AMPL to solve a network optimization
> problem. Its performance is very impressive for some topologies. I am trying
> to improve its robustness on various topologies by tuning some parameters of
> IPOPT. My questions are:
>
> 1) Do you have any timeframe to support complementarity formulations in
> IPOPT version 3.0 and above?

Sorry, there are no plans to include complementarity constraints into the
C++ version at this point.  One thing you could do is to use a penalty
formulation to model the complementarities to obtain an NLP and solve that
NLP with Ipopt.  One paper that describes and analyses such a
reformulation is

"Interior Point Methods for Mathematical Programs with Complementarity
Constraints", to appear in SIAM J. Optimization, with G. Lopez-Calva and
S. Leyffer, 2004

which you can download at

http://www.ece.northwestern.edu/%7Enocedal/PDFfiles/mpec.pdf

Of course, since Ipopt is an open source project, anyone is welcome to
contribute code for handling complementarity constraints :)
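To make the penalty idea concrete, here is a toy sketch (using scipy rather than Ipopt, with a made-up example problem and penalty parameter): the complementarity condition 0 <= x _|_ y >= 0 is dropped from the constraints and the term rho * x * y is added to the objective, leaving a smooth bound-constrained NLP.

```python
# Hypothetical sketch of a penalty reformulation for the complementarity
# condition  0 <= x _|_ y >= 0  (i.e. x >= 0, y >= 0, x*y = 0).
# Example MPCC:  min (x-1)^2 + (y-1)^2  s.t.  0 <= x _|_ y >= 0.
# This is an illustration of the technique, not Ipopt's API.
from scipy.optimize import minimize

rho = 10.0  # penalty parameter; in practice increased over a sequence of solves

def penalized_objective(v):
    x, y = v
    # complementarity moved into the objective as a penalty term
    return (x - 1.0)**2 + (y - 1.0)**2 + rho * x * y

res = minimize(penalized_objective, x0=[0.8, 0.2],
               bounds=[(0.0, None), (0.0, None)])
x, y = res.x
# typically converges to a point with x*y = 0, here (x, y) near (1, 0)
```

For this toy problem the penalized minimizers sit on the boundary (x*y = 0 exactly), so the complementarity is recovered; for harder problems rho usually has to be driven up gradually, as discussed in the paper above.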

> 2) If I know the bound of the objective function, how can I specify it as a
> cut in AMPL for IPOPT? Can the objective cut help IPOPT avoid some
> unnecessary tries (i.e., speed up the search process)?

I don't see immediately how such a bound could be used in the optimization
procedure.  Ipopt essentially applies Newton's method to the first order
optimality conditions (a set of nonlinear equations), and it tries to find
a feasible point at the same time as it tries to reduce the objective
function.  During the optimization procedure, it might happen that a point
is encountered that is somewhat infeasible and has a lower value of the
objective than the optimal value (and this point might be a good point on
the way to the solution).
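To illustrate that last point with a made-up example (plain numpy, not Ipopt's code): applying Newton's method to the KKT conditions of min x1^2 + x2^2 s.t. x1 + x2 = 1, starting at the infeasible point (0, 0), whose objective value 0 is below the optimal value 0.5.

```python
import numpy as np

# KKT conditions of  min x1^2 + x2^2  s.t.  x1 + x2 = 1,
# with Lagrangian L = x1^2 + x2^2 + lam*(x1 + x2 - 1):
def kkt_residual(z):
    x1, x2, lam = z
    return np.array([2*x1 + lam,      # dL/dx1
                     2*x2 + lam,      # dL/dx2
                     x1 + x2 - 1.0])  # constraint residual

def kkt_jacobian(z):
    return np.array([[2.0, 0.0, 1.0],
                     [0.0, 2.0, 1.0],
                     [1.0, 1.0, 0.0]])

# Start at x = (0, 0): infeasible, and its objective 0 is *below*
# the optimal value 0.5 -- yet it is a fine starting point.
z = np.zeros(3)
for _ in range(10):
    r = kkt_residual(z)
    if np.linalg.norm(r) < 1e-10:
        break
    z = z - np.linalg.solve(kkt_jacobian(z), r)

x1, x2, lam = z  # converges to x = (0.5, 0.5), lam = -1
```

Because the objective is quadratic and the constraint linear, the KKT system is linear and Newton converges in one step; the point is only that the iterates (here, the starting point) may combine infeasibility with an objective value below the optimum.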

> 3) How can I specify "control" variables in AMPL for IPOPT given that I
> don't know how AMPL names these "control" variables?

Sorry, I don't understand what you mean by control variables.

> 4) I try to add some additional (redundant) constraint cuts to speed up the
> search. Sometimes, these cuts indeed speed up the process but sometimes they
> lead to a much worse solution than before. How can I differentiate the
> importance of equations in measuring the constraint violation?

For MILP, adding good cuts is obviously a good idea.  For NLP, it's not
that straightforward.

Adding more constraints to the problem can sometimes be helpful if you
want to prevent the algorithm from generating iterates in certain areas,
at least when the constraints are linear and satisfied at the starting point.
On the other hand, more constraints can also make the problem harder and
increase computational efforts per iteration.

If you obtain worse solutions than before it could be that you added a
constraint that actually decreases the feasible region (i.e., is not a
valid cut), or the algorithm is attracted to a local optimum that is worse
than what you had before.

I don't understand your question "How can I differentiate the importance
of equations in measuring the constraint violation?"  The constraint
violation is the norm of the constraint equations (which are supposed to
be zero at a feasible point).  They are all given the same weight (is that
what you mean by importance?).
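In other words, the only way to give one equation more "importance" in the violation measure is to rescale it in the model.  A hand-written sketch of the idea (illustrative only, not Ipopt's internal code):

```python
import numpy as np

# c holds the residuals of the equality constraints c(x) = 0; the
# constraint violation is simply the (unweighted) norm of this vector.
def constraint_violation(c):
    return np.linalg.norm(np.asarray(c, dtype=float))

c = [0.1, 0.01]                    # two constraints, equal weight
v_plain = constraint_violation(c)

# Multiplying the first equation by 10 in the model makes its residual
# count 10 times as much in the same violation measure:
c_scaled = [10 * 0.1, 0.01]
v_scaled = constraint_violation(c_scaled)
```

Note that such rescaling also changes the scaling of the problem the algorithm sees, which can itself affect convergence.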

Regards

Andreas

