[Ipopt] complementarity constraints and changing bounds during solver run
david greer
davegreer211 at yahoo.co.uk
Tue Feb 4 03:47:17 EST 2014
Hi,
First of all, thanks a lot for providing such a powerful solver and for continuing to maintain and support it so actively.
I am trying to solve a problem which contains many complementarity constraints, i.e. constraints of the type
x >= 0
y >= 0
xy = 0
which I believe are awkward for interior-point solvers, because the solution lies on the boundary of the feasible region while an interior-point method tries to stay strictly inside it. A well-known way to handle this issue is to replace the final constraint with
xy <= t
for some small t which is then driven to 0 as the solver progresses. Using this approach I have been able to find solutions for successively smaller values of t in separate solver runs, feeding the solution from each run in as the starting point for the next. However, it is a large problem which takes quite some time to converge, so I was wondering whether it would be possible to modify the value of t during a single solver run, for example by reducing it whenever the barrier parameter is lowered.
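For reference, my current outer loop looks roughly like this (a sketch only; RelaxedNLP and its set_relaxation() method are my own names for my TNLP subclass, not part of Ipopt -- set_relaxation(t) just stores t so that get_bounds_info() reports it as the upper bound on each x*y <= t constraint):

    #include "IpIpoptApplication.hpp"
    using namespace Ipopt;

    int main()
    {
      RelaxedNLP* rnlp = new RelaxedNLP();        // my own TNLP subclass
      SmartPtr<TNLP> nlp = rnlp;                  // SmartPtr keeps it alive
      SmartPtr<IpoptApplication> app = IpoptApplicationFactory();
      app->Initialize();

      double t = 1.0e-2;
      rnlp->set_relaxation(t);
      ApplicationReturnStatus status = app->OptimizeTNLP(nlp);

      while (status == Solve_Succeeded && t > 1.0e-8) {
        t *= 0.1;                                 // tighten the relaxation
        rnlp->set_relaxation(t);
        // Start the next run from the previous primal-dual point.
        app->Options()->SetStringValue("warm_start_init_point", "yes");
        status = app->ReOptimizeTNLP(nlp);
      }
      return (int) status;
    }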
I believe there was a modification to Ipopt called IPOPT-C which used this approach for complementarity problems, but as I understand it IPOPT-C has not been maintained for some time and was only available in Fortran, which I would prefer to avoid.
Is there a way to change the value of a constraint bound during execution (my current implementation has t as a fixed upper bound), or some neat way to reformulate the problem? I did wonder whether it would be possible to make t a variable, fiddle with its derivatives so that the algorithm doesn't try to modify it, and then overwrite its value in the intermediate_callback function.
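Concretely, I had something like the following in mind (again only a sketch; t_ is a member of my own TNLP subclass, and I don't know whether Ipopt actually re-reads the bounds mid-run, which is really what I am asking):

    #include <algorithm>   // std::max

    // Watch the barrier parameter mu and shrink the relaxation with it.
    bool RelaxedNLP::intermediate_callback(
        AlgorithmMode mode, Index iter, Number obj_value,
        Number inf_pr, Number inf_du, Number mu, Number d_norm,
        Number regularization_size, Number alpha_du, Number alpha_pr,
        Index ls_trials, const IpoptData* ip_data,
        IpoptCalculatedQuantities* ip_cq)
    {
      // e.g. keep t one order of magnitude above mu, with a floor.
      t_ = std::max(1.0e-8, 10.0 * mu);
      return true;   // returning false would abort the optimization
    }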
I am also interested in improving performance. The solver seems to progress smoothly to a solution once it finds a feasible region, but it appears to take a rather large number of iterations (see https://docs.google.com/document/d/14Ta84ggR4psvTzwpbpr0s26vAx7O0wB0Vca4voAt4m0/edit?usp=sharing for example output). Might this indicate a problem with scaling? If so, what is the easiest way to examine the scaling? I thought about exporting the numerical values of the Hessian for one iteration and examining them, but is there a better way? Previous threads about performance on the mailing list often suggest running the derivative checker to verify that the derivatives are valid. I have not tried this yet, but the derivatives all come from ADOL-C, so I expect them to be correct.
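In case it matters, my understanding is that the checker is enabled through options like these before the solve (the tolerance value is just what I would try first):

    // Compare first and second derivatives against finite differences.
    app->Options()->SetStringValue("derivative_test", "second-order");
    app->Options()->SetNumericValue("derivative_test_tol", 1.0e-4);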
Another performance issue I want to look at is that the NLP function evaluation seems to take a lot of the time. I used the print_timing_statistics option to investigate further and found that almost all of the time is spent in the Lagrangian Hessian evaluation. Is it possible to get more detailed information to identify whether one particular section of that evaluation dominates the CPU time?
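Failing that, I suppose I could instrument eval_h myself, along these lines (a sketch; MyNLP and the *_seconds_ members are my own names):

    #include <chrono>

    bool MyNLP::eval_h(Index n, const Number* x, bool new_x,
                       Number obj_factor, Index m, const Number* lambda,
                       bool new_lambda, Index nele_hess,
                       Index* iRow, Index* jCol, Number* values)
    {
      using clock = std::chrono::steady_clock;

      auto t0 = clock::now();
      // ... ADOL-C sparse Hessian evaluation ...
      auto t1 = clock::now();
      // ... copying the results into Ipopt's triplet arrays ...
      auto t2 = clock::now();

      // Accumulate per-section wall time in member variables.
      adolc_seconds_ += std::chrono::duration<double>(t1 - t0).count();
      copy_seconds_  += std::chrono::duration<double>(t2 - t1).count();
      return true;
    }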
Best wishes,
David