[Ipopt] complementarity constraints and changing bounds during solver run
Tony Kelman
kelman at berkeley.edu
Tue Feb 4 18:45:36 EST 2014
David,
It’s difficult to tell why the Hessian evaluation is so time-consuming without more detailed profiling of what ADOL-C is doing. You can try setting the Ipopt option hessian_approximation to limited-memory; this avoids evaluating exact second derivatives, but since it is only a quasi-Newton approximation, Ipopt will likely require more iterations to converge.
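For what it’s worth, switching it on is a one-line option change (a minimal sketch of the standard C++ IpoptApplication usage; the same option can equally go in an ipopt.opt file):

    #include "IpIpoptApplication.hpp"

    using namespace Ipopt;

    int main()
    {
       SmartPtr<IpoptApplication> app = IpoptApplicationFactory();
       // Use an L-BFGS approximation instead of exact second derivatives;
       // Ipopt then stops calling eval_h entirely.
       app->Options()->SetStringValue("hessian_approximation", "limited-memory");
       app->Initialize();
       // ... then OptimizeTNLP with your problem as usual ...
       return 0;
    }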
Regarding complementarity problems: even if there were a way to modify constraints from iteration to iteration, I wouldn’t expect Ipopt to perform very robustly. If you’re only solving approximate problems based on a relaxation parameter, you shouldn’t need an exact tight-tolerance solution to each approximate problem in order to get a good starting point for the next outer iteration with a lower relaxation parameter. Could you terminate early, via an iteration limit, a time limit, or a looser tolerance, and still get a useful point?
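For example, in an ipopt.opt file (a sketch; the values are placeholders you would tune for your problem):

    # Truncate each inner solve rather than solving to full accuracy.
    tol              1e-4
    acceptable_tol   1e-3
    acceptable_iter  5
    max_iter         200
    max_cpu_time     600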
Your understanding that the complementarity-supporting version of Ipopt was only implemented in Fortran matches what I know about it. You might want to look into a more general-purpose solver such as SCIP http://scip.zib.de/ or use a mixed-integer formulation of the complementarity constraints with Bonmin/Couenne.
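For reference, the usual mixed-integer reformulation introduces a binary variable z and a constant M known to be a valid upper bound on x and y (a sketch; choosing a tight M matters a lot in practice):

x <= M z
y <= M (1 - z)
z in {0, 1}

so that, together with x >= 0 and y >= 0, whichever of x, y the binary variable selects is forced to zero.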
-Tony
From: david greer
Sent: Tuesday, February 04, 2014 6:47 AM
To: ipopt at list.coin-or.org
Subject: Re: [Ipopt] complementarity constraints and changing bounds during solver run
I think I've answered one of my own questions - I made a small change to improve the scaling of my problem formulation and the iteration count went down by a factor of about 10.
I'm still worried that the Lagrangian Hessian evaluation takes about 99% of the function evaluation time, and about 85% of the overall runtime. The number of non-zero elements in the Hessian is comparable to the NNZ of the constraint Jacobian, and I would naively have expected calculating each element to be a similar amount of work. Does this suggest that something very inefficient is happening in my Hessian evaluation?
On Tuesday, 4 February 2014, 8:47, david greer <davegreer211 at yahoo.co.uk> wrote:
Hi,
first of all, thanks a lot for providing such a powerful solver and continuing to maintain and support it so actively.
I am trying to solve a problem which contains many complementarity constraints, i.e. constraints of the type
x >= 0
y >= 0
xy = 0
which I believe are awkward for interior-point solvers because the solution lies on the boundary, while an IP method tries to stay strictly inside the feasible region. A well-known way to handle this issue is to replace the final constraint with
xy <= t
for some small t which is then driven to 0 as the solver progresses. Using this approach I have been able to find solutions for successively smaller values of t in separate solver runs, feeding the solution from each run in as the starting point for the next. However, it is a large problem which takes quite some time to converge, so I was wondering whether it would be possible to modify the value of t during a single solver run, for example by reducing it whenever the barrier parameter is lowered.
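For concreteness, my current outer loop looks roughly like this (a sketch; MyNLP and its set_relaxation method are stand-ins for my actual problem class, which returns t as an upper bound from get_bounds_info):

    #include "IpIpoptApplication.hpp"

    using namespace Ipopt;

    int main()
    {
       // MyNLP is a stand-in for my actual TNLP subclass; set_relaxation(t)
       // changes the upper bound t that get_bounds_info reports for xy <= t.
       SmartPtr<MyNLP> nlp = new MyNLP();
       SmartPtr<IpoptApplication> app = IpoptApplicationFactory();

       // Reuse each solution (including multipliers) to start the next solve;
       // get_starting_point must hand back the point saved in finalize_solution.
       app->Options()->SetStringValue("warm_start_init_point", "yes");
       app->Initialize();

       ApplicationReturnStatus status = app->OptimizeTNLP(GetRawPtr(nlp));
       for (double t = 1e-2; t >= 1e-8 && status == Solve_Succeeded; t *= 0.1)
       {
          nlp->set_relaxation(t);  // tighten the complementarity relaxation
          status = app->ReOptimizeTNLP(GetRawPtr(nlp));
       }
       return 0;
    }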
I believe there was a modification of Ipopt called IPOPT-C which used this approach for complementarity problems, but as I understand it IPOPT-C has not been maintained for some time and was only available in Fortran, which I would prefer to avoid.
Is there a way to change the value of a constraint bound during execution (my current implementation has t as a fixed upper bound), or some neat way to reformulate the problem? I did wonder whether it would be possible to make t a variable, fiddle with its derivatives so that the algorithm doesn't try to modify it, and then overwrite its value from the intermediate_callback function.
I am also interested in improving performance. The solver seems to progress pretty smoothly to a solution once it finds a feasible region, but it appears to take a rather large number of iterations (see https://docs.google.com/document/d/14Ta84ggR4psvTzwpbpr0s26vAx7O0wB0Vca4voAt4m0/edit?usp=sharing for example output). Might this indicate a problem with scaling? If so, what is the easiest way to examine the scaling? I thought about exporting the numerical values of the Hessian for one iteration and inspecting them, but is there a better way? Previous performance threads on this mailing list often suggest running the derivative checker to verify that the derivatives are valid. I have not tried this yet, but the derivatives all come from ADOL-C, so I think they should be OK.
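My understanding is that enabling the checker is a single option, e.g. in an ipopt.opt file (a sketch):

    # Check first and second derivatives against finite differences
    # at the starting point, before the optimization begins.
    derivative_test      second-order
    derivative_test_tol  1e-4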
Another performance issue I want to look at is that the NLP function evaluation seems to be taking a lot of the time. I used the print_timing_statistics option to investigate and found that almost all of that time is spent in the Lagrangian Hessian evaluation. Is it possible to get more detailed information, to identify whether one particular section of that evaluation dominates the CPU time?
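One thing I may try is bracketing sections of my eval_h with a simple scope timer (a sketch using only the C++ standard library; the section labels in the usage comment are just guesses at where ADOL-C might be spending its time):

    #include <chrono>
    #include <cstdio>

    // Minimal scope timer: prints the elapsed wall time for a labelled block.
    struct ScopeTimer
    {
       const char* label;
       std::chrono::steady_clock::time_point start;
       explicit ScopeTimer(const char* l)
          : label(l), start(std::chrono::steady_clock::now()) {}
       ~ScopeTimer()
       {
          double s = std::chrono::duration<double>(
                        std::chrono::steady_clock::now() - start).count();
          std::printf("%s: %.3f s\n", label, s);
       }
    };

    // Usage inside eval_h, e.g.:
    //   { ScopeTimer t("retaping");      /* retape if the point changed */ }
    //   { ScopeTimer t("hessian driver"); /* ADOL-C sparse Hessian call */ }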
Best wishes,
David