[Ipopt] Using IPOPT to solve NLP subproblems generated by an outer-approximation algorithm

Stefan Vigerske stefan at math.hu-berlin.de
Fri Feb 17 12:47:10 EST 2012


> I'll definitely check what Pierre Bonami and collaborators have implemented in Bonmin. However, preliminary tests with Bonmin (default options/settings) were not encouraging - Bonmin could not find an integer feasible point while DICOPT (and SBB) could.

Bonmin implements several algorithms. The default NLP-based 
branch-and-bound is best for nonconvex models, but if you have a convex 
MINLP, you may want to try setting the bonmin.algorithm option to 
different values (see the manual). E.g., B-OA is an algorithm similar 
to DICOPT, B-ECP is an algorithm that needs only a few NLP solves, and 
B-Hyb can also be an interesting option.
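With GAMS, that option would go into a Bonmin options file (typically 
bonmin.opt, enabled by setting model.optfile = 1); a minimal sketch, 
assuming the usual GAMS solver-option-file mechanism:

   bonmin.algorithm B-OA

B-BB (NLP-based branch-and-bound) is the default; B-ECP and B-Hyb are 
selected the same way.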

> As far as supplying exact first- and second-order derivatives, GAMS manages all that and I'm not familiar with how it calculates derivatives (my guess: automatic differentiation). Initialization of primal and dual variables for an NLP subproblem is done by DICOPT after solving the corresponding MIP subproblem. As far as linear algebra packages go, the GAMS version I'm using (the latest IDE for Windows32) calls IPOPT version 3.8stable with MUMPS as linear solver. I'll test different linear solvers.

Also try upgrading to the new GAMS 23.8beta; it also includes a new 
Bonmin version.
Ipopt in GAMS <= 23.7 only includes MUMPS; with 23.8beta there is also 
an option to use MA27 and MA57, but you would need to buy an extra 
license for that. Alternatively, if you have a library with MA27 or 
MA57, you could use it with GAMS/Ipopt. It is relatively easy to 
build such a dynamic library on Linux (see the Ipopt manual).
With 23.8beta, GAMS/Ipopt also includes Metis, which may be helpful 
when using MUMPS, too.
GAMS/Ipopt ships with optimized BLAS/LAPACK, at least for Linux and 
Windows.
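As a sketch, assuming the usual ipopt.opt solver option file (enabled 
via model.optfile = 1), switching the linear solver would look like:

   linear_solver ma57

Valid values include mumps (the default in GAMS), ma27, and ma57. Note 
that when Ipopt runs as a subsolver under DICOPT, you may need to tell 
DICOPT to pass the option file on to the NLP solver (check the DICOPT 
documentation for how to do this).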

> For your information, in an attempt to get better calculation times (hopefully without sacrificing solution quality) I am currently experimenting with the inexact solution of NLP subproblems (as suggested in Li and Vicente, 2011, but in a non-convex setting).
> All that being said, I'd be very grateful for some advice on fine-tuning IPOPT options for performance since I am far from being an experienced user.

To add to Andreas' mail, GAMS already sets mu_strategy to adaptive, so 
you may want to try setting it to monotone instead.
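I.e., in ipopt.opt:

   mu_strategy monotone

The monotone (Fiacco-McCormick) strategy reduces the barrier parameter 
only after the current barrier problem has been solved sufficiently 
well, which can be cheaper per iteration than the adaptive strategy on 
some problems.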


> Thanks again,
> Etienne
> P.S.: zhangj71, thanks for your reply; unfortunately its contents were empty.
> --------------------------------------------------------------------------------
> From: Rodrigo Lopez-Negrete [mailto:r.lopez.negrete at gmail.com]
> Sent: February 16, 2012 5:39 PM
> To: Ayotte-Sauvé, Étienne
> Subject: Re: [Ipopt] Using IPOPT to solve NLP subproblems generated by an outer-approximation algorithm
> Hi Etienne,
> You might want to check out what the Bonmin people have done. They changed some of the Ipopt options to address some of those issues, and they've actually written a very nice solver that does what you're trying to do: https://projects.coin-or.org/Bonmin
> On the other hand, some things you can do to speed things up are: supply exact first and second derivatives, warm-start both primal and dual variables for each NLP, and provide optimized LAPACK/BLAS subroutines during compilation; the combination of ma57 and metis also makes things faster.
> I hope this helps.
> Best,
>   Rodrigo
> 2012/2/16 Ayotte-Sauvé, Étienne<Etienne.Ayotte-Sauve at rncan-nrcan.gc.ca>
> Hi all,
> Recently I've implemented a (non-convex) MINLP problem in GAMS and I've started carrying out numerical tests with an outer-approximation algorithm (DICOPT) using CBC for LP/MIP/RMIP subproblems.
> Preliminary tests using an SQP algorithm (SNOPT) for NLP subproblems showed that many of the binary variable choices stemming from the MIP subproblems yielded infeasible NLPs. My first reaction after inspecting the associated solver log files was to add integer cuts to my problem formulation in order to (hopefully) diminish the number of infeasible NLPs.
> After adding these cuts, SNOPT still found infeasible NLP subproblems. However, switching from SNOPT to IPOPT (with the new problem formulation) showed that all these NLP subproblems were "false infeasibles", i.e. IPOPT found a feasible point for each such NLP subproblem.
> As a consequence, I am currently using DICOPT + IPOPT + CBC. So far, the objective function values obtained at the DICOPT output are good (i.e. the relative optimality gap is smaller than 10% with respect to the relaxed MINLP) and I have not yet encountered an infeasible NLP subproblem generated by DICOPT.
> However, the calculation times I obtain with IPOPT are sometimes on the order of 60 seconds for a single NLP subproblem. As suggested in older posts by other members of this message board, I have tried using the acceptable_* options to reduce IPOPT calculation times; the performance obtained thus far is not to my satisfaction.
> I would like to know if any of you have experience in "speeding up" IPOPT and if so, then I would be very happy to read your suggestions.
> Here's an excerpt from an IPOPT log file associated with an NLP subproblem generated by DICOPT.
> iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
> 1580 1.5515722e+001 5.55e-009 6.16e-002 -10.6 4.71e-004  -1.6 1.00e+000 1.00e+000h  1
> 1581 1.5515722e+001 7.81e-010 2.30e-002 -10.6 1.77e-004  -1.1 1.00e+000 1.00e+000h  1
> 1582 1.5515722e+001 2.42e-009 3.14e+001 -11.0 5.75e-004  -1.6 1.00e+000 5.42e-001h  1
> 1583 1.5515722e+001 9.89e-010 1.09e+002 -11.0 1.98e-004  -1.2 6.43e-002 1.00e+000h  1
> 1584 1.5515722e+001 8.90e-009 1.00e-001 -11.0 5.94e-004  -1.7 1.00e+000 1.00e+000h  1
> 1585 1.5515722e+001 1.25e-009 3.74e-002 -10.7 2.23e-004  -1.2 1.00e+000 1.00e+000h  1
> 1586 1.5515722e+001 3.94e-009 3.39e+001 -11.0 7.93e-004  -1.7 1.00e+000 5.48e-001h  1
> 1587 1.5515722e+001 1.58e-009 4.14e+001 -11.0 2.50e-004  -1.3 6.31e-001 1.00e+000h  1
> 1588 1.5515722e+001 1.42e-008 1.27e-001 -11.0 7.49e-004  -1.8 1.00e+000 1.00e+000h  1
> 1589 1.5515722e+001 2.00e-009 4.73e-002 -11.0 2.80e-004  -1.3 1.00e+000 1.00e+000h  1
> iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
> 1590 1.5515722e+001 3.11e-009 6.00e+001 -11.0 1.02e-003  -1.8 1.00e+000 3.09e-001h  1
> 1591 1.5515722e+001 2.54e-009 1.26e+001 -11.0 3.15e-004  -1.4 8.57e-001 1.00e+000h  1
> 1592 1.5515722e+001 2.52e-009 1.96e+003 -11.0 2.36e-002  -1.9 1.00e+000 6.46e-003H 1
> 1593 1.5515722e+001 3.21e-009 1.34e+002 -11.0 3.54e-004  -1.4 9.31e-001 1.00e+000f   1
> 1594 1.5515722e+001 1.12e-009 2.23e-001 -11.0 3.46e-004  -1.0 1.00e+000 1.00e+000h  1
> 1595 1.5515722e+001 4.06e-009 6.72e-002 -11.0 3.97e-004  -1.5 1.00e+000 1.00e+000h  1
> 1596 1.5515722e+001 5.71e-010 2.51e-002 -11.0 1.49e-004  -1.1 1.00e+000 1.00e+000h  1
> 1597 1.5515722e+001 5.14e-009 7.55e-002 -11.0 4.45e-004  -1.5 1.00e+000 1.00e+000h  1
> 1598 1.5515721e+001 7.23e-010 2.82e-002 -11.0 1.67e-004  -1.1 1.00e+000 1.00e+000h  1
> 1599 1.5515721e+001 6.51e-009 8.49e-002 -11.0 5.00e-004  -1.6 1.00e+000 1.00e+000h  1
> iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
> 1600 1.5515721e+001 9.15e-010 3.17e-002 -11.0 1.87e-004  -1.2 1.00e+000 1.00e+000h  1
> 1601 1.5515721e+001 8.23e-009 9.55e-002 -11.0 5.61e-004  -1.6 1.00e+000 1.00e+000h  1
> I apologize in advance for not sending much information about the problem. Confidentiality issues...
> Thanks in advance for your help.
> Etienne Ayotte-Sauve
> _______________________________________________
> Ipopt mailing list
> Ipopt at list.coin-or.org
> http://list.coin-or.org/mailman/listinfo/ipopt

Stefan Vigerske
Humboldt University Berlin, Numerical Mathematics
