[Ipopt] Hessian of the Lagrangian

Sylvain Miossec sylvain.miossec at gmail.com
Mon Aug 9 22:05:00 EDT 2010


Hi Michaël,

I can answer your second question about BFGS. Only L-BFGS is provided, 
because I think IPOPT is primarily designed for very large problems. I 
developed a (perhaps not very clean) implementation of BFGS inside the 
user-provided functions: I save the gradients computed in the gradient 
callback and reuse them in the eval_h function to build the Hessian 
approximation with BFGS. I did not feel like modifying IPOPT itself. If 
you are interested, I can send you this code and maybe you can adapt it 
for IPOPT or use it as is.
I should add that I observed better and faster convergence with BFGS 
than with L-BFGS (default options) for problems with about 200 variables.
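The scheme above (tracking gradients across callbacks and applying a dense BFGS update when the Hessian is requested) can be sketched independently of the IPOPT interface. The toy quadratic, the step sequence, and the helper names below are illustrative assumptions, not code from the actual implementation:

```python
def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

def matvec(B, v):
    return [dot(row, v) for row in B]

def bfgs_update(B, s, y):
    """One dense BFGS update of the Hessian approximation B, given the
    step s = x_new - x_old and the gradient difference y = g_new - g_old.
    The update is skipped when the curvature s^T y is not positive,
    which keeps B positive definite."""
    sy = dot(s, y)
    if sy <= 1e-12:
        return B  # skip the update: no positive curvature along s
    Bs = matvec(B, s)
    sBs = dot(s, Bs)
    n = len(B)
    return [[B[i][j] + y[i] * y[j] / sy - Bs[i] * Bs[j] / sBs
             for j in range(n)] for i in range(n)]

# Toy quadratic f(x) = 0.5 x^T A x, so the exact gradient is g(x) = A x
# and the exact Hessian is A.
A = [[4.0, 1.0], [1.0, 3.0]]
B = [[1.0, 0.0], [0.0, 1.0]]  # start from the identity
x = [1.0, 1.0]
for s in ([0.5, 0.0], [0.0, 0.5]):  # two arbitrary iterate steps
    g_old = matvec(A, x)
    x = [xi + si for xi, si in zip(x, s)]
    y = [gn - go for gn, go in zip(matvec(A, x), g_old)]
    B = bfgs_update(B, s, y)
    # Secant condition: the updated B maps the latest step onto y.
    assert all(abs(bs - yi) < 1e-9 for bs, yi in zip(matvec(B, s), y))
```

Note that the update only guarantees the secant condition for the most recent step; in an interior-point setting one would also have to account for the constraint multipliers, which the sketch ignores.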

I should also add that at some point I was thinking of implementing a 
sparse-BFGS algorithm for separable problems. I think such an algorithm 
would be very useful in IPOPT, since it builds a sparse approximation 
of the Hessian (which BFGS and L-BFGS do not), which would need less 
computation to work with, and might allow better convergence. The paper
Kim, S., Kojima, M., Toint, P. L., Recognizing underlying sparsity in 
optimization
presents the method. The algorithm does not seem very difficult. 
Implementing it in IPOPT would require changing the interface of at 
least eval_grad_f, and the user would have to provide a decomposition 
of the objective (and maybe the constraints). I have no time to do it 
myself now. This algorithm does not seem to be used very often and is 
not well known, so I am just proposing the idea.
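The partitioned (sparse) quasi-Newton idea can be sketched as follows: each element function of a separable objective touches only a few variables, gets its own small dense BFGS block updated from element gradients, and the blocks are scattered into a sparse overall Hessian. The decomposition, element functions, and update schedule here are illustrative assumptions, not the paper's exact algorithm or any IPOPT API:

```python
def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

def matvec(B, v):
    return [dot(row, v) for row in B]

def bfgs_update(B, s, y):
    """One dense BFGS update of B; skipped without positive curvature."""
    sy = dot(s, y)
    if sy <= 1e-12:
        return B
    Bs = matvec(B, s)
    sBs = dot(s, Bs)
    n = len(B)
    return [[B[i][j] + y[i] * y[j] / sy - Bs[i] * Bs[j] / sBs
             for j in range(n)] for i in range(n)]

# Separable objective f(x) = f1(x0, x1) + f2(x1, x2): each element
# function owns the indices of the variables it touches, a small dense
# BFGS block, and its own gradient (w.r.t. its local variables only).
elements = [
    {"idx": [0, 1], "B": [[1.0, 0.0], [0.0, 1.0]],
     # f1 = x0^2 + x0*x1 + x1^2
     "grad": lambda v: [2 * v[0] + v[1], v[0] + 2 * v[1]]},
    {"idx": [1, 2], "B": [[1.0, 0.0], [0.0, 1.0]],
     # f2 = 2*x1^2 + x2^2
     "grad": lambda v: [4 * v[0], 2 * v[1]]},
]

def assemble(elements, n):
    """Scatter the element blocks into the full n-by-n Hessian;
    entries no element touches stay exactly zero, so the assembled
    approximation keeps the problem's sparsity pattern."""
    H = [[0.0] * n for _ in range(n)]
    for e in elements:
        for a, i in enumerate(e["idx"]):
            for b, j in enumerate(e["idx"]):
                H[i][j] += e["B"][a][b]
    return H

x = [1.0, 1.0, 1.0]
s = [0.1, -0.2, 0.3]  # one arbitrary step of the outer iteration
x_new = [xi + si for xi, si in zip(x, s)]
for e in elements:
    v_old = [x[i] for i in e["idx"]]
    v_new = [x_new[i] for i in e["idx"]]
    y = [gn - go for gn, go in zip(e["grad"](v_new), e["grad"](v_old))]
    se = [s[i] for i in e["idx"]]
    e["B"] = bfgs_update(e["B"], se, y)
H = assemble(elements, 3)
assert H[0][2] == 0.0 and H[2][0] == 0.0  # sparsity is preserved
```

Because the updates act on 2-by-2 blocks instead of the full matrix, the cost per update stays proportional to the element sizes rather than to the square of the number of variables, which is the point of the method for large separable problems.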

Best regards,

Sylvain Miossec


On 05/08/2010 21:27, Michaël Baudin wrote:
> Hi,
>
> Yann Collette and I are working on the connection between
> Scilab and Ipopt.
> I am looking for information about how Ipopt uses the
> Hessian matrix and, since I did not find what I was
> looking for, I am posting this message.
>
> (1) If the eval_h method is defined, but returns false, what does
> Ipopt use for the Hessian?
>
> In the tutorial example for Hock and Schittkowski #71,
> the eval_h function implements the Hessian and returns the
> boolean value "true".
>
> I read the section : "Introduction to IPOPT/Special Features/Quasi-Newton
> Approximation of Second Derivatives" at :
>
> http://www.coin-or.org/Ipopt/documentation/node54.html
>
> but did not find what I was looking for. The C and Fortran cases are
> clearly explained, though. But nothing is said about the case
> where the C++ method is defined but has no Hessian
> to provide. (In our Scilab connector, we always define the method,
> but switch depending on the existence of the macro defined by the
> user: this is dynamic.)
>
> I found that some C++ source codes return false, as for example :
> https://projects.coin-or.org/Ipopt/browser/trunk/Ipopt/contrib/JavaInterface/jipopt.cpp
>
> (2) Is there a BFGS algorithm provided in Ipopt?
> The documentation is clear about the L-BFGS algorithm, but
> there seems to be no full BFGS. Am I right?
>
> Best regards,
>
> Michaël Baudin
>


