[Ipopt] Questions on Exact Hessian and Limited Memory

Mahesh N mahesh.mach at gmail.com
Thu Aug 6 14:52:58 EDT 2009


Hello,

We, at the ASCEND Team, have been adding support for solving NLP problems
with IPOPT over the past few weeks. Now that the solver is fully integrated
with ASCEND, we have tested both the limited-memory approximation and the
exact calculation of the Hessian of the Lagrangian. In the process, we
observed that the limited-memory approach needed fewer iterations than the
exact-Hessian approach.
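
For reference, here is a minimal sketch of how the Hessian mode can be
selected through IPOPT's C++ interface (this is not our actual ASCEND code;
MyProblem is a hypothetical TNLP subclass standing in for the model wrapper):

// Minimal sketch, assuming a hypothetical TNLP subclass MyProblem.
#include "IpIpoptApplication.hpp"
#include "IpTNLP.hpp"

using namespace Ipopt;

int main()
{
   SmartPtr<TNLP> nlp = new MyProblem();  // hypothetical problem class
   SmartPtr<IpoptApplication> app = IpoptApplicationFactory();

   // "exact" (the default) makes IPOPT call eval_h for the Hessian of the
   // Lagrangian; "limited-memory" uses an L-BFGS quasi-Newton update
   // instead, so eval_h is never called (hence the 0 Lagrangian Hessian
   // evaluations in the second log below).
   app->Options()->SetStringValue("hessian_approximation", "limited-memory");

   if (app->Initialize() != Solve_Succeeded)
      return 1;
   return (int) app->OptimizeTNLP(nlp);
}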

I've posted the median of a series of runs with both the limited-memory and
exact Hessian modes below:

EXACT HESSIAN
------------------------

Number of objective function evaluations             = 46
Number of objective gradient evaluations             = 29
Number of equality constraint evaluations            = 0
Number of inequality constraint evaluations          = 46
Number of equality constraint Jacobian evaluations   = 0
Number of inequality constraint Jacobian evaluations = 29
Number of Lagrangian Hessian evaluations             = 28
Total CPU secs in IPOPT (w/o function evaluations)   =      0.040
Total CPU secs in NLP function evaluations           =      0.008

LIMITED-MEMORY HESSIAN
--------------------------------

Number of objective function evaluations             = 40
Number of objective gradient evaluations             = 23
Number of equality constraint evaluations            = 0
Number of inequality constraint evaluations          = 40
Number of equality constraint Jacobian evaluations   = 0
Number of inequality constraint Jacobian evaluations = 23
Number of Lagrangian Hessian evaluations             = 0
Total CPU secs in IPOPT (w/o function evaluations)   =      0.064
Total CPU secs in NLP function evaluations           =      0.000


Observe that the number of objective function, gradient, and constraint
evaluations with the exact Hessian is higher than with the limited-memory
method.

It seems likely that IPOPT is using information from the limited-memory
Hessian approximation to reduce the number of gradient and other evaluations,
although one would usually expect the lower accuracy of an approximate
Hessian to slow down convergence, i.e. to require more iterations. Is
something like that happening here?

Thanks,

Mahesh

-- 
Developer
ASCEND Project
http://ascendwiki.cheme.cmu.edu