[Ipopt] Questions on Exact Hessian and Limited Memory

Andreas Waechter andreasw at watson.ibm.com
Fri Aug 7 12:22:21 EDT 2009


Hi Mahesh,

Interesting observations...

First let's make sure we are on the same page regarding one thing:  The 
limited-memory approximation is not a numerical Hessian in the sense that 
one might perform finite differences to estimate the second derivatives. 
That is not what is implemented in Ipopt (although a user could write 
such code and pass the result to Ipopt as the "exact" Hessian).  When you 
choose hessian_approximation=limited-memory in Ipopt, it uses a 
limited-memory version of the BFGS quasi-Newton update - here the goal is 
not to reproduce the real Hessian accurately, but only to build something 
that is good enough to give good steps.  But I assume that you meant the 
hessian_approximation=limited-memory runs when you talk about "NUMERICAL 
HESSIAN"?  Also, since this limited-memory option does not do finite 
differences, you would not see an increase in the number of gradient 
evaluations per Ipopt iteration.
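
In case it is useful, switching between the two modes is just an option 
setting.  Here is a rough sketch using the C++ interface, where "MyNlp" 
is only a placeholder for whatever TNLP implementation you pass in (the 
same options can also be put into an ipopt.opt file in the working 
directory):

   #include "IpIpoptApplication.hpp"
   #include "IpTNLP.hpp"
   using namespace Ipopt;

   // "MyNlp" is a placeholder for your own TNLP implementation
   SmartPtr<TNLP> mynlp = new MyNlp();
   SmartPtr<IpoptApplication> app = IpoptApplicationFactory();

   // Use the L-BFGS quasi-Newton update instead of calling eval_h();
   // the default value "exact" asks eval_h() for the Lagrangian Hessian.
   app->Options()->SetStringValue("hessian_approximation",
                                  "limited-memory");

   app->Initialize();
   ApplicationReturnStatus status = app->OptimizeTNLP(mynlp);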

When I use Ipopt with the limited-memory BFGS approximation (L-BFGS), I 
usually observe that the algorithm takes more iterations than when I 
provide the exact second derivatives (this of course assumes that the 
second derivatives are computed correctly - I assume that you verified 
your implementation with the derivative checker...).  For one, the 
algorithm usually takes longer to get close to a local solution (and the 
larger the problem, the more iterations that might require), but secondly 
Ipopt has trouble deciding when it is done, even if it is already close 
to the solution.  The reason is that the multipliers (which appear in the 
Hessian of the Lagrangian) do not converge well, and therefore the 
termination test takes a long time to be satisfied, even though the 
primal variables are already very close to the solution.  I haven't 
really found a good remedy for this yet; what I do is use the 
"acceptable" tolerances in Ipopt to allow an "early" stop, and finding 
good values for those is probably problem dependent.
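
Just so we are talking about the same options, this is the kind of 
setting I mean for the derivative checker and the "acceptable" 
tolerances (the numbers are only examples and good values will depend on 
your problem; "app" is the IpoptApplication object from the snippet 
above):

   // Compare your eval_h() against finite differences
   // (only for debugging - this is expensive):
   app->Options()->SetStringValue("derivative_test", "second-order");

   // Allow an "early" stop: if an iterate satisfies these looser
   // tolerances for acceptable_iter consecutive iterations, Ipopt
   // declares success even if the tight "tol" has not been reached.
   app->Options()->SetNumericValue("acceptable_tol", 1e-5);
   app->Options()->SetIntegerValue("acceptable_iter", 5);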

But your observation is contrary to that.  One possible explanation is 
that this issue with slowly converging multipliers does not occur when 
the constraints are linear...  In that case, Ipopt seems to have fewer 
problems figuring out when it should terminate, even with the L-BFGS 
approximation.  Are your constraints linear?  Also, the difference in the 
number of iterations in what you posted is something like 40 vs. 46, so 
one option is not much faster than the other anyway.
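
If your constraints are in fact linear, you can also tell Ipopt so 
explicitly; it then asks for the constraint Jacobian only once, which 
saves evaluations (whether it changes the iteration count in your case I 
cannot say).  Roughly, again with the app object from above:

   // Declare the Jacobians of the equality (c) and inequality (d)
   // constraints as constant, i.e. the constraints are linear:
   app->Options()->SetStringValue("jac_c_constant", "yes");
   app->Options()->SetStringValue("jac_d_constant", "yes");

   // If the Hessian of the Lagrangian is constant as well (e.g. for a
   // QP), this tells Ipopt to evaluate it only once:
   // app->Options()->SetStringValue("hessian_constant", "yes");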

To summarize:  I agree that the limited-memory Hessian approximation 
usually requires more Ipopt iterations (for two reasons, of which the one 
related to terminating too late can be handled with Ipopt's termination 
options), but I have also seen cases where this was not so, particularly 
when the problem had linear constraints, or when the Hessian is "simple" 
(does not change much, is diagonal, etc.).

Hope this helps,

Andreas


On Fri, 7 Aug 2009, Mahesh N wrote:

> Hello,
>
> We, at the ASCEND Team, have been adding support for using IPOPT for NLP
> problems over the past few weeks. Now that the solver is fully integrated
> with ASCEND, we tested both the limited-memory and the exact calculation of
> the Hessian of the Lagrangian. In the process, we observed that the
> limited-memory approach needed fewer iterations than the exact Hessian
> approach.
>
> I've posted the median of a series of runs with both limited-memory and
> exact hessians below:
>
> EXACT HESSIAN
> ------------------------
>
> Number of objective function evaluations             = 46
> Number of objective gradient evaluations             = 29
> Number of equality constraint evaluations            = 0
> Number of inequality constraint evaluations          = 46
> Number of equality constraint Jacobian evaluations   = 0
> Number of inequality constraint Jacobian evaluations = 29
> Number of Lagrangian Hessian evaluations             = 28
> Total CPU secs in IPOPT (w/o function evaluations)   =      0.040
> Total CPU secs in NLP function evaluations           =      0.008
>
> NUMERICAL HESSIAN
> --------------------------------
>
> Number of objective function evaluations             = 40
> Number of objective gradient evaluations             = 23
> Number of equality constraint evaluations            = 0
> Number of inequality constraint evaluations          = 40
> Number of equality constraint Jacobian evaluations   = 0
> Number of inequality constraint Jacobian evaluations = 23
> Number of Lagrangian Hessian evaluations             = 0
> Total CPU secs in IPOPT (w/o function evaluations)   =      0.064
> Total CPU secs in NLP function evaluations           =      0.000
>
>
> Observe that the number of objective function, gradient, etc. evaluations
> with the exact Hessian is higher than with the numerical Hessian method.
>
> It seems likely that IPOPT could be using the data from the numerical
> Hessian evaluation to reduce the number of gradient evaluations, etc.,
> although usually one would expect the lower accuracy of numerical
> derivatives to slow down the convergence, i.e. to require more iterations.
> Is something like that happening here?
>
> Thanks,
>
> Mahesh
>
> -- 
> Developer
> ASCEND Project
> http://ascendwiki.cheme.cmu.edu
>
