[Coin-ipopt] Convergence and BFGS Hessian computation

Carl Damon Laird claird at andrew.cmu.edu
Thu Oct 5 10:13:25 EDT 2006


Hi Sylvain Miossec, glad to hear IPOPT has been working for you. Sorry for 
the long response, but there are a few things to discuss here.

As for the convergence criteria, you are using:
> tol : 1e-3
> acceptable_tol : 3e-3
> expect_infeasible_problem : yes
> dual_inf_tol : 1e-6
> compl_inf_tol : 1e-6
> acceptable_dual_inf_tol : 3e-3
> acceptable_compl_inf_tol : 3e-3

You have correctly seen that IPOPT has two levels of convergence criteria, 
standard and acceptable. Even if the standard tolerance criteria are never 
satisfied, IPOPT will still terminate with an "acceptable" solution if the 
acceptable criteria are satisfied for a certain number of consecutive 
iterations.

Is your problem converging with an acceptable solution?
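
If you want to experiment with when an acceptable termination is
allowed, these criteria are all set through the options. Here is a
minimal sketch using the C++ IpoptApplication options interface; the
option controlling how many consecutive acceptable iterations are
required (I believe it is called acceptable_iter) may vary between
releases, so please check the documentation for your version:

    #include "IpIpoptApplication.hpp"

    using namespace Ipopt;

    int main()
    {
      SmartPtr<IpoptApplication> app = new IpoptApplication();

      // Standard (relative) tolerance and the looser "acceptable" level.
      app->Options()->SetNumericValue("tol", 1e-3);
      app->Options()->SetNumericValue("acceptable_tol", 3e-3);

      // Number of consecutive iterations at the acceptable level before
      // IPOPT stops early (option name assumed to be "acceptable_iter").
      app->Options()->SetIntegerValue("acceptable_iter", 15);

      app->Initialize();
      // ... register and optimize your TNLP as usual ...
      return 0;
    }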

Also, the parameters tol and acceptable_tol are relative tolerances (they 
are scaled as described in the paper), while dual_inf_tol, compl_inf_tol, 
and constr_viol_tol (as well as their acceptable versions) are all absolute 
tolerances. IPOPT only declares convergence when both the relative and the 
absolute criteria are satisfied. I notice that you have actually tightened 
the absolute tolerances (to 1e-6 from 1e-4). If your problem is not ideally 
scaled, the absolute tolerances will be difficult to achieve. In some 
cases, when problems have had poor scaling, we have greatly loosened the 
absolute tolerances to enable convergence. Of course, fixing the problem 
scaling would be better.

You might try increasing these absolute tolerances. Also, is there any 
reason why you changed the absolute tolerances for dual infeasibility 
and complementarity, but not for constraint violation (constr_viol_tol)? 
I would try increasing all three of these to see if that helps.
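
Continuing the IpoptApplication sketch above, loosening all three
absolute tolerances would look like this (the values are only
placeholders to illustrate the idea, not recommendations):

      // Loosen the absolute termination tolerances together
      // (illustrative values only; tune them for your problem).
      app->Options()->SetNumericValue("dual_inf_tol",    1e-1);
      app->Options()->SetNumericValue("compl_inf_tol",   1e-2);
      app->Options()->SetNumericValue("constr_viol_tol", 1e-4);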

One final comment. If the problem is poorly scaled or nearly singular, the 
multipliers may be converging to very large values. This will, in turn, 
cause the dual infeasibility to be large. Fixing the scaling of the 
problem can help greatly.
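
If rescaling the problem inside your own code is inconvenient, some
versions of the C++ interface also let the TNLP provide scaling factors
directly. The sketch below is only an illustration: the class name
MyRobotNLP is made up, the method signature is taken from the current
TNLP documentation and may differ in your release, and the option
nlp_scaling_method would have to be set to "user-scaling" for IPOPT to
use it:

    // Hypothetical example of user-provided scaling from a TNLP subclass.
    bool MyRobotNLP::get_scaling_parameters(Number& obj_scaling,
                                            bool& use_x_scaling, Index n,
                                            Number* x_scaling,
                                            bool& use_g_scaling, Index m,
                                            Number* g_scaling)
    {
      obj_scaling = 1e-2;           // e.g. scale a large objective down
      use_x_scaling = true;
      for (Index i = 0; i < n; i++)
        x_scaling[i] = 1.0;         // per-variable scaling factors
      use_g_scaling = true;
      for (Index j = 0; j < m; j++)
        g_scaling[j] = 1.0;         // per-constraint scaling factors
      return true;
    }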

As far as L-BFGS versus full BFGS goes, I don't have too much to say, 
except that the C++ version of IPOPT does not have a reduced-space BFGS 
option like the Fortran version. Therefore, I think L-BFGS is your only 
choice with C++ IPOPT right now.
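
For completeness, selecting the limited-memory approximation and its
history size from the C++ interface looks roughly like this (again
continuing the sketch above; current versions expect the value
"limited-memory" for hessian_approximation, but older releases may
spell it differently):

      // Use the limited-memory (L-BFGS) Hessian approximation.
      app->Options()->SetStringValue("hessian_approximation", "limited-memory");
      app->Options()->SetIntegerValue("limited_memory_max_history", 6);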

Hope this helps and let us know if you have further difficulties,

Cheers,

Carl.


On Thu, 5 Oct 2006, Sylvain Miossec wrote:

> Hi all,
>
> I have been using IPOPT for a year now and have found it quite fast for
> the type of problems I solve (robot motion optimization). Thanks a lot
> for the good work.
>
> But I have one problem, which is the proper convergence of the
> optimization. Currently I stop the optimization by reaching the maximum
> number of iterations, and then check that the objective is not changing
> much over the last iterations and that the constraints are well
> satisfied. The motions I obtain for the robot also look optimal.
>
> Would anyone have an idea why I don't obtain proper convergence?
>
> I provide two files (in a zip archive) from some optimization runs with
> two different history sizes for the BFGS approximation.
>
> Since the Hessian computation is very complicated to program in my case,
> I use the limited-memory BFGS approximation. I tried a history of 6 for
> the BFGS computation and of 131 (the size of my problem). The BFGS
> computation is slower with a history of 131, but the convergence is
> better. I have another question then: for my problem size, would it be
> better to use a full BFGS approximation (not the limited-memory one)?
>
> The parameters I used for IPOPT are the following:
> tol : 1e-3
> acceptable_tol : 3e-3
> max_iter : 300/500
> expect_infeasible_problem : yes
> hessian_approximation : yes
> limited_memory_max_history : 131/6
> dual_inf_tol : 1e-6
> compl_inf_tol : 1e-6
> acceptable_dual_inf_tol : 3e-3
> acceptable_compl_inf_tol : 3e-3
> check_derivatives_for_naninf : yes
> derivative_test : first-order
> alpha_for_y : full
>
> During the optimization, the dual infeasibility is not really decreasing
> (and not reaching the stopping tolerance), while the objective seems to
> converge. Is this normal? Could it be due to a scaling problem (I scaled
> the problem myself by trial and error)? Could it be due to the Hessian
> approximation? Perhaps that is another reason to use the full BFGS
> approximation instead of the limited-memory one. Or perhaps I should try
> the exact Hessian computation? (But I would like to be sure it would
> solve my problem, since it will take a lot of time to implement.)
>
> Any help on this point would be greatly appreciated.
>
> Best regards
>
> Sylvain Miossec
>
>


