[Ipopt] Very slow convergence

Sebastian Walter walter at mathematik.hu-berlin.de
Fri May 15 04:47:37 EDT 2009



Hello,

I also had problems with slow convergence when I tried to optimize an
objective function with a DAE constraint.
As far as I know, interior-point (IP) methods generally have trouble with inexact Hessians, which results in slow convergence near the solution.
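As a toy illustration of that general principle (one scalar variable and plain Newton steps, not an interior-point method): compare Newton's method, which uses the exact second derivative, with a secant update that only approximates it — the 1D analogue of a quasi-Newton Hessian — when driving the gradient of g(x) = exp(x) - 2x to zero:

```python
import math

# Minimize g(x) = exp(x) - 2x by solving g'(x) = exp(x) - 2 = 0
# (the minimizer is x* = ln 2).

def newton(x, tol=1e-12, max_iter=100):
    """Newton's method with the exact second derivative g''(x) = exp(x)."""
    for k in range(1, max_iter + 1):
        g1 = math.exp(x) - 2.0            # exact gradient
        if abs(g1) < tol:
            return x, k
        x -= g1 / math.exp(x)             # exact curvature
    return x, max_iter

def secant(x_prev, x, tol=1e-12, max_iter=100):
    """Same iteration, but the curvature is a finite-difference estimate."""
    for k in range(1, max_iter + 1):
        g1 = math.exp(x) - 2.0
        if abs(g1) < tol:
            return x, k
        g1_prev = math.exp(x_prev) - 2.0
        approx_curv = (g1 - g1_prev) / (x - x_prev)   # inexact curvature
        x_prev, x = x, x - g1 / approx_curv
    return x, max_iter

x_newton, it_newton = newton(0.0)
x_secant, it_secant = secant(0.0, 1.0)
```

Both converge to ln 2, but the exact-curvature iteration is quadratically convergent while the secant version is only superlinear, which is the tail-end slowdown you see magnified in a large IP solve.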

What do you mean by "the hessian even with an automatic equations and code generator" being too costly?
Costly as in time to implement, or as in runtime on the computer?
Usually, using automatic differentiation doesn't take much implementation time, and the runtime performance is also good: certainly faster than using BFGS.
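To sketch why exact second derivatives are cheap to obtain with AD: forward-mode propagation through hyper-dual numbers yields f'' exactly, with no truncation error. This is a hand-rolled toy (only +, *, and scalar mixing), not a substitute for a mature AD tool such as ADOL-C or CppAD:

```python
class HyperDual:
    """a + b*e1 + c*e2 + d*e1*e2 with e1**2 = e2**2 = 0.
    Evaluating f(HyperDual(x, 1, 1, 0)) gives f(x), f'(x) (twice),
    and f''(x) in the four components."""

    def __init__(self, f, f1=0.0, f2=0.0, f12=0.0):
        self.f, self.f1, self.f2, self.f12 = f, f1, f2, f12

    def __add__(self, o):
        o = o if isinstance(o, HyperDual) else HyperDual(o)
        return HyperDual(self.f + o.f, self.f1 + o.f1,
                         self.f2 + o.f2, self.f12 + o.f12)
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, HyperDual) else HyperDual(o)
        return HyperDual(self.f * o.f,
                         self.f1 * o.f + self.f * o.f1,
                         self.f2 * o.f + self.f * o.f2,
                         self.f12 * o.f + self.f1 * o.f2
                         + self.f2 * o.f1 + self.f * o.f12)
    __rmul__ = __mul__

def second_derivative(func, x):
    """Exact f''(x) from one forward pass."""
    return func(HyperDual(x, 1.0, 1.0, 0.0)).f12

# f(x) = x^3 + 2x, so f''(x) = 6x, e.g. f''(2) = 12:
print(second_derivative(lambda x: x * x * x + 2 * x, 2.0))
```

The cost is a small constant factor over evaluating f itself, which is why an AD-generated exact Hessian usually beats a limited-memory BFGS approximation overall despite the extra work per iteration.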

If you can't provide exact second derivatives: have you tried using an SQP method? In my experience, SQP methods often work quite well with inexact Hessians.
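For a readily available SQP implementation to try (my example problem here is made up, not from this thread), SciPy's SLSQP maintains its own quasi-Newton approximation internally, so only first derivatives are needed:

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem: minimize x0^2 + x1^2 subject to x0 + x1 = 1.
# The solution is (0.5, 0.5); only gradients are supplied, no Hessian.
res = minimize(
    lambda x: x[0] ** 2 + x[1] ** 2,
    x0=[2.0, 0.0],
    jac=lambda x: 2 * x,
    method='SLSQP',
    constraints=[{'type': 'eq',
                  'fun': lambda x: x[0] + x[1] - 1.0,
                  'jac': lambda x: np.array([1.0, 1.0])}],
)
print(res.x)
```

For a problem with ~400 variables and 400 dense nonlinear constraints a dedicated SQP code (SNOPT, filterSQP, ...) would scale better than SLSQP, but the interface is representative.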


Sebastian





Nicolas Gachadoit wrote:
> Hi Andreas,
> 
> I use the derivative checker for the gradient, and it is OK. I didn't
> implement the Hessian; I use the limited_memory option (I have about
> 400 variables and 400 very complex and nonlinear constraints, so
> implementing the Hessian even with an automatic equations and code
> generator is too costly).
> After some tests, it appears that I get an improvement if I change the
> initial point. Up to now, the initial point had a lot of values close
> to zero. It seems that the scaling computed from this initial point
> has some influence on the convergence. For example, the variables
> corresponding to the commands (control inputs) were initialized at 0,
> but in fact the result of the optimization should be around +/-200000.
> Maybe I'm wrong, but I have the impression that:
> small (and unrealistic) initial values -> high scaling factors -> slow
> convergence
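[Editor's note on the hypothesis above: my reading of the Ipopt documentation is that the default "gradient-based" scaling chooses each factor as min(1, nlp_scaling_max_gradient / ||gradient at the starting point||_inf), with nlp_scaling_max_gradient = 100 by default. A toy sketch (not Ipopt's actual code) of how the starting point changes those factors:]

```python
# Sketch of Ipopt's default "gradient-based" scaling rule, as I read the
# docs: scale = min(1, max_gradient / ||grad at x0||_inf), so functions
# are only ever scaled *down*, never up.
def gradient_based_scale(grad, max_gradient=100.0):
    ginf = max(abs(g) for g in grad)
    return min(1.0, max_gradient / ginf)

# A constraint gradient for a control that really lives near +/-200000:
print(gradient_based_scale([4.0e5]))   # strong down-scaling: 100/4e5
# The same constraint evaluated at an all-zero starting point, where the
# gradient entries are tiny, gets no scaling at all (capped at 1):
print(gradient_based_scale([1.0e-3]))
```

So, under that reading, a near-zero starting point does not produce large factors but rather factors frozen at 1, which can leave a badly scaled problem unscaled; the effect on convergence is similar to what is described above.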
> 
> Thanks,
> 
> Nicolas
> 
Andreas Waechter wrote:
>> Hi Nicolas,
>>
>> Even 2000 iterations sounds like a lot.  If your problem has highly
>> nonlinear constraints, you might see a lot of jumps into the
>> restoration phase, and you might want to experiment with different
>> formulations of the constraints.  (In general, a modeling language
>> like AMPL or GAMS is very handy for this, before you sit down and
>> write MATLAB code...)
>>
>> But maybe the issue is just that your Hessian is not implemented
>> correctly.  Did you verify it with the derivative checker?
>>
>> Hope this helps,
>>
>> Andreas
>>
>> On Wed, 13 May 2009, Nicolas Gachadoit wrote:
>>
>>> Hello,
>>>
>>> I use Ipopt for optimal control (minimum-time control), and in one
>>> of my applications the convergence is very slow.
>>> It is a robotic application (4 DOF); the equations (constraints and
>>> constraint gradients, automatically generated by Maple) are very
>>> big, so that could be the reason, but on the other hand, in another
>>> application (5 DOF) the convergence is fast (< 2000 iterations, less
>>> than 2 minutes).
>>> In this application (4 DOF), I tried up to 20000 iterations and it
>>> still had not converged. Each time I increase max_iter the result
>>> gets better (the minimum time decreases and the controls get closer
>>> to saturation), so one possibility would be to set a very high
>>> max_iter and wait for a few hours.
>>> But I would like to know whether another option could make the
>>> convergence faster. Maybe it is a problem of scaling, or something
>>> else?
>>>
>>> Thanks in advance,
>>> Best regards,
>>>
>>> Nicolas Gachadoit
>>>
>>
> 
> 



