[Ipopt] Very slow convergence
Nicolas Gachadoit
n.gachadoit at infonie.fr
Fri May 15 06:13:13 EDT 2009
Hello Sebastian,
Thanks for the tips.
I meant too costly to implement. When I compute the Hessian in Maple and
then generate the code automatically, it takes too much time. Of course,
it depends on what "too much time" means, but my aim is to develop a
fully integrated solution that requires no programming by the user. On
the prototype I'm working on, if you start from a MapleSim model (you
graphically place components and link them with the mouse in a design
pane), you can generate the equations automatically (10 seconds); then,
after defining initial conditions, bounds, and so on in Maple, you
launch the function I'm developing and get the result a few minutes
later (17 minutes on a 5-DOF robot, for example: 15 minutes to generate
the code of the Jacobian and 2 minutes in Ipopt). That is acceptable for
the user, considering that the only "manual" step is building the
graphical model of the system. If I make Maple generate the code of the
Hessian, those 15 minutes become a few hours, which I consider too much,
even if hours are less than the days or weeks the user would need to
write the equations or code by hand.
I haven't considered online automatic differentiation yet, but it may be
an option, especially if convergence is faster than with the
limited-memory quasi-Newton approximation of the second derivatives.
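For what it's worth, here is a minimal sketch of what online AD of a
Hessian could look like with CppAD (the COIN-OR AD package); the taped
function is only a placeholder, not my real constraints:

    #include <cppad/cppad.hpp>
    #include <vector>

    // Minimal sketch: tape a placeholder scalar function, then ask
    // CppAD for its dense Hessian at a point. A real problem would
    // tape the Lagrangian and exploit sparsity instead.
    int main()
    {
        using CppAD::AD;
        size_t n = 2;

        std::vector< AD<double> > X(n);
        X[0] = 1.0;  X[1] = 2.0;
        CppAD::Independent(X);           // start recording the tape

        std::vector< AD<double> > Y(1);
        Y[0] = X[0] * X[0] * X[1] + CppAD::sin(X[1]);  // placeholder
        CppAD::ADFun<double> f(X, Y);    // stop recording

        std::vector<double> x(n), w(1);
        x[0] = 1.0;  x[1] = 2.0;
        w[0] = 1.0;                      // weight on the single output
        std::vector<double> hess = f.Hessian(x, w);  // n*n, row-major
        return 0;
    }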
At the very beginning I tried an SQP method, without success. I quickly
came to prefer Ipopt for many reasons, and considering the time I have
spent developing the interface with Maple, I will continue to use it ;-)
Thanks,
Nicolas
Sebastian Walter wrote:
>
> Hello,
>
> I also had problems with slow convergence when I tried to optimize an
> objective function with a DAE constraint.
> As far as I know, IP methods generally have trouble with inexact
> Hessians, which results in slow convergence near the solution.
>
> What do you mean by saying that implementing "the Hessian even with an
> automatic equation and code generator" is too costly?
> Costly as in time to implement, or as in runtime on the computer?
> Usually, setting up Automatic Differentiation doesn't take much time,
> and the runtime performance is also good: certainly faster than using
> BFGS.
>
> If you can't provide exact second derivatives: have you tried an SQP
> method? In my experience, SQP methods often work quite well with
> inexact Hessians.
>
>
> Sebastian
>
>
>
>
>
> Nicolas Gachadoit wrote:
>
>> Hi Andreas,
>>
>> I use the derivative checker for the gradient; it is OK. I didn't
>> implement the Hessian; I use the limited_memory option (I have about
>> 400 variables and 400 very complex, nonlinear constraints, and
>> implementing the Hessian, even with an automatic equation and code
>> generator, is too costly).
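>>
>> (A minimal sketch of how I enable that option, assuming the standard
>> C++ interface, where mynlp stands for my own TNLP implementation:)
>>
>>     #include "IpIpoptApplication.hpp"
>>     using namespace Ipopt;
>>
>>     // mynlp: placeholder for the user's TNLP subclass
>>     SmartPtr<IpoptApplication> app = IpoptApplicationFactory();
>>     app->Options()->SetStringValue("hessian_approximation",
>>                                    "limited-memory");
>>     app->Initialize();
>>     ApplicationReturnStatus status = app->OptimizeTNLP(mynlp);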
>> After some tests, it appears that I get an improvement if I change
>> the initial point. Up to now, the initial point had a lot of values
>> close to zero. It seems that the scaling computed from this initial
>> point has some influence on the convergence. For example, the
>> variables corresponding to the control commands were initialized at
>> 0.0, but the result of the optimization should be around +/-200000.
>> Maybe I'm wrong, but I have the impression that:
>> small (and unrealistic) initial values -> large scaling factors ->
>> slow convergence
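>>
>> (If I understand the documentation correctly, Ipopt's default
>> gradient-based scaling is computed from the gradients at the starting
>> point, which would be consistent with this impression. A sketch of
>> the relevant options, using the same application object as above:)
>>
>>     // Scaling factors are derived from gradients at the initial
>>     // point by default.
>>     app->Options()->SetStringValue("nlp_scaling_method",
>>                                    "gradient-based");
>>     app->Options()->SetNumericValue("nlp_scaling_max_gradient", 100.0);
>>     // For comparison, automatic scaling can be switched off:
>>     // app->Options()->SetStringValue("nlp_scaling_method", "none");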
>>
>> Thanks,
>>
>> Nicolas
>>
>> Andreas Waechter wrote:
>>
>>> Hi Nicolas,
>>>
>>> Even 2000 iterations sounds like a lot. If your problem has highly
>>> nonlinear constraints, you might see a lot of jumps to the
>>> restoration phase, and you might want to experiment with different
>>> formulations of the constraints. (In general, modeling languages
>>> like AMPL and GAMS are very handy for this, before you sit down and
>>> write Matlab code...)
>>>
>>> But maybe the issue is just that your Hessian is not implemented
>>> correctly. Did you verify your derivatives with the derivative
>>> checker?
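>>>
>>> (The checker is switched on through an option; a minimal sketch,
>>> assuming a C++ application object called app:)
>>>
>>>     // Check user-supplied derivatives against finite differences
>>>     // at the starting point; "second-order" also covers the
>>>     // Hessian of the Lagrangian.
>>>     app->Options()->SetStringValue("derivative_test", "second-order");
>>>     app->Options()->SetNumericValue("derivative_test_tol", 1e-4);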
>>>
>>> Hope this helps,
>>>
>>> Andreas
>>>
>>> On Wed, 13 May 2009, Nicolas Gachadoit wrote:
>>>
>>>
>>>> Hello,
>>>>
>>>> I use Ipopt for optimal control (minimum-time control), and in one
>>>> of my applications the convergence is very slow.
>>>> It is a robotic application (4 DOF); the equations (constraints and
>>>> gradient of the constraints, automatically generated by Maple) are
>>>> very big, so that could be the reason. On the other hand, in
>>>> another application (5 DOF) the convergence is fast (< 2000
>>>> iterations, less than 2 minutes).
>>>> In this application (4 DOF), I tried up to 20000 iterations and it
>>>> still did not converge. Each time I increase max_iter, the result
>>>> is better (the minimum time decreases and the controls get closer
>>>> to their saturation limits), so one possibility would be to set a
>>>> very high max_iter and wait for a few hours.
>>>> But I would like to know whether another option could make the
>>>> convergence faster. Maybe it is a problem of scaling, or something
>>>> else?
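>>>>
>>>> (For reference, I raise the limit through the standard max_iter
>>>> option, here via a C++ application object called app; the line
>>>> "max_iter 20000" in an ipopt.opt options file does the same:)
>>>>
>>>>     // Allow more interior-point iterations before giving up.
>>>>     app->Options()->SetIntegerValue("max_iter", 20000);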
>>>>
>>>> Thanks in advance,
>>>> Best regards,
>>>>
>>>> Nicolas Gachadoit
>>>>
>>>>