[Ipopt] Very slow convergence

Andreas Waechter andreasw at watson.ibm.com
Sun May 17 19:32:34 EDT 2009


Hi Nicolas,

I hadn't realized that you use the quasi-Newton option for the Hessian. 
In that case, Ipopt is not as robust and might indeed take many 
iterations.  One thing that Ipopt is not good at in this case is figuring 
out that it has converged: even close to the solution (small primal 
infeasibility and small changes in the objective function), the dual 
infeasibility might still be large because the multipliers converge 
very slowly.  In this case I usually play with the acceptable_* options, 
which allow you to specify looser convergence tolerances, also based on 
the change of the objective function from one iteration to the next.
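For example, with the C++ interface you could set something like the 
following (just a sketch; the tolerance values are only illustrative 
and need to be tuned for your problem):

  #include "IpIpoptApplication.hpp"
  using namespace Ipopt;

  SmartPtr<IpoptApplication> app = IpoptApplicationFactory();
  // Stop as soon as the looser "acceptable" tolerances have held for
  // a number of consecutive iterations.
  app->Options()->SetNumericValue("acceptable_tol", 1e-4);
  app->Options()->SetIntegerValue("acceptable_iter", 10);
  // Also accept the point if the objective barely changes anymore.
  app->Options()->SetNumericValue("acceptable_obj_change_tol", 1e-5);
  // Tolerate a large dual infeasibility at the "acceptable" point.
  app->Options()->SetNumericValue("acceptable_dual_inf_tol", 1e8);
  app->Initialize();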

And, yes, it is definitely a good idea to experiment with the starting 
point.

Regards,

Andreas

On Thu, 14 May 2009, Nicolas Gachadoit wrote:

> Hi Andreas,
>
> I use the derivative checker for the gradient, and it is OK. I didn't 
> implement the Hessian; I use the limited_memory option (I have about 
> 400 variables and 400 very complex, nonlinear constraints, so 
> implementing the Hessian even with an automatic equation and code 
> generator is too costly).
> After some tests, it appears that I get an improvement if I change the 
> initial point. Up to now, the initial point had a lot of values close 
> to zero. It seems that the scaling computed from this initial point has 
> some influence on the convergence. For example, the variables 
> corresponding to the commands were initialized at 0, but at the optimum 
> they should be around +/-200000.
> Maybe I'm wrong, but I have the impression that:
> small (and unrealistic) initial values -> large scaling factors -> slow 
> convergence
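>
> For example, my get_starting_point now initializes the command 
> variables near their expected magnitude instead of at zero. A 
> simplified sketch (MyNLP, first_cmd and n_cmd are made-up names, and 
> the values are illustrative):
>
>   bool MyNLP::get_starting_point(Index n, bool init_x, Number* x,
>                                  bool init_z, Number* z_L, Number* z_U,
>                                  Index m, bool init_lambda,
>                                  Number* lambda)
>   {
>     assert(init_x && !init_z && !init_lambda);
>     for (Index i = 0; i < n; i++)
>       x[i] = 1.0;      // generic variables: start away from zero
>     for (Index i = first_cmd; i < first_cmd + n_cmd; i++)
>       x[i] = 2.0e5;    // command variables near expected magnitude
>     return true;
>   }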
>
> Thanks,
>
> Nicolas
>
> Andreas Waechter wrote:
>> Hi Nicolas,
>> 
>> Even 2000 iterations sounds like a lot.  If your problem has highly 
>> nonlinear constraints, you might see a lot of jumps into the 
>> restoration phase, and you might want to experiment with different 
>> formulations of the constraints.  (In general, modeling languages like 
>> AMPL or GAMS are very handy for this, before you sit down and write 
>> Matlab code...)
>> 
>> But maybe the issue is just that your Hessian is not implemented 
>> correctly.  Did you verify your derivatives with the derivative 
>> checker?
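>> You can switch it on with something like this (a sketch; the 
>> tolerance value is only illustrative):
>>
>>   // "first-order" checks gradients/Jacobians; "second-order" also
>>   // checks the Hessian, if you provide one.
>>   app->Options()->SetStringValue("derivative_test", "second-order");
>>   app->Options()->SetNumericValue("derivative_test_tol", 1e-4);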
>> 
>> Hope this helps,
>> 
>> Andreas
>> 
>> On Wed, 13 May 2009, Nicolas Gachadoit wrote:
>> 
>>> Hello,
>>> 
>>> I use Ipopt for optimal control (minimum-time control), and in one of 
>>> my applications the convergence is very slow.
>>> It is a robotic application (4 dof); the equations (constraints and 
>>> gradients of the constraints, automatically generated by Maple) are 
>>> very big, so that could be the reason. On the other hand, in another 
>>> application (5 dof), the convergence is fast (< 2000 iterations, less 
>>> than 2 minutes).
>>> In this application (4 dof), I tried up to 20000 iterations and it 
>>> still did not converge. Each time I increase max_iter, the result is 
>>> better (the minimum time decreases and the controls get closer to 
>>> their saturations), so one possibility would be to set a very high 
>>> max_iter and wait for a few hours.
>>> But I would like to know if another option could make the convergence 
>>> faster. Maybe it is a scaling problem, or something else?
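>>> For reference, the options I am currently experimenting with look 
>>> roughly like this (the values are only what I happen to try, not 
>>> recommendations):
>>>
>>>   app->Options()->SetIntegerValue("max_iter", 20000);
>>>   // gradient-based scaling is the default; the cap below limits
>>>   // how aggressively large gradients are scaled down
>>>   app->Options()->SetStringValue("nlp_scaling_method",
>>>                                  "gradient-based");
>>>   app->Options()->SetNumericValue("nlp_scaling_max_gradient", 100.0);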
>>> 
>>> Thanks in advance,
>>> Best regards,
>>> 
>>> Nicolas Gachadoit
>>> 
>> 
>

