[Coin-ipopt] step size too small?

Andreas Waechter andreasw at watson.ibm.com
Thu Feb 23 22:15:06 EST 2006


Hi Johannes,

I assume that your problem has bounds on the optimization variables, and 
that the local solution that you have has some components very close or at 
the bounds - is that correct?

This could explain somewhat the behavior you are seeing; keep in mind that 
Ipopt is an interior point method, and it therefore has to always keep its 
iterates away from the bounds, even if they might eventually converge to a 
solution at the boundary.  So, when you give Ipopt a starting point that 
is close to the bound, Ipopt moves it somewhat away before it starts with 
the optimization; see Section 3.6 in the Ipopt implementation paper:

http://www.research.ibm.com/people/a/andreasw/papers/ipopt.pdf
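
Just to give a rough picture of what happens there (this is only a
simplified sketch of the projection, not the exact rule from the paper;
the parameters correspond to the options bound_push and bound_frac, and
the helper function itself is made up):

  // Simplified sketch: push a starting point x0 with bounds l <= x <= u
  // into the interior.  kappa1 plays the role of bound_push, kappa2 of
  // bound_frac; the defaults below are only placeholders.
  #include <algorithm>
  #include <cmath>

  double push_into_interior(double x0, double l, double u,
                            double kappa1 = 1e-2, double kappa2 = 1e-2)
  {
    double pL = std::min(kappa1 * std::max(1.0, std::fabs(l)),
                         kappa2 * (u - l));
    double pU = std::min(kappa1 * std::max(1.0, std::fabs(u)),
                         kappa2 * (u - l));
    return std::min(std::max(x0, l + pL), u - pU);
  }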

In other words, warm starts are not necessarily easy to do with an 
interior point method, but also not necessarily impossible.  There are a 
few options that you can play with if you want to reduce the amount of 
movement of the initial point (bound_push and bound_frac), and it might 
also work better if you use the option

mu_strategy adaptive
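
If it helps, those settings could be passed through the C++ interface
roughly as below (assuming you use IpoptApplication; you could also just
put the same option names into the ipopt.opt options file, and the values
here are only examples):

  #include "IpIpoptApplication.hpp"
  using namespace Ipopt;

  int main()
  {
    SmartPtr<IpoptApplication> app = new IpoptApplication();
    // push the starting point less far away from the bounds
    app->Options()->SetNumericValue("bound_push", 1e-8);
    app->Options()->SetNumericValue("bound_frac", 1e-8);
    // adaptive update of the barrier parameter
    app->Options()->SetStringValue("mu_strategy", "adaptive");
    app->Initialize();
    // ... then call app->OptimizeTNLP(your_nlp) as usual
    return 0;
  }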

In addition, if you actually solved the optimization problem before and 
also have the final values for the dual variables available, you could 
provide those to the optimizer and use the warm_start option.  (I'm not 
sure how well that works in the current official release, but a new 
version of Ipopt should come out soon, where we also improved the 
warm_start implementation a little bit.)
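
Roughly, that would look like the sketch below (in the C++ code base the
option is called warm_start_init_point, but please check the name against
the release you are using; MyNLP, x_sol_, zL_sol_, zU_sol_ and lam_sol_
are of course just placeholders for your own TNLP class and the stored
solution):

  // In the options:
  //   app->Options()->SetStringValue("warm_start_init_point", "yes");

  // MyNLP derives from Ipopt::TNLP
  bool MyNLP::get_starting_point(Index n, bool init_x, Number* x,
                                 bool init_z, Number* z_L, Number* z_U,
                                 Index m, bool init_lambda, Number* lambda)
  {
    if (init_x)
      for (Index i = 0; i < n; i++)
        x[i] = x_sol_[i];              // primal variables from the old solution
    if (init_z)
      for (Index i = 0; i < n; i++) {
        z_L[i] = zL_sol_[i];           // multipliers for the lower bounds
        z_U[i] = zU_sol_[i];           // multipliers for the upper bounds
      }
    if (init_lambda)
      for (Index i = 0; i < m; i++)
        lambda[i] = lam_sol_[i];       // multipliers for the constraints
    return true;
  }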

Hmmm, now I read your message again, and I'm not even sure I interpreted 
it correctly.  You write that you perturb only one component of your 
optimization variables a little bit, but that Ipopt should move it "about 
1e6 times further to reach the local minimum"...  What absolute numbers 
are we talking about here?  Does the point that Ipopt returns satisfy the 
default tolerances, and is its objective function value very different 
or almost the same?  Is the returned point close to the local solution you 
provided?  Or are we talking about differences such as 1e-6 versus 1e-12 in 
that component of the variables?  (Again, since Ipopt is an interior point 
method and does not have an active set identification implemented, the 
final point usually stays a little bit away from bounds that are active at 
the solution.)

If this didn't make sense to you, or I misread your question, please include a bit more information...

Regards,

Andreas

On Tue, 21 Feb 2006, Dr. Johannes Zellner wrote:

> Hello,
>
> when I move my NLP away just a little from a known local solution (say I
> move only one of about 20 parameters very little), Ipopt is not able to
> find the minimum again. I observed that Ipopt then changes the parameters
> (especially the parameter that was moved) far too little; it should
> move the parameter of interest about 1e6 times further to reach the
> local minimum again. OTOH, with my NLP disturbed in this way (only one
> parameter moved away from a local minimum), other known local optimizers
> (like Levenberg-Marquardt or L-BFGS-B) easily find the local minimum again.
> What could be the problem here with Ipopt?
>
> -- 
> Johannes
