[Coin-ipopt] Ipopt performance

Fournie Laurent laurent.fournie at momagroup.com
Mon Jan 14 12:12:34 EST 2008


Hello,

Thank you for your detailed answers. This confirms what I was expecting
(and saves me a lot of time).
Our discussion is included below, for the mailing list.

Laurent



Sorry for taking a while to respond - I was on vacation.  In future, 
please use the mailing list, as Prof. Biegler had suggested, since other 
Ipopt users might have information and/or be interested in the responses 
to your question.

Prof. Biegler already replied to most of your questions, and I have 
nothing new to add.

I just wanted to clarify one point:

>> I have some questions about computation and convergence time:
>>
>> - Ipopt claims that less than 10% of CPU time is spent in Ipopt
>> computation (w/o function evaluations), while my profiling tool
>> claims it is more than 75%! Do you know where this difference
>> comes from?
>

Ipopt calls the "function evaluation" methods from within Ipopt code.
If your profiler tells you how much time is spent in Ipopt's Optimize
method, this will include the function evaluation time.

Ipopt keeps track of how much time is spent in the functions it calls
to evaluate the values and derivatives of the problem functions, and
when reporting the time in Ipopt (w/o function evaluations), it
subtracts this time from the total time spent inside the Optimize
method.  I hope that explains the difference you see in your timing
results.

I hope this helps,

Andreas

Andreas Waechter
IBM T.J. Watson Research Center
P.O. Box 218 (Rt. 134)
Yorktown Heights, NY 10598
USA




> - I have the following problem:
>
>     min_x  f(g(x))
>     s.t.   h(x) <= A
>            k(g(x)) <= B
>
>   Is it more time-consuming to use the following formulation, which
>   is easier to implement (the numbers of non-zeros in the Hessian
>   and Jacobian are roughly the same)?
>
>     min_{x,y}  f(y)
>     s.t.       y = g(x)
>                h(x) <= A
>                k(y) <= B
>
> - Same question in terms of convergence time?
>

The second formulation will promote sparsity and will probably lead to
better performance.  A trick long used by practitioners is to split up
the equations so that only a few variables appear in each equation.
This of course makes the problem larger, but it has the advantage of
yielding a better pivot sequence (in terms of conditioning and matrix
fill-in).  Moreover, through the judicious use of variable bounds (and
also careful variable initialization), the effect of nonlinearities
can be controlled more effectively.
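
As a rough sketch of the lifted formulation in a modeling layer (Pyomo
here, which is my assumption; the concrete f, g, h, k, A, and B below
are made-up placeholders, chosen only to show the structure):

    import pyomo.environ as pyo

    n = 3
    m = pyo.ConcreteModel()
    m.x = pyo.Var(range(n), initialize=1.0)

    # Lifted variable y with its defining equation y = g(x);
    # placeholder g(x) = sum_i x_i**2.
    m.y = pyo.Var(initialize=float(n))
    m.def_y = pyo.Constraint(expr=m.y == sum(m.x[i]**2 for i in range(n)))

    # h(x) <= A and k(y) <= B, with placeholder h, k, A, B.
    m.h = pyo.Constraint(expr=sum(m.x[i] for i in range(n)) <= 10.0)
    m.k = pyo.Constraint(expr=pyo.exp(m.y) <= 50.0)

    # Objective f(y); placeholder f(y) = (y - 2)**2.
    m.obj = pyo.Objective(expr=(m.y - 2.0)**2)

    pyo.SolverFactory('ipopt').solve(m, tee=True)

Compared with writing f(g(x)) and k(g(x)) directly, the extra equality
keeps the objective and each constraint short in terms of the
variables they touch, which is exactly the sparsity described above.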


> - If I know that the optimal solution satisfies x <= C, is it
> worthwhile to add the constraint x <= C?
>

See the comment above.  As long as the initialization is done
carefully (e.g., x is initialized to satisfy x <= C), this can help
performance.
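
For instance (again a Pyomo sketch; the bound C, the starting point,
and the toy objective are made-up):

    import pyomo.environ as pyo

    C = 5.0  # hypothetical bound known to hold at the optimum
    m = pyo.ConcreteModel()
    # Declare the bound x <= C and initialize strictly inside it, so
    # the interior-point iterates do not start on the boundary.
    m.x = pyo.Var(bounds=(None, C), initialize=C - 1.0)
    m.obj = pyo.Objective(expr=(m.x - 3.0)**2)

    pyo.SolverFactory('ipopt').solve(m)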

Hope this helps,

Larry Biegler



