[Ipopt] Question about CPU time in NLP function evaluations

Ahn, Tae-Hyuk ahnt at ornl.gov
Fri Mar 22 09:38:40 EDT 2013

Hello Stefan,

Thank you very much for your fast response. I really appreciate it.

Yes, the Hessian is very dense. I have a few follow-up questions to your answers.

1. You mentioned making "use of the new_x flag that is passed to the evaluation routines" to check where my code spends so much time. Can you explain in a bit more detail how I can use this flag?

2. I am wondering whether there is an option for "ipopt.opt" to disable some of the function evaluations. I have not looked inside the IPOPT code yet. Should I change the code directly if I want to reduce some steps of the function evaluation?

3. Do you think I can get more efficiency by using a parallel linear solver, e.g., Pardiso? We plan to use MPI to run the binary many times for different models. The IPOPT manual notes that MUMPS may not work correctly with MPI. Do you have any suggestions or recommendations for combining multithreading with MPI?
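(For reference, switching to Pardiso does not require any code changes: the linear solver is selected through the `linear_solver` option in `ipopt.opt`, assuming your Ipopt binary was built with Pardiso support. A minimal `ipopt.opt` would look like:)

```
linear_solver pardiso
```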

Thank you very much.


From: Stefan Vigerske [stefan at math.hu-berlin.de]
Sent: Friday, March 22, 2013 6:02 AM
To: Ahn, Tae-Hyuk
Cc: ipopt at list.coin-or.org; ahn.no1 at gmail.com
Subject: Re: [Ipopt] Question about CPU time in NLP function evaluations


On 03/21/2013 08:23 PM, Ahn, Tae-Hyuk wrote:
> Hello All,
> I am a new user and fan of IPOPT.


> I have a question about CPU time for NLP function evaluations.
> My problem has dynamic variables with complex objective function. Below is the variable info of one example.
> ---------------------------------------------------------------------------
> Number of nonzeros in equality constraint Jacobian...:     1291
> Number of nonzeros in inequality constraint Jacobian.:        0
> Number of nonzeros in Lagrangian Hessian.............:   833986

So your Hessian is indeed completely dense (all elements can be nonzero).

> Total number of variables............................:     1291
>                       variables with only lower bounds:        0
>                  variables with lower and upper bounds:     1291
>                       variables with only upper bounds:        0
> Total number of equality constraints.................:        1
> Total number of inequality constraints...............:        0
>          inequality constraints with only lower bounds:        0
>     inequality constraints with lower and upper bounds:        0
>          inequality constraints with only upper bounds:        0
> ---------------------------------------------------------------------------
> After IPOPT solves the problem, I am satisfied with the results. The problem, however, is the elapsed time.
> ---------------------------------------------------------------------------
> Number of Iterations....: 22
> Total CPU secs in IPOPT (w/o function evaluations)   =     60.527
> Total CPU secs in NLP function evaluations           =  12798.083
> ---------------------------------------------------------------------------
> As you can see, it took 3-4 hours to solve this problem. In particular, the "NLP function evaluations" took almost all of the time.
> Let's assume that f, grad_f, g, jac_g, and h are already optimized (meaning I don't want to change them). How can I reduce the elapsed time? Can I "turn off" the "NLP function evaluation" step?

If you mean that you want to keep the objective and constraints fixed, then
there isn't much left to optimize. That would indeed remove the need to
evaluate your functions, but it raises the question of why to run Ipopt at all.

The time spent in function evaluations is the user's responsibility.
You may want to check where your code spends so much time. If not done
yet, you may want to make use of the new_x flag that is passed to
the evaluation routines.


> If you have any suggestion, please let me know.
> Thank you very much!
> Sincerely,
> Ted
> _______________________________________________
> Ipopt mailing list
> Ipopt at list.coin-or.org
> http://list.coin-or.org/mailman/listinfo/ipopt
