[Ipopt] Question about CPU time in NLP function evaluations
Stefan Vigerske
stefan at math.hu-berlin.de
Sat Mar 23 07:51:58 EDT 2013
Hi,
On 03/22/2013 02:38 PM, Ahn, Tae-Hyuk wrote:
> Hello Stefan,
>
> Thank you very much for your fast response. I really appreciate it.
>
> Yes, the Hessian is very, very dense. I have a few more questions following your answers.
>
> 1. You mentioned "use of the new_x flag that is passed to the evaluation routines" to check where my code spends so much time. Can you explain in a little more detail how I can use this flag?
The eval_* functions that you implement are called with a boolean new_x.
If it is set to false, then one of the evaluation functions has
previously been called with the same x. That may allow you to avoid
duplicate work, e.g., by caching intermediate results that several of
the evaluation functions share.
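As a rough sketch only (MyNLP, computeCommonTerms(), objectiveFromCache(),
and gradientFromCache() are hypothetical names, not part of Ipopt), the
pattern looks like this:

// Recompute expensive shared quantities only when x has changed.
bool MyNLP::eval_f(Index n, const Number* x, bool new_x, Number& obj_value)
{
  if (new_x) {
    // x differs from the last point passed to any eval_* function:
    // redo the shared work and cache the results in member variables
    computeCommonTerms(n, x);
  }
  // assemble the objective value from the cached quantities
  obj_value = objectiveFromCache();
  return true;
}

bool MyNLP::eval_grad_f(Index n, const Number* x, bool new_x, Number* grad_f)
{
  if (new_x) {
    computeCommonTerms(n, x);  // only recompute for a genuinely new x
  }
  gradientFromCache(grad_f);
  return true;
}

The same check applies to eval_g, eval_jac_g, and eval_h, so work that
the evaluation functions share is done at most once per point.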
> 2. I am just wondering whether there is an option in "ipopt.opt" to disable some of the function evaluations. I have not looked inside the IPOPT code yet. Should I change the code directly if I want to reduce some of the function evaluation steps?
Ipopt relies on evaluating your functions; there is no way to disable
this. But you could enable the limited-memory Hessian approximation to
avoid having to implement eval_h.
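For reference, the corresponding entry in ipopt.opt is

hessian_approximation limited-memory

With this setting Ipopt uses an L-BFGS approximation of the Hessian of
the Lagrangian, so your eval_h is not called. Whether that pays off
depends on how expensive your exact Hessian is and how many additional
iterations the approximation costs.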
> 3. Do you think I can get more efficiency by using a parallel linear solver, e.g., Pardiso? We plan to use MPI to run the binary many times for different models. The IPOPT manual notes that MUMPS does not work correctly with MPI. Do you have any suggestion or recommendation for combining multithreading with MPI?
You can gain efficiency if you can reduce the time spent in your function
evaluations. You should know best whether they can be parallelized (see
the sketch below). Using a more efficient linear solver inside Ipopt will
not help you, since the majority of the time is spent outside of Ipopt.
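If, for example, the individual entries of your dense Hessian are
independent of each other, even a simple OpenMP loop over the rows may
help. This is only a sketch under that assumption; hessianEntry() is a
hypothetical, thread-safe routine standing in for your computation of one
entry of the Lagrangian Hessian (it is not part of Ipopt or of your code):

bool MyNLP::eval_h(Index n, const Number* x, bool new_x, Number obj_factor,
                   Index m, const Number* lambda, bool new_lambda,
                   Index nele_hess, Index* iRow, Index* jCol, Number* values)
{
  if (values == NULL) {
    // first call: return the dense lower-triangular sparsity pattern
    Index idx = 0;
    for (Index i = 0; i < n; ++i)
      for (Index j = 0; j <= i; ++j) {
        iRow[idx] = i;
        jCol[idx] = j;
        ++idx;
      }
    return true;
  }
  // later calls: fill the values; if the entries are independent,
  // the outer loop can be distributed over threads
  // (schedule(dynamic) because row i has i+1 entries)
  #pragma omp parallel for schedule(dynamic)
  for (Index i = 0; i < n; ++i) {
    Index offset = i * (i + 1) / 2;  // start of row i in the packed lower triangle
    for (Index j = 0; j <= i; ++j)
      values[offset + j] = hessianEntry(i, j, x, obj_factor, lambda, m);
  }
  return true;
}

Compile with OpenMP enabled (e.g., -fopenmp with gcc) and make sure the
per-entry routine does not write to shared state.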
Note that by setting the option
print_timing_statistics yes
you get more detailed information on where the time was spent.
Stefan
>
> Thank you very much.
>
> Ted
>
> ________________________________________
> From: Stefan Vigerske [stefan at math.hu-berlin.de]
> Sent: Friday, March 22, 2013 6:02 AM
> To: Ahn, Tae-Hyuk
> Cc: ipopt at list.coin-or.org; ahn.no1 at gmail.com
> Subject: Re: [Ipopt] Question about CPU time in NLP function evaluations
>
> Hi,
>
> On 03/21/2013 08:23 PM, Ahn, Tae-Hyuk wrote:
>> Hello All,
>>
>> I am a new user and fan of IPOPT.
>
> Welcome.
>
>> I have a question about CPU time for NLP function evaluations.
>>
>> My problem has dynamic variables with a complex objective function. Below is the variable info of one example.
>>
>> ---------------------------------------------------------------------------
>> Number of nonzeros in equality constraint Jacobian...: 1291
>> Number of nonzeros in inequality constraint Jacobian.: 0
>> Number of nonzeros in Lagrangian Hessian.............: 833986
>
> So your Hessian is indeed completely dense (all elements can be nonzero)?
>
>> Total number of variables............................: 1291
>> variables with only lower bounds: 0
>> variables with lower and upper bounds: 1291
>> variables with only upper bounds: 0
>> Total number of equality constraints.................: 1
>> Total number of inequality constraints...............: 0
>> inequality constraints with only lower bounds: 0
>> inequality constraints with lower and upper bounds: 0
>> inequality constraints with only upper bounds: 0
>> ---------------------------------------------------------------------------
>>
>> After IPOPT solves the problem, I am satisfied with the results. The problem, however, is the elapsed time.
>>
>> ---------------------------------------------------------------------------
>> Number of Iterations....: 22
>> Total CPU secs in IPOPT (w/o function evaluations) = 60.527
>> Total CPU secs in NLP function evaluations = 12798.083
>> ---------------------------------------------------------------------------
>>
>> As you can see, it took 3-4 hours to solve this problem. In particular, "NLP function evaluations" took almost all of the time.
>>
>> Let's assume that f, grad_f, g, jac_g, and h are already optimized (meaning I don't want to change them). How can I reduce the elapsed time? Can I "turn off" the "NLP function evaluation" step?
>
> If you mean that you want to keep the objective and constraints fixed,
> then there isn't much left to optimize. That would indeed remove the need
> to evaluate your functions, but it raises the question of why to run
> Ipopt at all.
>
> The time spent in function evaluations is the user's responsibility.
> You may want to check where your code spends so much time. If you have
> not done so yet, you may want to make use of the new_x flag that is
> passed to the evaluation routines.
>
> Stefan
>
>>
>> If you have any suggestion, please let me know.
>>
>> Thank you very much!
>>
>> Sincerely,
>>
>> Ted
>>
>>
>> _______________________________________________
>> Ipopt mailing list
>> Ipopt at list.coin-or.org
>> http://list.coin-or.org/mailman/listinfo/ipopt
>>
>