[Ipopt] Questions on "Total CPU secs in NLP function evaluations"

Yao Xie xieyao04 at gmail.com
Thu Oct 3 11:54:46 EDT 2013


Thanks so much! It really helps.

I had thought the AMPL interface did not support print_timing_statistics
or the Hessian approximation:
http://www.coin-or.org/Ipopt/documentation/node62.html

Anyway, I turned on the option to print timing statistics and found that
most of the time was spent on the Lagrangian Hessian calculation. I then
tried the Hessian approximation, but the optimization did not converge
after 700 iterations.
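
For reference, both options can be passed through the standard
ipopt_options string in AMPL (an ipopt.opt file in the working directory
also works):

    option solver ipopt;
    option ipopt_options "print_timing_statistics yes";
    # second run, adding the quasi-Newton approximation:
    option ipopt_options
        "print_timing_statistics yes hessian_approximation limited-memory";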

The objective function I had was a nested sum of squares:
\sum_k (\sum_i (A_{i,k} x - b_i)^2)^2, where A_{i,k} is row i of a large
but sparse matrix A_k. k is of order 10, i is of order 10^6, and x has
length 10^3. Only one constraint is nonlinear, and it is much less complex
than the objective function. Computing the Hessian takes 300 seconds. If I
remove the outer sum, the Hessian calculation takes 30 seconds, consistent
with the order of k (300/10 = 30). I also introduced intermediate variables
for \sum_i (A_{i,k} x - b_i)^2, but the time difference between using
intermediate variables and not using them was negligible.
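
For concreteness, here is a stripped-down AMPL sketch of this structure
(all names and sizes here are illustrative, not my actual model):

    param K := 10;          # outer index, ~10 terms (may need ~100)
    param I := 1000000;     # inner index, ~10^6 rows
    param N := 1000;        # length of x
    param A {1..K, 1..I, 1..N} default 0;   # sparse in practice
    param b {1..I};
    var x {1..N};
    # defined variables for the inner sums of squares
    var r {k in 1..K}
        = sum {i in 1..I} (sum {n in 1..N} A[k,i,n]*x[n] - b[i])^2;
    minimize obj: sum {k in 1..K} r[k]^2;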

Given the complexity of this objective function, does Hessian evaluation
have to be this slow? Is parallelization the only option? The order of k
actually needs to be even larger, around 100. Thanks.

Yao


On Tue, Oct 1, 2013 at 6:52 PM, Tony Kelman <kelman at berkeley.edu> wrote:

> 1. Yes, you’re correct that NLP function evaluations means the objective,
> gradient, Jacobian, and Hessian functions. These are considered
> user-defined functions, although when using the AMPL interface to Ipopt it
> is that AMPL interface code that is responsible for evaluating these
> functions. The inside-Ipopt time is primarily the time spent in the linear
> solver (ma27 in your example data), solving the KKT system to determine the
> Newton step at each iteration, potentially multiple times per iteration if
> Hessian regularization is required, then a bit more cheap math for the line
> search. You can get a more detailed breakdown of where the time is being
> spent by setting the option print_timing_statistics to yes.
>
> 2. Conceptually these should be parallelizable since it’s just function
> evaluation and each element of the vector and sparse matrix quantities only
> depends on the current primal-dual evaluation point, not the other
> vector/matrix elements. Since you’re using the AMPL interface I’m not sure
> how easy this is to implement in practice. I don’t believe the AMPL
> interface code is written in a way that could be easily parallelized, but I
> could be wrong here. How complicated would your problem be to implement in
> C++ or something you have more control over? I believe some of the
> auto-differentiation tools should support some form of parallel evaluation?
>
> I find it a bit surprising to see an AMPL model taking almost 4 times
> longer in the function evaluations than the linear solver, I usually expect
> the linear solver time to dominate for most problems (unless you’re using
> an interface like Matlab or Python where the user functions might be
> written as slow interpreted code). Does your problem have a large number of
> complicated nested nonlinearities or discontinuities or something else that
> may be giving AMPL trouble? Look at the more detailed timing statistics to
> see which of the functions is taking most of the time; if the Hessian is
> taking an inordinate amount of time, you could try the quasi-Newton
> limited-memory Hessian option by setting the hessian_approximation option
> to limited-memory. Otherwise, is there any type of reformulation you could
> do, potentially introducing more variables but making the problem sparser
> or consolidating some of the more difficult nonlinearities into fewer
> variables or constraint elements?
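>
> For instance (a generic AMPL sketch; resid[k,i] here stands for whatever
> linear residual appears inside the square; it is not a real name), one
> could replace a defined variable
>
>     var r {k in 1..K} = sum {i in 1..I} resid[k,i]^2;
>     minimize obj: sum {k in 1..K} r[k]^2;
>
> with an actual variable plus a defining constraint:
>
>     var s {k in 1..K};
>     subject to def_s {k in 1..K}:
>         s[k] = sum {i in 1..I} resid[k,i]^2;
>     minimize obj: sum {k in 1..K} s[k]^2;
>
> The objective Hessian is then diagonal in s, and the fourth-power
> composite never appears in a single expression.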
>
> -Tony
>
>
> From: Yao Xie <xieyao04 at gmail.com>
> Sent: Tuesday, October 01, 2013 1:43 PM
> To: ipopt at list.coin-or.org
> Subject: [Ipopt] Questions on "Total CPU secs in NLP function
> evaluations"
>
> Hi all,
>
> I couldn't find documentation explaining the difference between "Total CPU
> secs in IPOPT (w/o function evaluations)" and "Total CPU secs in NLP
> function evaluations". I was using the AMPL interface and got a very long
> function evaluation time (details below).
>
> My questions are:
> 1) What's the difference between these two CPU time? Does "NLP function
> evaluations" mean to evaluate objective function values, Jacobian, Hessian
> matrix? Which parts are in IPOPT and which are outside IPOPT?
> 2) Is there any way to speed up this NLP function evaluation part? It's
> just evaluation, can we parallelize it? Thanks!
>
>
>