[Ipopt] Improving IPOPT speed with Algorithmic Differentiation Theory
Andreas Waechter
andreasw at watson.ibm.com
Fri Sep 19 10:23:51 EDT 2008
Hi Sebastian,
1. The caching in Ipopt is done separately for each individually requested
piece of information (each eval_* call). This means that we don't call
eval_f twice with the same x values.
Your understanding of the new_x flag is correct. It lets you know when x
has changed, so you can decide what to compute when each function is
called, and what you want to cache. You can also use it to compute
eval_g and eval_jac_g together every time, if that is what you want to do,
and to do no caching for any other call. You have all the information you
need; you just have to write the code for it. There is no need to have
Ipopt reverse the order of the calls; it requests the information in
exactly the order it needs it.
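For example, here is a minimal sketch of that pattern against the C++
TNLP interface (illustrative only, not Ipopt code; the combined
evaluate_all routine and the class name are made up):

  #include <algorithm>
  #include <vector>
  #include "IpTNLP.hpp"

  using namespace Ipopt;

  class CachingNLP : public TNLP {
    Number f_cache_;                  // objective at the cached point
    std::vector<Number> grad_cache_;  // gradient at the cached point

    // Hypothetical combined evaluation: one expensive pass fills both caches.
    void evaluate_all(Index n, const Number* x) {
      // ... compute f_cache_ and grad_cache_ together here ...
    }

  public:
    bool eval_f(Index n, const Number* x, bool new_x,
                Number& obj_value) override {
      if (new_x) evaluate_all(n, x);  // refresh the cache only at a new point
      obj_value = f_cache_;
      return true;
    }

    bool eval_grad_f(Index n, const Number* x, bool new_x,
                     Number* grad_f) override {
      if (new_x) evaluate_all(n, x);  // the same guard in every callback
      std::copy(grad_cache_.begin(), grad_cache_.end(), grad_f);
      return true;
    }

    // The remaining TNLP methods (get_nlp_info, eval_g, eval_jac_g, ...)
    // are required as well and would use the same new_x guard.
  };

Because every callback applies the same new_x check, this works no matter
in which order Ipopt requests f and grad_f at a given point.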
2. I do not believe that SNOPT knows in advance whether a trial point is
going to be accepted, so it cannot know ahead of time whether it will also
need grad_f when it asks for f. The order in which optimization
algorithms require the information is usually the one you see for Ipopt.
If you want to know more about optimization algorithms, you could look at
a textbook (such as "Numerical Optimization" by Nocedal and Wright, but
there are many others), or, if you want to know more about the Ipopt
algorithm, you can look at the corresponding paper:
http://www.research.ibm.com/people/a/andreasw/papers/ipopt.pdf
Regards,
Andreas
On Fri, 19 Sep 2008, Brad Bell wrote:
> As I understand AD theory, and this is certainly true with CppAD and
> ADOL-C, each time you evaluate f(x) you can retain the forward mode
> results. Then, if grad_f is requested at the same point, you only need
> to execute a reverse mode sweep to obtain grad_f (using the results of
> the forward mode sweep at the same point).
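> For illustration, a minimal CppAD sketch of this forward-then-reverse
> pattern (the toy objective here is made up):
>
>   #include <vector>
>   #include <cppad/cppad.hpp>
>
>   int main() {
>     using CppAD::AD;
>     // record the toy objective f(x) = x0*x0 + x1 on a tape
>     std::vector<AD<double> > ax(2), ay(1);
>     ax[0] = 1.0;  ax[1] = 1.0;
>     CppAD::Independent(ax);            // start recording
>     ay[0] = ax[0] * ax[0] + ax[1];
>     CppAD::ADFun<double> f(ax, ay);    // stop recording, keep the tape
>
>     std::vector<double> x(2);  x[0] = 3.0;  x[1] = 4.0;
>     std::vector<double> y = f.Forward(0, x);  // forward sweep: y[0] = 13,
>                                               // intermediate results kept
>     std::vector<double> w(1, 1.0);
>     std::vector<double> g = f.Reverse(1, w);  // reverse sweep reuses them:
>                                               // g = {6, 1}
>     return 0;
>   }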
>
> Many line search methods only use function values and it is only at the
> final line search point that gradient values are required. But this is only
> known after inspecting the function value.
>
> Thus, the use of the new_x flag as defined by Ipopt makes sense to me.
>
>
> Sebastian Walter wrote:
>>
>> Hello Andreas, hello Andrey,
>>
>> thanks for your detailed replies :)
>> I think we are talking about different things:
>> Caching and Algorithmic Differentiation (AD)
>>
>> My question was actually about AD. But now that you explained the
>> caching in more detail, I am not so sure I had fully understood the idea
>> of the new_x flag.
>>
>> 1) === The caching
>> Wasn't there a remark in the IPOPT documentation that IPOPT does this
>> caching automatically? I'm not sure what calculate() and cache()
>> really do.
>> Did I get this right?
>> -- The caching is performed automatically by IPOPT for each of f,
>> grad_f, g, jac_g separately.
>> -- But sometimes it is very cheap to compute f, grad_f, g, jac_g, ...
>> all at once. To allow this all-at-once computation whenever a new x has
>> been computed, the new_x flag has been introduced.
>>
>> 2) === The Algorithmic Differentiation Theory
>>
>> Well, what I meant to say was:
>> in SNOPT you specify the objective function with a signature similar to
>> this:
>>
>> objective_function(int mode, double *x, double *f, double *grad_f);
>>
>> So there is only one callback for BOTH the function value and the
>> gradient:
>> if mode == 0: only f is evaluated
>> if mode == 1: only grad_f is evaluated
>> if mode == 2: both f and grad_f are evaluated
>>
>> SNOPT knows when it is going to need both f and grad_f, and then it
>> calls objective_function with mode == 2.
>> Since the computation of grad_f always implicitly computes f as well,
>> one can save some function calls.
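>> A sketch of such a combined callback follows (the toy objective and
>> the exact prototype are only illustrative, not SNOPT's real interface):
>>
>>   /* f(x) = x0*x0 + x1*x1 in two variables */
>>   void objective_function(int mode, double *x, double *f, double *grad_f)
>>   {
>>     if (mode == 0 || mode == 2)
>>       *f = x[0] * x[0] + x[1] * x[1];   /* function value   */
>>     if (mode == 1 || mode == 2) {
>>       grad_f[0] = 2.0 * x[0];           /* gradient entries */
>>       grad_f[1] = 2.0 * x[1];
>>     }
>>   }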
>>
>> Now I don't really know how IPOPT works internally, but there should
>> be many cases where both the function and its gradient are needed at
>> the same point x, e.g. in the first step of the line search.
>> This is also what I observed when I kept track of the updates of x.
>> The call sequence looks like this:
>>
>> eval_f(x_k)
>> eval_grad_f(x_k)
>> eval_f(x_k+1)
>> eval_grad_f(x_k+1)
>>
>> One could save function calls by combining them:
>> eval_f_and_grad_f_at_once(x_k)
>> eval_f_and_grad_f_at_once(x_k+1)
>>
>> Our function evaluations are pretty expensive, which is why it would
>> be nice to avoid those redundant calls.
>>
>> What I was trying to explain before was that I tried to use new_x as
>> a workaround, but I realized that my workaround would only work if the
>> call sequence looked like this:
>>
>> eval_grad_f(x_k)
>> eval_f(x_k)
>> eval_grad_f(x_k+1)
>> eval_f(x_k+1)
>>
>> i.e. the reverse of the order Ipopt actually uses.
>>
>>
>
>