[Ipopt] improving IPOPT speed with Algorithmic Differentiation Theory
Sebastian Walter
walter at mathematik.hu-berlin.de
Thu Sep 18 09:50:27 EDT 2008
Hey Brad!
Nice to see you here. Remember me? We talked at the AD conference in Bonn ;)
An interface doesn't help us here, though, for two reasons:
1) We also use other optimizers (SNOPT,...)
2) Our objective function depends on the solution of a DAE:
min_q Phi( x(T,q), x_q(T,q) )
s.t. x(T,q) solves the DAE
dx/dt = f(t,x,q)
0 = g(t,x,q)
where
Phi is the objective function
x(T,q) the solution of the DAE at time T
x_q(T,q) is shorthand for dx/dq(T,q)
q is a control variable
f is the differential rhs of the DAE
g is the algebraic rhs of the DAE
At the moment we use forward tangent propagation like this (steps 2 and 3
are written out in formulas below):
1) differentiate the rhs of the DAE, f and g, with ADIFOR
2) forward Taylor propagation through a DAE integrator (DAESOL II)
3) forward Taylor propagation of Phi by hand
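In formulas (under the semi-explicit form above), step 2 propagates the
tangent x_q through the variational DAE obtained by differentiating the
DAE with respect to q,

    d/dt x_q = f_x * x_q + f_q
    0 = g_x * x_q + g_q

and step 3 amounts to the chain rule for Phi, writing x_qq for
d^2x/dq^2(T,q):

    dPhi/dq = Phi_x * x_q(T,q) + Phi_{x_q} * x_qq(T,q)

Because Phi depends on x_q, the sweep has to carry second-order
information x_qq through the integrator as well.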
So it's not that straightforward to plug in an AD tool, since there is
also a lot of legacy Fortran code involved.
Brad Bell wrote:
> Are you using algorithmic differentiation to compute your derivatives
> for Ipopt? I have been working on an interface to do this. See
> http://www.coin-or.org/CppAD/Doc/ipopt_cppad_nlp.xml
>
> Sebastian Walter wrote:
> Hello everyone,
>
> We are working on a software project called VPLAN, which computes
> optimal experimental designs.
>
> At the moment we use SNOPT for the optimization. However, SNOPT is
> proprietary and therefore we are looking for good alternatives ;)
>
> We have already successfully incorporated IPOPT. The optimization works
> and gives the same results as SNOPT.
>
> However, for our test examples, SNOPT clearly outperforms IPOPT with
> respect to the number of function evaluations needed until convergence.
>
> Well, so we'd like to speed up IPOPT a little bit.
>
> We noticed that the following pattern often occurs in IPOPT:
> ...
> eval_f(x_13)
> eval_grad_f(x_13)
> eval_f(x_14)
> eval_grad_f(x_14)
> eval_f(x_15)
> eval_grad_f(x_15)
>
> Algorithmic differentiation theory tells us that we get the function
> value for free when we evaluate the gradient.
> All we need is some way to cache the redundant computations.
>
> Is there an easy way to do that in IPOPT?
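A minimal sketch of such a cache, for illustration: the eval_f /
eval_grad_f signatures and the new_x flag mirror IPOPT's C++ TNLP
callbacks (new_x == false means x is unchanged since the last eval_*
call), but the class itself and the dummy objective are made up.

#include <cstddef>
#include <vector>

class CachedObjective {
public:
    // eval_f: on a new point, do a value-only evaluation (cheap compared
    // to the tangent sweep); otherwise return the cached value.
    bool eval_f(std::size_t n, const double* x, bool new_x, double& obj) {
        if (new_x || !f_valid_) {
            f_cache_ = value_only(n, x);
            f_valid_ = true;
            g_valid_ = false;          // gradient cache is now stale
        }
        obj = f_cache_;
        return true;
    }

    // eval_grad_f: one tangent sweep yields f AND grad f, so cache both;
    // a later eval_f at the same point then costs nothing.
    bool eval_grad_f(std::size_t n, const double* x, bool new_x, double* g) {
        if (new_x || !g_valid_) {
            tangent_sweep(n, x);
            f_valid_ = g_valid_ = true;
        }
        for (std::size_t i = 0; i < n; ++i)
            g[i] = g_cache_[i];
        return true;
    }

private:
    // Stand-ins for the real DAE solve and AD sweep; here f = sum x_i^2.
    double value_only(std::size_t n, const double* x) {
        double f = 0.0;
        for (std::size_t i = 0; i < n; ++i)
            f += x[i] * x[i];
        return f;
    }

    void tangent_sweep(std::size_t n, const double* x) {
        f_cache_ = value_only(n, x);   // value comes along for free
        g_cache_.assign(n, 0.0);
        for (std::size_t i = 0; i < n; ++i)
            g_cache_[i] = 2.0 * x[i];  // gradient of the dummy objective
    }

    bool f_valid_ = false;
    bool g_valid_ = false;
    double f_cache_ = 0.0;
    std::vector<double> g_cache_;
};

The point is just that new_x tells you when reuse is safe: whenever
eval_f is called after eval_grad_f at the same point, the value produced
as a by-product of the gradient sweep is returned from the cache instead
of being recomputed.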
>
>
> best regards,
> Sebastian Walter