[Ipopt] improving IPOPT speed with Algorithmic Differentiation Theory

Sebastian Walter walter at mathematik.hu-berlin.de
Thu Sep 18 05:23:26 EDT 2008


Hello everyone,

We are working on a software project called VPLAN, which computes optimal
experimental designs.

At the moment we use SNOPT for the optimization. However, SNOPT is
proprietary, so we are looking for good alternatives ;)

We have already successfully incorporated IPOPT. The optimization works
and gives the same results as SNOPT.

However, for our test examples, SNOPT clearly outperforms IPOPT with
respect to the number of function evaluations needed until convergence.

So we would like to speed up IPOPT a bit.

We noticed that the following call pattern often occurs in IPOPT:
...
eval_f(x_13)
eval_grad_f(x_13)
eval_f(x_14)
eval_grad_f(x_14)
eval_f(x_15)
eval_grad_f(x_15)

Algorithmic differentiation theory tells us that we get the function value
for free when we evaluate the gradient: a reverse sweep computes f(x) as a
by-product of computing grad f(x). All we need is some way to cache these
redundant computations.
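
To make the idea concrete, here is a minimal, self-contained C++ sketch of
the caching we have in mind. This is not IPOPT code: eval_f and eval_grad_f
only mimic the shape of the callbacks above, and expensive_f /
expensive_f_and_grad are hypothetical stand-ins for our model evaluation
and its AD sweep.

    #include <cmath>
    #include <cstdio>
    #include <cstring>
    #include <vector>

    // Remember the point of the last gradient evaluation and reuse the
    // function value that the AD sweep produced as a by-product.
    struct ValueCache {
      std::vector<double> x;  // point of the last gradient sweep
      double f = 0.0;         // f(x) obtained during that sweep
      bool valid = false;

      bool hit(const double* x_new, int n) const {
        return valid && (int)x.size() == n &&
               std::memcmp(x.data(), x_new, n * sizeof(double)) == 0;
      }
    };

    static ValueCache cache;

    // Hypothetical stand-in for one expensive model evaluation.
    static double expensive_f(const double* x, int n) {
      double s = 0.0;
      for (int i = 0; i < n; ++i) s += x[i] * std::sin(x[i]);
      return s;
    }

    // Hypothetical AD sweep: fills grad_f and returns f(x) in one pass,
    // which is exactly what reverse-mode AD gives us for free.
    static double expensive_f_and_grad(const double* x, int n,
                                       double* grad_f) {
      for (int i = 0; i < n; ++i)
        grad_f[i] = std::sin(x[i]) + x[i] * std::cos(x[i]);
      return expensive_f(x, n);
    }

    bool eval_grad_f(int n, const double* x, double* grad_f) {
      cache.f = expensive_f_and_grad(x, n, grad_f);  // one sweep, both results
      cache.x.assign(x, x + n);
      cache.valid = true;
      return true;
    }

    bool eval_f(int n, const double* x, double* obj_value) {
      if (cache.hit(x, n)) {           // same point as the last gradient call:
        *obj_value = cache.f;          // no model evaluation needed
        return true;
      }
      *obj_value = expensive_f(x, n);  // otherwise do a full evaluation
      return true;
    }

    int main() {
      const int n = 3;
      double x[n] = {0.1, 0.2, 0.3}, g[n], f = 0.0;
      eval_grad_f(n, x, g);  // AD sweep: gradient plus cached f(x)
      eval_f(n, x, &f);      // same point: served from the cache
      std::printf("f = %g\n", f);
      return 0;
    }

The memcmp over x is O(n) per call, which should be negligible next to one
evaluation of the model.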

Is there an easy way to do that in IPOPT?


best regards,
Sebastian Walter