[Ipopt] Using Ipopt for "one-shot" optimisation

Andreas Waechter andreasw at watson.ibm.com
Tue Mar 9 04:58:58 EST 2010


Hi Jens,

I'm not sure if I understand completely, but it seems to me that you are 
asking if it is possible to compute ALL problem information at once 
(values and derivatives) when new values of the optimization variables x 
are used, and not just separately as the Ipopt methods/functions seem to 
indicate.

If that is your question, the answer is "yes".  You can make use of the 
"new_x" flag, see, e.g.,

http://www.coin-or.org/Ipopt/documentation/node37.html

If Ipopt calls any of the evaluation methods with a value of "x" that is 
new (i.e., none of the evaluation methods have been called with it 
before), this flag is true.  If Ipopt has already called an evaluation 
method with those values of x (e.g., it has called eval_g with those x 
before and is now calling eval_grad_f), the flag is false.

So, if you want to compute all problem information at once, you can just 
check at the beginning of each evaluation method whether the flag is true.  
If it is, you can call some internal method in your class that computes 
all problem information and stores (caches) it in private members.  All 
evaluation methods can then return the cached data when they are called 
with new_x set to false.  Note that we do not assume a fixed order in 
which the evaluation methods are called by Ipopt, so you need to check 
new_x in every evaluation method.  In the Ipopt source code, you can find 
an example of the usage of this feature in the AMPL interface code, 
Ipopt/src/Apps/AmplSolver/AmplTNLP.cpp; look for "apply_new_x".
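
For illustration, here is a minimal (untested) C++ sketch of this caching 
pattern for a TNLP subclass.  The class name, the cached members and the 
update_cache helper are made up for this example, and the remaining TNLP 
methods (get_nlp_info, get_bounds_info, get_starting_point, eval_jac_g, 
eval_h, finalize_solution, ...) are omitted:

   #include "IpTNLP.hpp"
   #include <algorithm>
   #include <vector>

   using namespace Ipopt;

   class OneShotNLP : public TNLP
   {
   public:
     // ... the other required TNLP methods are omitted here; only the
     //     new_x caching pattern is shown ...

     virtual bool eval_f(Index n, const Number* x, bool new_x,
                         Number& obj_value)
     {
       if (new_x) update_cache(n, x);  // recompute everything once per new x
       obj_value = cached_f_;
       return true;
     }

     virtual bool eval_grad_f(Index n, const Number* x, bool new_x,
                              Number* grad_f)
     {
       if (new_x) update_cache(n, x);
       std::copy(cached_grad_f_.begin(), cached_grad_f_.end(), grad_f);
       return true;
     }

     virtual bool eval_g(Index n, const Number* x, bool new_x,
                         Index m, Number* g)
     {
       if (new_x) update_cache(n, x);
       std::copy(cached_g_.begin(), cached_g_.end(), g);
       return true;
     }

   private:
     // Compute functional, gradient and constraints for the new x and store
     // them, e.g. by (partially) converging the flow and adjoint solvers.
     void update_cache(Index n, const Number* x)
     {
       // cached_f_ = ...; cached_grad_f_ = ...; cached_g_ = ...;
     }

     Number              cached_f_;
     std::vector<Number> cached_grad_f_;
     std::vector<Number> cached_g_;
   };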

If you are not using the C++ interface, you can either use global 
variables to cache the precomputed data, or you can make use of the 
UserDataPtr user_data argument (C interface) or the IDAT, DAT arguments 
(Fortran interface) in the Ipopt interface functions.  Those are meant to 
give the user a way to pass memory through the Ipopt call without Ipopt 
touching it.
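
With the C interface, a hypothetical version of the same pattern could 
look roughly like the following; the ProblemData struct and the 
update_cache helper are made up, while the callback signature with 
UserDataPtr comes from IpStdCInterface.h:

   #include "IpStdCInterface.h"

   // Hypothetical container for the cached data, passed through user_data.
   struct ProblemData
   {
     Number  f;        // cached objective value
     Number* grad_f;   // cached gradient
     Number* g;        // cached constraint values
   };

   // Recompute and cache everything for a new x (hypothetical helper).
   static void update_cache(Index n, const Number* x, ProblemData* data)
   {
     /* run the flow/adjoint solver and fill data->f, grad_f, g */
   }

   static Bool my_eval_f(Index n, Number* x, Bool new_x,
                         Number* obj_value, UserDataPtr user_data)
   {
     ProblemData* data = (ProblemData*) user_data;
     if (new_x) update_cache(n, x, data);
     *obj_value = data->f;
     return (Bool) 1;
   }

   // A pointer to the ProblemData object is passed as the last argument of
   // IpoptSolve, and Ipopt hands it back to every callback via user_data.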

Note that during the line search, Ipopt might ask for function values at a 
trial point, but not later for the derivative information at that point if 
the trial point was rejected.  So, always computing derivatives together 
with the regular function values might cause some overhead because 
unnecessary information is computed.
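
If that overhead matters, one possible variant of the sketch above is to 
split update_cache into a cheap value update and an expensive derivative 
update, and to trigger the latter only from the derivative callbacks.  The 
update_values, update_derivatives and derivatives_valid_ members below are 
hypothetical:

   virtual bool eval_grad_f(Index n, const Number* x, bool new_x,
                            Number* grad_f)
   {
     if (new_x) {
       update_values(n, x);          // cheap: functional and constraints only
       derivatives_valid_ = false;   // mark the cached gradient as stale
     }
     if (!derivatives_valid_) {
       update_derivatives();         // expensive: adjoint solve
       derivatives_valid_ = true;
     }
     std::copy(cached_grad_f_.begin(), cached_grad_f_.end(), grad_f);
     return true;
   }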

I hope this helps,

Andreas

On Thu, 25 Feb 2010, Jens-Dominik Mueller wrote:

> Hello,
>
> we are looking for an optimisation package that we can couple to our flow
> and adjoint (gradient) solver for CFD. To reduce the heavy computational
> cost of the function and gradient evaluations, we are using a one-shot
> method that solves the KKT system in a block-coupled fashion as shown
> below.
>
> In particular, after an initial convergence to get a rough approximation
> of the functional and gradient, in the design loop we continuously
> converge the state and adjoint while updating the design/control
> variables x. This is nicely done with a callback implementation of the
> optimiser, such as the classic Northwestern L-BFGS that we are currently
> using.
>
> Can we achieve something similar with Ipopt?
>
> Many thanks, Jens.
>
>
> - Initial convergence of flow and adjoint, say 1-2 O of magnitude
> - evaluate functional and gradient
>
> - design loop:
>   call optimiser
>   if ( optimiser says new_x )
>     apply new_x to grid
>     converge flow, adjoint some more
>     evaluate functional and gradient
>   else if ( optimiser says done )
>     done
>   else
>     ...
>   end if
>
>
> ---------------------------------------------------------------
>   Dr. Jens-Dominik Müller
>   School of Engineering and Materials Science
>   Mile End Road
>   Queen Mary, University of London
>   London, E1 4NS
>
>   tel: (+44) (0) 20 7882 5421, fax: (+44) (0) 20 8983 1007
>
> _______________________________________________
> Ipopt mailing list
> Ipopt at list.coin-or.org
> http://list.coin-or.org/mailman/listinfo/ipopt
>
>

