[Ipopt] accessing Lagrange multipliers in all callback routines

Ian Washington washinid at mcmaster.ca
Wed Jun 25 20:41:05 EDT 2014


Hello All,

I currently have an NLP with expensive constraint function evaluations
that require the solution of an embedded ODE or DAE via an adaptive
integration solver (i.e., a dynamically constrained NLP).

Ideally I would like to evaluate the embedded DAE states/sensitivities
only once for a given set of NLP variables (and only if these variables
change) per major optimization function evaluation (i.e., objective,
constraint, Jacobian, Hessian callbacks). This is fairly straightforward
using global variables: for example, I can evaluate all required
functions (objective, constraints) and first derivatives in, say,
Ipopt's objective function callback and then make these results
available to the remaining callback routines via global variables.
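
For reference, the caching pattern I have in mind looks roughly like the
following sketch, using Ipopt's C++ TNLP interface and member variables
rather than true globals (updateCache()/solveEmbeddedDAE() are just
placeholders for my own code, and the other required TNLP methods are
omitted):

#include "IpTNLP.hpp"
#include <vector>

using namespace Ipopt;

class DynNLP : public TNLP
{
public:
   virtual bool eval_f(Index n, const Number* x, bool new_x,
                       Number& obj_value)
   {
      updateCache(n, x, new_x);       // DAE solve happens (at most) here
      obj_value = cached_obj_;
      return true;
   }

   virtual bool eval_g(Index n, const Number* x, bool new_x,
                       Index m, Number* g)
   {
      updateCache(n, x, new_x);       // reuses results if x is unchanged
      for (Index i = 0; i < m; ++i)
         g[i] = cached_g_[i];
      return true;
   }

   // eval_grad_f / eval_jac_g would follow the same pattern ...
   // (remaining TNLP methods omitted for brevity)

private:
   void updateCache(Index n, const Number* x, bool new_x)
   {
      if (!new_x && have_cache_)
         return;                      // x has not changed: skip the DAE solve
      // solveEmbeddedDAE(n, x, ...); // placeholder: states + first-order
      //                              // sensitivities from the integrator
      // ... fill cached_obj_, cached_g_, cached first derivatives ...
      have_cache_ = true;
   }

   bool have_cache_;
   Number cached_obj_;
   std::vector<Number> cached_g_;
};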

However, when generating exact second-order information (of the
constraints involving the embedded model states) using, for example, a
forward-over-adjoint second-order sensitivity analysis, I need to pass
the Lagrange multipliers of the constraints in question to the embedded
DAE sensitivity solver. So, if I am to generate this second-order
information within Ipopt's objective function callback, I would need
access to the internal Lagrange multipliers. Currently it seems only
possible to get this information directly within the Lagrangian Hessian
callback.
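
That is, the multipliers only ever show up in eval_h, roughly as below
(continuing the class sketch above, with eval_h declared there;
secondOrderSweep() is again a placeholder for the forward-over-adjoint
DAE solve):

bool DynNLP::eval_h(Index n, const Number* x, bool new_x,
                    Number obj_factor, Index m, const Number* lambda,
                    bool new_lambda, Index nele_hess,
                    Index* iRow, Index* jCol, Number* values)
{
   if (values == NULL) {
      // first call: fill iRow/jCol with the Hessian sparsity pattern
      return true;
   }
   updateCache(n, x, new_x);                  // first-order results as before
   // secondOrderSweep(n, x, m, lambda, ...); // placeholder: forward-over-adjoint
   //                                         // solve, needs lambda -- which is
   //                                         // only handed to me here
   // ... fill values[] with obj_factor * Hess(f) + sum_i lambda[i] * Hess(g_i)
   return true;
}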

So my question is: does anybody know of a way to make the internal
Lagrange multipliers available to callback routines other than just the
Hessian callback?

Thanks,
Ian.
