[Ipopt] New Interface Suggestion for PDE constrained optimization

Drosos Kourounis drosos at stanford.edu
Wed Nov 18 22:05:46 EST 2009


Dear Andreas,

thank you very much for the extensive email and your suggestions. I
don't think I explained the situation to you clearly, and I believe you
are missing a small detail that changes things considerably. So here
is the whole story:

After the call to Simulator::simulate(), you have all the information
you need for the evaluation of all the virtual functions:

[ eval_f, eval_g, eval_grad_f, eval_jac_g ]

and possibly the Hessian (eval_h) as well, although we do not use it at the moment.


"You do not need to call the simulator for each single one of them."

This is the point I think you misunderstood.  The simulator has to run
once before the Ipopt initialization phase, and once before each Ipopt
iteration; this is the case for all PDE-constrained optimization
problems, which form a huge class. In these problems the objective,
its gradient, the constraints, and their Jacobian can only be
evaluated after the PDE solver (the simulator in our case) has
finished running.
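
To make this concrete, here is a rough, self-contained sketch of what I
mean (the names SimulationResult and Simulator below are only
illustrative placeholders, not our actual code): one forward run of the
simulator produces everything the subsequent callbacks ask for, so each
eval_* should reduce to reading cached values.

#include <vector>

// Everything one forward solve of the PDE produces for a given control
// vector x.
struct SimulationResult {
  double              objective;       // f(x)
  std::vector<double> objective_grad;  // grad f(x)
  std::vector<double> constraints;     // g(x)
  std::vector<double> jacobian;        // nonzeros of the Jacobian of g(x)
};

class Simulator {
public:
  // One (expensive) forward solve per optimization iterate; the result
  // is cached so that every later eval_* call is a cheap lookup.
  const SimulationResult& simulate(const std::vector<double>& x) {
    // ... integrate the PDE, compute sensitivities/adjoints ...
    result_.objective = 0.0;                      // placeholder values
    result_.objective_grad.assign(x.size(), 0.0);
    return result_;
  }
  const SimulationResult& result() const { return result_; }

private:
  SimulationResult result_;
};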

I understand that your TNLP interface is quite general and can
probably be hacked to do the job we want. But I think you agree that
it has to be hacked in a different way for different problems (i.e.
constrained vs. unconstrained optimization problems). I describe below
a simple approach that requires a minimal modification of the TNLP
interface and maintains backward compatibility, while at the same
time solving our problem once and for all and providing a new, more
general interface that will make people working in PDE-constrained
optimization happier.

===========================================

We need one more virtual function (not pure), empty in the base class:
TNLP::simulate(). It should be called once before the Ipopt
initialization phase (where eval_jac_g is called for the structure,
etc.) and also at the beginning of each Ipopt iteration, before
anything else. Users who do not override it remain backward
compatible, since the function is empty and does nothing for them.
Users who need it can override it and place the call to their own
Simulator::simulate() inside its definition (see the sketch after this
block). No need for case-dependent hacks that hurt the readability and
simplicity of the code.

===========================================
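
To make the proposal concrete, here is a rough sketch of the idea. The
base class below is only a stand-in for the relevant part of TNLP, not
the real declaration from IpTNLP.hpp, and MyPDENLP and Simulator are
placeholders for the user's code.

#include <cstddef>

// Stand-in for the relevant part of the TNLP base class.
class TNLPBase {
public:
  virtual ~TNLPBase() {}

  // The proposed hook: empty in the base class, so every existing TNLP
  // subclass keeps exactly its current behaviour.  Ipopt would call it
  // once before initialization and once at the start of each iteration,
  // before any other callback.
  virtual void simulate(std::size_t n, const double* x) {}

  virtual bool eval_f(std::size_t n, const double* x, double& obj_value) = 0;
  // ... eval_grad_f, eval_g, eval_jac_g, eval_h as today ...
};

// Placeholder for the user's PDE solver.
class Simulator {
public:
  void simulate(const double* x, std::size_t n) { /* one forward PDE solve */ }
  double objective() const { return 0.0; }  // value cached by the last run
};

// A PDE-constrained user only has to override the hook.
class MyPDENLP : public TNLPBase {
public:
  virtual void simulate(std::size_t n, const double* x) {
    sim_.simulate(x, n);            // the single expensive call per iterate
  }
  virtual bool eval_f(std::size_t n, const double* x, double& obj_value) {
    obj_value = sim_.objective();   // cheap lookup, no re-simulation
    return true;
  }
private:
  Simulator sim_;
};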

As you see, this is very simple and very easy to implement. If you
let me know which files to modify, I could do it for you. Honestly,
any hack around this problem I can think of is case-dependent (i.e.
different for constrained and unconstrained problems).

In constrained optimization eval_jac_g is called before eval_grad_f;
in unconstrained problems, eval_jac_g and its relatives are not called
at all. So I could put the call to Simulator::simulate() in
eval_jac_g, but in the unconstrained case it would have to live in
eval_grad_f instead, and I would also have to make sure that the call
in eval_grad_f is skipped whenever constraints are present (m != 0),
because the call then belongs in eval_jac_g, which is called before
eval_grad_f. However, the argument list of eval_grad_f does not
include the number of constraints, so one has to store it as a private
member of the class and initialize it in get_nlp_info. As you see,
this is a more complicated and uglier workaround than the simple
solution above, which is straightforward for someone with experience
in the Ipopt source (see the sketch below).
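
For comparison, here is roughly what that workaround looks like. The
signatures are the ones from IpTNLP.hpp, but MyNLP and Simulator are
placeholders, and the remaining TNLP methods are omitted for brevity,
so the class stays abstract as written.

#include "IpTNLP.hpp"

using namespace Ipopt;

class Simulator {
public:
  void simulate(const Number* x, Index n) { /* forward PDE solve */ }
  void objective_gradient(Number* grad_f) { /* copy cached grad f(x) */ }
  void constraint_jacobian(Number* values) { /* copy cached Jacobian */ }
};

class MyNLP : public TNLP {  // other TNLP methods omitted for brevity
public:
  virtual bool get_nlp_info(Index& n, Index& m, Index& nnz_jac_g,
                            Index& nnz_h_lag, IndexStyleEnum& index_style)
  {
    n = n_;  m = m_;              // m_ has to be remembered here ...
    nnz_jac_g = nnz_jac_;
    nnz_h_lag = 0;
    index_style = C_STYLE;
    return true;
  }

  // Constrained case: this is called before eval_grad_f, so the
  // simulator run has to live here.
  virtual bool eval_jac_g(Index n, const Number* x, bool new_x,
                          Index m, Index nele_jac,
                          Index* iRow, Index* jCol, Number* values)
  {
    if (values != NULL) {         // values == NULL is the structure call
      sim_.simulate(x, n);
      sim_.constraint_jacobian(values);
    }
    return true;
  }

  // Unconstrained case: eval_jac_g is never called, so the run has to
  // move here, but only when m_ == 0, which eval_grad_f cannot see
  // from its own argument list.
  virtual bool eval_grad_f(Index n, const Number* x, bool new_x,
                           Number* grad_f)
  {
    if (m_ == 0)
      sim_.simulate(x, n);
    sim_.objective_gradient(grad_f);
    return true;
  }

private:
  Simulator sim_;
  Index n_, m_, nnz_jac_;         // ... so that eval_grad_f can test it
};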

So what do you think?

Best wishes,
Drosos.


