[Ipopt] New Interface Suggestion for PDE constrained optimization
Andreas Waechter
andreasw at watson.ibm.com
Fri Nov 20 14:04:06 EST 2009
Hi Drosos,
If I understand you correctly, you need to run a simulator program any
time there are new values for the variables x (you call that an iteration,
but during the line search Ipopt might request function values for several
points, not just one). As I tried to explain to you before, there is
already a mechanism in place in the TNLP class that should make that
possible for you:
The new_x flag is set to true if and only if NONE of the eval_* methods
has seen these values of x before. This means that when new_x is true
you can call your simulator function, compute everything you need for
all the eval_* functions at once, and store it somewhere; in later calls
to the other eval_* methods you can then just use that precomputed data,
since new_x will be false.
So, all you have to do is add the line
   if (new_x) Simulator::simulate(x);
at the beginning of the eval_* methods in the TNLP. I assume that your
simulate method needs to know the values of the optimization variables in
order to know what to simulate.
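As an illustration, here is a sketch of that pattern (the Simulator
calls stand in for your own code, and the remaining TNLP methods are
omitted):

   #include "IpTNLP.hpp"
   using namespace Ipopt;

   // "Simulator" stands in for your own PDE solver wrapper:
   namespace Simulator {
     void simulate(const Number* x);          // run the PDE solver at x
     Number objective();                      // objective from the last run
     void gradient(Number* grad_f, Index n);  // gradient from the last run
   }

   class MyPdeNlp : public TNLP {
   public:
     virtual bool eval_f(Index n, const Number* x, bool new_x,
                         Number& obj_value)
     {
       if (new_x) Simulator::simulate(x);   // one simulator run per new x
       obj_value = Simulator::objective();  // read the precomputed value
       return true;
     }

     virtual bool eval_grad_f(Index n, const Number* x, bool new_x,
                              Number* grad_f)
     {
       if (new_x) Simulator::simulate(x);   // usually a no-op: eval_f saw x first
       Simulator::gradient(grad_f, n);      // read the precomputed gradient
       return true;
     }

     // eval_g, eval_jac_g, etc. follow the same pattern.
   };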
With this, there is no need to make any changes to the TNLP class. All
your suggested modification would do is move the above line out of the
TNLP methods, just before the calls to the eval_* methods in the
Ipopt code. Your "reset" method is something we could have done
instead, but we decided on the new_x flag, and I see no need to add
redundant functionality.
I hope this helps,
Andreas
On Wed, 18 Nov 2009, Drosos Kourounis wrote:
> Dear Andreas,
>
> thank you very much for the extensive email and your suggestions. I
> think I didn't explain clearly to you what the situation is, and I
> think you are missing a tiny detail that changes it a lot. So here is
> the whole story:
>
> After the call to Simulator::simulate(), you have all the information
> you need to evaluate all of the virtual functions:
>
> [ eval_f, eval_g, eval_grad_f, eval_jac_g ]
>
> and possibly the Hessian as well, although we do not use it at the moment.
>
>
> "You do not need to call the simulator for each single one of them."
>
> This is the point I think you misunderstood. The simulator has to run
> once before the Ipopt initialization phase, and once before each Ipopt
> iteration, and this is the case for all PDE-constrained optimization
> problems, which is a huge class of problems. There, the objective and
> its gradient, and the constraints and their gradients, can only be
> evaluated after the PDE solver (the simulator in our case) has finished
> running.
>
> I understand that your TNLP interface is quite general and can
> possibly be hacked to do the job we want. But I think you understand
> that it has to be hacked in a different way for different problems
> (i.e., constrained vs. unconstrained optimization problems). I describe
> below a simple approach which requires minimal modification of the TNLP
> interface and maintains backwards compatibility, while at the same time
> solving our problem once and for all and providing a new, more general
> interface that will make people working in PDE-constrained optimization
> happier.
>
> ===========================================
>
> We need one more virtual function (not pure), empty in the base
> class, say TNLP::simulate(), that can be called once before the
> initialization of IPOPT (eval_jac_g for the structure, etc.) and that
> should also be called at the beginning of each IPOPT iteration, before
> anything else. Users who do not overload it remain backwards
> compatible, since this function is empty and does nothing for them.
> Users who need it can overload it, placing inside its definition the
> call to their own Simulator::simulate(). No need for case-dependent
> hacks that hurt the readability and the simplicity of the code. A
> rough sketch of what I mean is given below the separator.
>
> ===========================================
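>
> For illustration, a rough sketch of the hook and of a user override
> (the exact signature and the MyPdeNlp name are just placeholders, open
> for discussion):
>
>    // In IpTNLP.hpp: an empty, non-pure virtual hook.
>    class TNLP {
>    public:
>      // Called once before initialization and once at the start of each
>      // iteration; the default does nothing, so existing TNLPs are
>      // unaffected.
>      virtual void simulate(Index n, const Number* x) {}
>      // ... existing eval_* methods unchanged ...
>    };
>
>    // In the user's code:
>    class MyPdeNlp : public TNLP {
>    public:
>      virtual void simulate(Index n, const Number* x)
>      {
>        Simulator::simulate(x);  // one PDE solve, shared by all eval_* calls
>      }
>      // ... eval_f, eval_g, ... just read the precomputed results ...
>    };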
>
> As you see, this is very simple and very easy to implement. If you
> let me know which files to modify, I could do it for you. Honestly,
> any hack around this problem I can think of is case-dependent (i.e.,
> different for constrained and unconstrained problems).
>
> In constrained optimization eval_jac_g is called before eval_grad_f;
> in unconstrained problems, eval_jac_g and related methods are not
> called at all. So I could put the call to Simulator::simulate() in
> eval_jac_g, but in the unconstrained case it would have to reside in
> eval_grad_f instead, and I would also have to make sure that, if we do
> have constraints (m != 0), the call in eval_grad_f is skipped, because
> the call then belongs in eval_jac_g, which is called before
> eval_grad_f. However, the argument list of eval_grad_f does not
> include the number of constraints, so one has to save it as a private
> member of the class and initialize it in get_nlp_info. As you see,
> this is a more complicated and uglier workaround than the simple
> solution above, which is straightforward for someone with experience
> in the Ipopt source. (A sketch of this workaround follows below.)
>
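> For concreteness, a sketch of that workaround (the m_ member is just
> an illustrative name):
>
>    virtual bool get_nlp_info(Index& n, Index& m, Index& nnz_jac_g,
>                              Index& nnz_h_lag, IndexStyleEnum& index_style)
>    {
>      // ... set n, m, nnz_jac_g, nnz_h_lag, index_style ...
>      m_ = m;  // remember the number of constraints for eval_grad_f
>      return true;
>    }
>
>    virtual bool eval_jac_g(Index n, const Number* x, bool new_x,
>                            Index m, Index nele_jac, Index* iRow,
>                            Index* jCol, Number* values)
>    {
>      if (values != NULL) Simulator::simulate(x);  // constrained case
>      // ... fill in the Jacobian structure or values ...
>      return true;
>    }
>
>    virtual bool eval_grad_f(Index n, const Number* x, bool new_x,
>                             Number* grad_f)
>    {
>      if (m_ == 0) Simulator::simulate(x);  // unconstrained case only
>      // ... fill grad_f from the simulator results ...
>      return true;
>    }
>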
> So what do you think?
>
> Best wishes,
> Drosos.
>
> _______________________________________________
> Ipopt mailing list
> Ipopt at list.coin-or.org
> http://list.coin-or.org/mailman/listinfo/ipopt
>
>