<div dir="ltr">Hi Damien,<div><br></div><div>Since no one has answered, I'll take a shot at it, though it's not my expertise. The case you described is:</div><div><br></div><div><div>minimize f(x) w.r.t. {x}</div><div>
vs<br></div><div>minimize y w.r.t. {x, y}, subject to y == f(x)<br></div></div><div><br></div><div>I'm not sure there will be any difference in this case. However, I have had personal experience with:</div><div><br></div>
<div><div>minimize f(g(x)) w.r.t. {x} [prob1]</div><div>vs<br></div><div>minimize f(y) w.r.t. {x, y}, subject to y == g(x) [prob2]<br></div></div><div><br></div><div>In this case you can get significantly better convergence by "lifting" the intermediate variables out: [prob2] will compute a better search direction than [prob1]. This is why people sometimes use direct multiple shooting instead of direct single shooting. There are even specialized lifting solvers that do the linear algebra in the space of [prob1] but obtain the search direction of [prob2]; see for example: <a href="http://num.math.uni-bayreuth.de/en/conferences/ompc_2013/program/download/friday/Diehl_ompc2013.pdf">http://num.math.uni-bayreuth.de/en/conferences/ompc_2013/program/download/friday/Diehl_ompc2013.pdf</a></div>
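<div><br></div><div>To make the [prob1]/[prob2] distinction concrete, here is a small sketch. It uses scipy's SLSQP as a stand-in for IPOPT (my assumption; the reformulation itself is solver-agnostic), with the Rosenbrock function written as a composition f(g(x)):</div><div><br></div>

```python
# Toy illustration of lifting: [prob1] minimizes f(g(x)) over x alone,
# [prob2] introduces y = g(x) as extra decision variables with an
# equality constraint. SLSQP here is a stand-in for IPOPT.
import numpy as np
from scipy.optimize import minimize

def g(x):
    # Intermediate quantities (the Rosenbrock residuals).
    return np.array([x[1] - x[0]**2, 1.0 - x[0]])

def f(y):
    # Outer objective on the intermediate values.
    return 100.0 * y[0]**2 + y[1]**2

x_start = np.array([-1.2, 1.0])

# [prob1]: minimize f(g(x)) w.r.t. x alone.
res1 = minimize(lambda x: f(g(x)), x_start, method="SLSQP",
                options={"maxiter": 500})

# [prob2]: lift y = g(x) into the decision vector z = [x0, x1, y0, y1],
# and enforce y - g(x) == 0 as an equality constraint.
z_start = np.concatenate([x_start, g(x_start)])
res2 = minimize(lambda z: f(z[2:]), z_start, method="SLSQP",
                constraints={"type": "eq",
                             "fun": lambda z: z[2:] - g(z[:2])},
                options={"maxiter": 500})

# Both formulations should reach the same minimizer x* = (1, 1);
# what differs is the search direction at each iterate, and hence
# the path the solver takes to get there.
```

<div><br></div><div>Both formulations reach the same x in the end; the point of lifting is that the search direction computed at each iterate differs, which is where the convergence difference comes from.</div>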
<div><br></div><div>Hope this helps,</div><div>Greg</div></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Wed, Jun 11, 2014 at 8:52 PM, Damien <span dir="ltr"><<a href="mailto:damien@khubla.com" target="_blank">damien@khubla.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">All,<br>
<br>
I'm looking at two ways to formulate the objective function in a new optimisation model. The first way is what you'd call the conventional way I suppose, where you have a variety of variables contributing to the objective value and you calculate the gradient and return that to IPOPT. The other way I'm considering is to equate the objective function to a new variable in an extra equality constraint, and have the new variable as the only variable in the objective, with a gradient of 1.0. The new equality constraint then contributes first partial derivatives like any other equation or constraint.<br>
<br>
Mathematically (and I think algorithmically) these are equivalent, but I was wondering if anyone who's done this before has seen a performance difference between the two.<br>
<br>
Cheers,<br>
<br>
Damien<br>
_______________________________________________<br>
Ipopt mailing list<br>
<a href="mailto:Ipopt@list.coin-or.org" target="_blank">Ipopt@list.coin-or.org</a><br>
<a href="http://list.coin-or.org/mailman/listinfo/ipopt" target="_blank">http://list.coin-or.org/mailman/listinfo/ipopt</a><br>
</blockquote></div><br></div>