[Ipopt] Algorithm clarification

Panos Lambrianides panos at soe.ucsc.edu
Thu Jan 3 00:34:28 EST 2019


Hi Everyone,
I have an algorithm clarification question.  I have noticed on numerous
occasions where I am solving a nonlinear optimal control objective that I
know is positive, I get much faster results if I additionally constrain the
problem.

In particular, suppose that my objective J(x) is positive.  Compare just
minimizing

Case I:
    \min_x J(x)

versus

Case II:
    \min_x J(x)
    subject to  J(x) < r

where r is some arbitrary positive design parameter that I impose.
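
For concreteness, a minimal sketch of the two formulations, using CasADi's
Python interface to Ipopt as a stand-in for my actual optimal control setup
(the objective below and the value of r are only placeholders):

    import casadi as ca

    x = ca.SX.sym("x", 2)
    # Stand-in positive objective (a sum of squares); my real J comes from
    # an optimal control transcription.
    J = (x[0]**2 + x[1] - 11)**2 + (x[0] + x[1]**2 - 7)**2

    # Case I: plain minimization of J
    nlp_I = {"x": x, "f": J}
    solver_I = ca.nlpsol("solver_I", "ipopt", nlp_I)
    sol_I = solver_I(x0=[0.0, 0.0])

    # Case II: same objective, plus the extra constraint J(x) <= r
    r = 50.0  # arbitrary positive design parameter
    nlp_II = {"x": x, "f": J, "g": J}
    solver_II = ca.nlpsol("solver_II", "ipopt", nlp_II)
    sol_II = solver_II(x0=[0.0, 0.0], lbg=-ca.inf, ubg=r)

    print(float(sol_I["f"]), float(sol_II["f"]))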

I have found that imposing the constraint in Case II not only gives me a
better minimum, but also faster convergence.  In fact, in Case I Ipopt often
does not even get close to the minimum obtained in Case II.  Of course, in
Case II I have to run the solver multiple times with a sequence of decreasing
values of r to get the best minimum possible, but I get much better results.
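
Roughly, that re-solving procedure looks like this (again only a sketch,
continuing from the Case II solver above; the shrink factor and stopping
tolerance are arbitrary choices of mine, not anything prescribed by Ipopt):

    # Re-solve Case II repeatedly, tightening r toward the best objective
    # value found so far and warm-starting from the previous solution.
    r = 50.0
    x_guess = [0.0, 0.0]
    best = float("inf")
    for _ in range(20):
        sol = solver_II(x0=x_guess, lbg=-ca.inf, ubg=r)
        f_val = float(sol["f"])
        if f_val >= best - 1e-8:      # no further improvement
            break
        best = f_val
        x_guess = sol["x"].full().flatten().tolist()
        r = 0.9 * best                # tighten the bound for the next solve
    print("best objective found:", best)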

Any ideas why this should be happening?  My suspicion is that it has
something to do with saddle points, and with the fact that constraints are
handled by a different mechanism than the objective, which may force the
iterates to jump to a different region; but I don't know enough about the
internals of Ipopt to make that determination.  If so, this might present an
opportunity to improve the optimization algorithm.

Best,
-- 
Panos Lambrianides
panos at soe.ucsc.edu