[Ipopt] Optimality guarantees for convex problems?

Frank E. Curtis frank.e.curtis at gmail.com
Wed Nov 11 09:20:26 EST 2009


Hello,

It should be noted that IPOPT, like most of the best solvers, runs an
"infeasible" algorithm, meaning that the iterates -- even once you've
finally "converged" to an optimal solution -- are going to be slightly
infeasible, say on the order of 1e-8.  That is, assuming you have
equality constraints or inequality constraints that are not all
strictly satisfied...
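(As an aside: the size of that residual infeasibility is governed by Ipopt's convergence tolerances. A sketch of an ipopt.opt fragment that tightens them -- option names taken from the Ipopt options reference; exact defaults vary by version:)

```
# ipopt.opt -- sketch; tighten the overall tolerance and the
# accepted constraint violation at the returned point
tol             1e-10
constr_viol_tol 1e-10
```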

Therefore, it seems that you could probably construct a (very weird)
example where, due to nonconvexity of the objective slightly outside
of (but extremely close to) the feasible region F, you converge to a
local solution that is not global.  In other words, if F' is the set
of points including the feasible region F *and* a small envelope
around F, one can imagine that there are local minimizers in F' that
are not global minimizers.

But again, it seems very unlikely that this would occur naturally
beyond simply pedagogical examples...
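To make this concrete, here is a purely pedagogical 1-D sketch (plain Python, not an actual Ipopt run; the function and the numbers are invented for illustration): take the feasible region F = [0, inf) with f convex on F, but give f a tiny nonconvex dip just outside F.

```python
# Toy illustration only: F = [0, inf); f is convex on F,
# but has a small nonconvex dip just outside F, near x = -1e-8.
def f(x):
    if x >= 0.0:
        return x * x                    # convex on F; minimizer x = 0
    return (x + 1e-8) ** 2 - 1e-16      # dip centered at x = -1e-8

# On F itself the global minimizer is x = 0 ...
assert f(0.0) == 0.0
# ... but a slightly infeasible iterate sees a strictly lower value,
# so an algorithm tolerating ~1e-8 infeasibility could settle there.
assert f(-1e-8) < f(0.0)
```

Here the envelope F' corresponds to roughly [-1e-8, inf), and the dip creates a local minimizer in F' that is not the minimizer over F.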

Frank E. Curtis

On Wed, Nov 11, 2009 at 3:56 AM, Ruhollah Tavakoli <rohtav at gmail.com> wrote:
> Lasse,
>
> If you want to minimize a function on a set D, and your objective
> function is strictly convex on D and D is a closed convex set, then
> (under common conditions) your constrained optimization problem has a
> unique (global) minimizer.
>
> In this case, Ipopt will be able to find your global solution.
>
> Hope this helps
>
> RT
>
>
> On Wed, Nov 4, 2009 at 2:18 PM, Lasse Kliemann
> <lasse-list-ipopt-2009 at mail.plastictree.net> wrote:
>>
>> Hi Andreas,
>>
>> thank you for the reply. My question is partly answered. Let me
>> emphasize one point that I am not sure about yet.
>>
>> Given your formulation of the minimization problem below, define
>> the _feasible set_ by
>>
>>  F := { x | c(x) = 0 and d(x) <= 0 }
>>
>> Let c be linear and d be convex, so F is a convex set.
>>
>> Let furthermore the objective f be convex on F. But we do not
>> know how f behaves outside of F. Maybe we only know how f is
>> defined outside, and that it is differentiable. Assume this if
>> necessary.
>>
>> A _local minimizer_ is a point x_0 in F such that there exists a
>> real number R>0 such that for all x in F
>>
>>   ||x-x_0|| <= R  --->  f(x_0) <= f(x).
>>
>> (I presume "first order optimal point" means exactly that?)
>>
>> Now, it is clear that a local minimizer is a global minimizer,
>> i.e., a point x_0 such that f(x_0) = min {f(x) | x in F}. This
>> follows from convexity of f on F.
>>
>> But would Ipopt need convexity or some other nice properties
>> *outside the feasible set F*, in order to find a local minimizer?
>> I could imagine that the behavior of f close to the boundary of F
>> might play at least a role. But I am not sure.
>>
>> Thank you
>> Lasse
>>
>> * Message by -Andreas Waechter- from Tue 2009-11-03:
>> > Hi Lasse,
>> >
>> > If you write your optimization problem in the form
>> >
>> > min  f(x)
>> > s.t. c(x) = 0
>> >      d(x) <= 0
>> >
>> > and f(x) and d(x) are convex, and c(x) is linear, then any first
>> > order optimal point (those are the points found by Ipopt) is a
>> > global solution. Therefore, in that case Ipopt will find the global
>> > optimum.
>> >
>> > However, if any of the functions is not convex (or the equality
>> > constraints c(x) are nonlinear), this is no longer true, and Ipopt
>> > is guaranteed by theory only to find a first order optimal point,
>> > which might not be the global solution.  So, if your objective
>> > function is non-convex, Ipopt is not guaranteed to find a global
>> > solution.  (Ipopt has measures that attempt to steer it towards a
>> > local minimizer, though.)
>> >
>> > But if your objective function is for some reason of a form so that
>> > every first order optimal point is a global solution (e.g., if it is
>> > pseudo-convex), then Ipopt will find the global solution.  However,
>> > this can be a rather special case.
>> >
>> > Hope this helps,
>> >
>> > Andreas
>> >
>> > On Sat, 31 Oct 2009, Lasse Kliemann wrote:
>> >
>> > >Greetings,
>> > >
>> > >I have questions concerning the behavior of Ipopt on convex
>> > >problems. The documentation reads:
>> > >
>> > >  It is important to keep in mind that the algorithm is only
>> > >  trying to find a local minimizer of the problem; if the
>> > >  problem is nonconvex, many stationary points with different
>> > >  objective function values might exist, and it depends on the
>> > >  starting point and algorithmic choices which particular one
>> > >  the method converges to.
>> > >
>> > >Now, assume I have a problem expressed with smooth functions
>> > >(say, polynomials) and an objective function that is convex on
>> > >all R^n. Can I expect Ipopt to deliver a globally optimal
>> > >solution then (probably up to small inaccuracies)?
>> > >
>> > >If so, my second question concerns the case when the objective
>> > >function is not convex on R^n, but at least the set K of feasible
>> > >points is convex and the objective function is convex on K. Can I
>> > >expect Ipopt to deliver a globally optimal solution ("globally"
>> > >meaning: on K) then (probably up to small inaccuracies)?
>> > >
>> > >The latter occurs frequently in my application, so it would be
>> > >interesting to know.
>> > >
>> > >By the way, thank you for your work on Ipopt!
>> > >
>> > >Kind regards
>> > >Lasse
>> > >
>>
>> _______________________________________________
>> Ipopt mailing list
>> Ipopt at list.coin-or.org
>> http://list.coin-or.org/mailman/listinfo/ipopt
>>
>
>
>
>



