[Ipopt-tickets] [Ipopt] #290: objective function increasing instead of decreasing with each iteration

Ipopt coin-trac at coin-or.org
Thu Sep 14 09:34:47 EDT 2017


#290: objective function increasing instead of decreasing with each iteration
-----------------------+---------------------------------------------------
  Reporter:  kamilova  |      Owner:  ipopt-team
      Type:  task      |     Status:  new
  Priority:  highest   |  Component:  Ipopt
   Version:  3.12      |   Severity:  critical
Resolution:            |   Keywords:  increasing objective function fortran
-----------------------+---------------------------------------------------

Comment (by kamilova):

 I have the derivative checker activated, and it reports errors for most of
 the derivative entries, but only of order 1e-2. This problem was already
 solved with a different optimisation method, which managed to reach a
 suboptimal point despite the errors in the gradient, which is why I still
 expected Ipopt to reach some reasonable point.
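
 For reference, the derivative checker settings I am using look roughly
 like the sketch below (assuming they are set through an ipopt.opt file in
 the working directory; the option names are standard Ipopt options, the
 values are only illustrative). As far as I understand, derivative_test_tol
 is the relative error above which an entry gets flagged, and its default
 is around 1e-4, so errors of order 1e-2 are well above it.

     # ipopt.opt -- sketch of derivative checker settings (illustrative values)
     derivative_test            first-order   # compare user derivatives with finite differences
     derivative_test_tol        1e-4          # relative error above which an entry is reported
     derivative_test_print_all  yes           # print every compared entry, not only the failures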

 With the previous optimisation I get objective values of up to 9.1e-1,
 whereas with Ipopt I start at about 8.8e-1 and end at 6.7e-1, which is why
 I wondered whether it is maximising instead of minimising.
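
 Mostly to rule out a setup mistake on my side: as far as I understand,
 Ipopt always minimises the objective it is given, so if the model were
 really a maximisation problem I would have to negate the objective and its
 gradient, or (I believe) equivalently set a negative objective scaling,
 roughly like this:

     # sketch: flip the optimisation direction via the objective scaling
     obj_scaling_factor  -1    # negative scaling factor => Ipopt maximises the supplied objective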

 Is there a particular reason Ipopt would run into this problem, while an
 SQP optimisation (which is not the best method for this NLP but works
 "sometimes") achieves a (much) better point?

 Replying to [comment:1 stefan]:
 > It seems that Ipopt struggles to make improvements in primal and dual
 > feasibility. It is ok if the objective value is not decreasing at that
 > time.
 >
 > You might want to enable the derivative checker to check whether your
 > gradient implementation is correct. If it is, then maybe try finding a
 > better starting point.
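
 Regarding the starting point: a minimal sketch of the options I understand
 to control how Ipopt treats the initial point (standard option names, the
 values below are only illustrative, not a recommendation):

     # sketch: options affecting the initial point (illustrative values)
     bound_push             1e-2   # minimum absolute distance the initial point is pushed inside the bounds
     bound_frac             1e-2   # minimum relative distance the initial point is pushed inside the bounds
     warm_start_init_point  yes    # also take user-supplied multiplier estimates as the starting point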

--
Ticket URL: <https://projects.coin-or.org/Ipopt/ticket/290#comment:2>
Ipopt <http://projects.coin-or.org/Ipopt>
Interior-point optimizer for nonlinear programs.


