[Ipopt] empty ipopt.out + objective's decrease
Andreas Waechter
andreasw at watson.ibm.com
Mon Oct 19 10:24:35 EDT 2009
Hi Denis,
(Please do not attach config.log files to a posting on the mailing list -
the message gets too long; if you want to communicate this, please create
a ticket.)
1) To get output into a file you need to use the option
output_file
and set it to the name of the file into which you want Ipopt output to be
written.
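In case it helps, here is a minimal sketch of how this can be done through
the C++ interface; it follows the structure of the hs071_cpp example, and
the file name "ipopt.out" and the print level are only placeholders:

  // Sketch only: class/header names follow the hs071_cpp example.
  #include "IpIpoptApplication.hpp"
  #include "hs071_nlp.hpp"

  using namespace Ipopt;

  int main()
  {
    SmartPtr<TNLP> mynlp = new HS071_NLP();
    SmartPtr<IpoptApplication> app = IpoptApplicationFactory();

    // Write the regular Ipopt log into a file (in addition to the console).
    app->Options()->SetStringValue("output_file", "ipopt.out");
    // Verbosity of the file output; 5 corresponds to the usual iteration log.
    app->Options()->SetIntegerValue("file_print_level", 5);

    if (app->Initialize() != Solve_Succeeded) {
      return 1;   // option or initialization error
    }
    app->OptimizeTNLP(mynlp);
    return 0;
  }

Alternatively, you can put the lines "output_file ipopt.out" and
"file_print_level 5" into an ipopt.opt file in the directory from which you
run the executable.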
2) In general, the objective function should not be expected to decrease
monotonically. Ipopt attempts to achieve optimality and
feasibility at the same time, and in order to become feasible, the objective
function might have to increase. Note that, due to the
nonlinearity of the constraints, intermediate iterates might become infeasible
even if the starting point is feasible. Also, what can sometimes be
confusing is that the objective function value printed in the output is
the value of the original objective function, while Ipopt internally bases
the acceptance of a trial point on the value of the barrier objective
function (see the implementation paper).
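For reference, a rough sketch in the notation of the implementation paper:
for a problem of the form min f(x) s.t. c(x) = 0, x >= 0, the barrier
objective the line search works with is

  \varphi_\mu(x) = f(x) - \mu \sum_i \ln(x_i),

for a sequence of barrier parameters \mu > 0 that is driven to zero, while
the "objective" column of the iteration output only shows f(x).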
As for not arriving at the objective function value that you think is
optimal: we cannot guarantee that Ipopt really converges to a minimizer
of the problem - we can only guarantee that it converges to a stationary
point (this is when the termination tests are satisfied). Ipopt attempts
to encourage convergence to a minimizer, e.g., by regularizing the Hessian
matrix (see the "lg(rg)" column in the output). If you converge to a
strict local minimizer, there should be no correction at the end of the
optimization. In the output you sent, we can see that there are still
Hessian corrections in the last iterations, and this is an indication that
Ipopt is converging to a saddle point, and not a strict local minimizer.
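For reference (again only a sketch of what the implementation paper
describes), the step computation solves a primal-dual system whose Hessian
block is perturbed when it does not have the right inertia,

  [ W_k + \Sigma_k + \delta_w I    A_k^T       ]
  [ A_k                            -\delta_c I ],

and "lg(rg)" reports log10(\delta_w); nonzero values in the final iterations
therefore indicate that the Hessian of the Lagrangian is not positive
definite in the relevant subspace near the limit point.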
The thing I would try is to experiment with the starting point (sometimes
an infeasible starting point is better), or to see if there is an equivalent
formulation of the optimization problem that has fewer stationary points
besides minima, or that just happens to guide the optimizer in a different
direction. (And from the given output, there is no immediate indication
that your code is incorrect, since the convergence is pretty smooth...)
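If you want to play with the starting point, the only place you need to
touch is get_starting_point() in your TNLP class. A minimal sketch of the
fragment as it might look in your modified hs071_nlp.cpp (the value 0.5 is
of course only a placeholder for your own data):

  #include "IpTNLP.hpp"
  #include <cassert>

  using namespace Ipopt;

  // Fragment only: assumes the class declaration from the hs071_cpp example.
  bool HS071_NLP::get_starting_point(Index n, bool init_x, Number* x,
                                     bool init_z, Number* z_L, Number* z_U,
                                     Index m, bool init_lambda, Number* lambda)
  {
    // Ipopt only asks for primal values here unless a warm start is enabled.
    assert(init_x == true);
    assert(init_z == false);
    assert(init_lambda == false);

    for (Index i = 0; i < n; i++) {
      x[i] = 0.5;   // placeholder: try different (even infeasible) points here
    }
    return true;
  }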
Hope this helps,
Andreas
On Fri, 16 Oct 2009, Denis Davydov wrote:
> Dear All,
>
> Could you please advise? I modified the original C++ example (hs071_cpp)
> for my problem and am experiencing some difficulties:
>
> 1) The Ipopt output file (ipopt.out) is empty. I see the output in the
> terminal, but that's it. Running with ./hs071_cpp > myOutput.txt writes only
> some of my own output from "std::cout", but not the general Ipopt output.
> Same with "nohup". The original C example is fine and writes its output. I
> have not modified any output-related options in my code.
>
> Version of Ipopt: 3.7.0; solver: MA57; compilers: gcc, g++, gfortran,
> version 4.3.2 (Debian 4.3.2-1.1), 64-bit.
>
> The configuration output is enclosed just in case.
>
>
> 2) The objective function does not decrease monotonically and sometimes even
> increases steadily. I mean, I know the solution of this test problem:
> objective = -sqrt(3.0), so it seems a bit strange to me that the iterates
> move in the wrong direction in some sense. Of course this is constrained
> optimization, I don't know all the values of the variables, the current
> point can violate the constraints, etc. I'm not very familiar with the
> method, but this still seems strange to me. Could it be a result of bad
> coding on my part?
>
> iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
>    0  1.0000000e+00 5.64e-13 1.00e+00   0.0 0.00e+00    -  0.00e+00 0.00e+00   0
>    1  9.9993651e-01 3.07e-09 1.01e-02  -6.2 6.35e-05  -4.0 9.90e-01 1.00e+00f  1
>    2  8.1572706e-01 1.13e-02 1.09e-04  -2.9 1.84e-01  -3.6 9.92e-01 1.00e+00f  1
>    3  4.0527557e-01 5.62e-02 7.35e-05  -4.7 4.10e-01  -4.1 9.99e-01 1.00e+00h  1
>    4 -4.5422878e-03 5.60e-02 1.15e-04  -9.2 4.10e-01  -3.6 1.00e+00 1.00e+00h  1
>    5 -7.4555060e-03 2.83e-06 2.27e-03 -11.0 5.60e-02  -1.4 1.00e+00 1.00e+00h  1
>    6 -7.9025588e-03 6.66e-08 1.87e-04 -11.0 4.47e-04    -  1.00e+00 1.00e+00h  1
>    7 -2.1786486e-02 6.43e-05 2.29e-04 -11.0 1.39e-02  -1.9 1.00e+00 1.00e+00f  1
>    8 -1.1153940e+00 3.99e-01 2.70e-04 -11.0 4.37e+00    -  1.00e+00 2.50e-01f  3
>    9 -9.6325697e-01 7.72e-03 1.28e-03 -11.0 2.86e-01  -2.3 1.00e+00 1.00e+00h  1
> iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
>   10 -1.1657772e+00 1.37e-02 4.71e-04 -11.0 2.03e-01    -  1.00e+00 1.00e+00h  1
>   11 -9.5656068e-01 9.10e-04 3.42e-04 -11.0 2.03e-01  -2.8 1.00e+00 1.00e+00H  1
>   12 -8.7525035e-01 5.49e-05 3.25e-04 -11.0 8.03e-02  -2.4 1.00e+00 1.00e+00H  1
>   13 -7.0099697e-01 3.27e-05 2.32e-04 -11.0 1.69e-01  -2.9 1.00e+00 1.00e+00H  1
>   14 -6.7595156e-01 1.39e-06 8.89e-05 -11.0 2.50e-02  -2.4 1.00e+00 1.00e+00H  1
>   15 -6.7277849e-01 2.79e-09 3.76e-06 -11.0 3.17e-03  -2.9 1.00e+00 1.00e+00H  1
>
>
>
> P.P.S. The problem I am solving is quite large:
>
> Number of nonzeros in equality constraint Jacobian...: 577081
> Number of nonzeros in inequality constraint Jacobian.: 144000
> Number of nonzeros in Lagrangian Hessian.............: 504000
>
> Total number of variables............................: 144001
> variables with only lower bounds: 0
> variables with lower and upper bounds: 0
> variables with only upper bounds: 0
> Total number of equality constraints.................: 131286
> Total number of inequality constraints...............: 24000
> inequality constraints with only lower bounds: 0
> inequality constraints with lower and upper bounds: 0
>
> so there is no way to run the derivative checker or a derivative
> approximation. There is also no way to decrease the number of variables.
>
>
>
> Thank you in advance,
> Best regards,
> Denis.
>
>
> --
> Denis Davydov
> Department of Mechanics
> Faculty of Civil Engineering
> Czech Technical University
> Thakurova 7, 166 29 Prague 6, Czech Republic
> phone: +420-728478669
> email: denis.davydov at fsv.cvut.cz
>
>
>