[Coin-ipopt] Re: Ipopt problem
Andreas Waechter
andreasw at watson.ibm.com
Mon Nov 29 10:52:50 EST 2004
Hi Kranthi,
> I had this problem with Ipopt which I wanted to post on the COIN_Ipopt
> forum. I thought I need to subscribe to make the posting but it seems
> not - I sent the request for subscription but it got rejected. Can you
> please look into it?
Sorry you had trouble subscribing and posting to the mailing list. The
list has moved to a different server, and I probably haven't updated all
links yet. Also, the website for the old list doesn't say yet that it is
no longer valid - I will update that shortly.
In any case, the new URL for the coin-ipopt mailing list is
http://list.coin-or.org/mailman/listinfo/coin-ipopt
(where you should be able to subscribe)
and you can post messages by sending email to coin-ipopt at list.coin-or.org
(Everyone else reading the mailing list, please update your address book)
Here is my response to your question:
> Ipopt problem
> -------------
> I have been trying to use IPOPT to minimize a not-so-complex objective function
> with just one linear constraint and no bound constraints. The number of
> variables is around 700 and I am using the C interface to define the problem.
> These are some of the problems/issues that I came across:
> 1. The hessian is a diagonal matrix and hence the number of non-zeros is n, the
> no. of variables. If I write the following piece of code, it dumps core and
> exits:
>
> if (task == 0) {
> *nnzh = n;
> }
>
> while this works fine:
> if (task == 0) {
> *nnzh = n*(n+1)/2;
> }
>
> For the latter case, in the "task == 1" block, I give "*nnzh = n" and then go on
> to define the n non-zero entries in the Hessian matrix and also the diagonal
> sparsity structure. The question I have is this: does it work fine if I give
> the no. of non-zeros DIFFERENTLY in "task==0" and "task==1" (because I define the
> sparsity structure only in "task == 1")? And why is there a core-dump when I
> define the no. of non-zeros as n in "task == 0" block?
The number of nonzeros MUST NOT change after you have told Ipopt (when it
called your function with task=0) how many nonzeros there are.
Remember that by nonzeros here we mean all entries in the Hessian (or
Jacobian) that can ever be nonzero. If for a particular point a nonzero
entry (defined in that way) actually evaluates to zero, then just put 0.
into the corresponding value array.
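To make this concrete, here is a minimal sketch of such a task-based callback
for a diagonal Hessian. The function name and argument list are illustrative
only (they are not the actual Ipopt C interface); the point is that the count
reported for task == 0 must match the number of entries filled for task == 1:

```c
#include <assert.h>

/* Illustrative sketch, NOT the real Ipopt callback signature: a
 * task-based Hessian routine for a problem whose Hessian is diagonal.
 * task == 0: report the number of structurally nonzero entries.
 * task == 1: fill exactly that many (row, col, value) triplets. */
void hess_struct(int n, int task, int *nnzh,
                 int *irow, int *jcol, double *values)
{
    if (task == 0) {
        /* For a diagonal matrix the count is n, NOT n*(n+1)/2
         * (which would be the full lower triangle). */
        *nnzh = n;
    } else {
        /* Fill exactly n entries: the diagonal, here in 1-based
         * (Fortran-style) indexing, with placeholder values 1.0.
         * An entry that happens to evaluate to zero at the current
         * point still gets a slot -- just store 0.0 there. */
        for (int i = 0; i < n; i++) {
            irow[i]   = i + 1;
            jcol[i]   = i + 1;
            values[i] = 1.0;
        }
    }
}
```

Whichever count you report in task == 0, the same number of triplets must be
written every time the values are requested afterwards.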
You shouldn't get a coredump with the first version of your code if your
functions are implemented correctly. Have you tried to locate the
particular place of the coredump using a debugger? Do you know where
exactly the coredump occurs (is it a seg fault)?
> 2. All the entries of the Hessian at the starting point are of the order around
> 1e-07 and the constraint Jacobian has entries of the order -1e-04. If I use the
> default line search, it ends after 15 iterations with the following mesg with no
> improvement in the objective function value and constraint violation (I tried
> using different starting points but same result).
>
> Restoration phase problem converged.
> filter: Error: resto_filter returns IERR = 17
> solve_barrier: filter returns IERR = 17
> mainloop: Error: solve_barrier ends with IERR = 17
>
> I tried not using any line search (taking full Newton step -- by setting
> IMERIT=0) - this does improve the function value but only by the order 1e-06 at
> each iteration with no improvement in constraint violation (though the initial
> constraint violation is 1e05).
My first comment is that if the entries of the Jacobian are always on the
order of 1e-04 (i.e. not only at the starting point) and never larger, then
your problem is somewhat underscaled. Ipopt has some mechanisms to find
internal scaling factors; the default one only tries to handle
OVERscaling, i.e. cases in which the derivatives at the initial point are
very large. In general, I advise users to try to scale their problems so
that the nonzero elements in the objective and constraint function
gradients are roughly on the order of 1 to 100 for the points of interest.
(In your case, you might be able to have a better scaled problem by just
multiplying your constraints by 1e4...) However, finding a good scaling
for an optimization problem is not easy. Obviously, it would be great if the
optimization codes could do that on their own, but due to the nonlinear
nature of the functions that's in general a tough call. And, by the way,
a well-scaled problem makes the solution process easier for any nonlinear
optimizer, not just Ipopt.
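As a sketch of the rescaling idea (the constraint here is an invented
placeholder, not your actual problem): if the raw constraint value and its
Jacobian entries are on the order of 1e-04, multiplying both by 1e4 brings
them to order 1, and the scaled constraint s*c(x) = 0 has exactly the same
feasible set:

```c
#include <assert.h>
#include <math.h>

/* Hypothetical example of scaling a constraint by a constant factor.
 * If c(x) and its derivatives are O(1e-4), then s = 1e4 makes the
 * scaled constraint s*c(x) and its Jacobian O(1) without changing
 * the set of feasible points. */
#define CON_SCALE 1.0e4

/* Placeholder unscaled constraint whose Jacobian entries are 1e-4. */
static double raw_constraint(const double *x, int n)
{
    double sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += 1.0e-4 * x[i];
    return sum - 1.0e-4;
}

double scaled_constraint(const double *x, int n)
{
    return CON_SCALE * raw_constraint(x, n);
}

/* The Jacobian must be multiplied by the SAME factor. */
double scaled_jacobian_entry(void)
{
    return CON_SCALE * 1.0e-4;   /* each entry becomes 1.0 */
}
```

The same factor has to be applied consistently to the constraint value, its
gradient, and its contribution to the Hessian of the Lagrangian.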
Now to your real question: Since there might be something wrong with your
interface, it is possible that Ipopt doesn't receive the correct
derivative information, in which case it can easily fail. In case Ipopt
does get the correct gradients, the above message usually means that your
problem is infeasible, i.e. from the point of view of the current iterate,
there seems to be no way to improve feasibility. Is it possible that the
implementation of your constraints is not correct, and there is actually
no feasible point?
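One cheap way to rule out wrong derivative information is a finite-difference
check of your Jacobian code outside of Ipopt. The constraint and derivative
functions below are invented stand-ins for your own code; if the analytic
entry and the difference quotient disagree by much more than about 1e-6, the
derivative code (rather than Ipopt) is the likely culprit:

```c
#include <assert.h>
#include <math.h>

/* Placeholder linear constraint: c(x) = sum (j+1)*x_j. Replace con()
 * and jac_entry() with your own constraint and Jacobian routines. */
static double con(const double *x, int n)
{
    double sum = 0.0;
    for (int j = 0; j < n; j++)
        sum += (j + 1) * x[j];
    return sum;
}

static double jac_entry(int j)
{
    return (double)(j + 1);      /* analytic d con / d x_j */
}

/* Returns the largest absolute deviation between the analytic
 * Jacobian entries and forward-difference approximations. */
double jac_check(double *x, int n)
{
    const double h = 1.0e-6;
    double c0 = con(x, n);
    double maxdiff = 0.0;
    for (int j = 0; j < n; j++) {
        double save = x[j];
        x[j] = save + h;
        double fd = (con(x, n) - c0) / h;   /* forward difference */
        x[j] = save;                        /* restore the point */
        double d = fabs(fd - jac_entry(j));
        if (d > maxdiff)
            maxdiff = d;
    }
    return maxdiff;
}
```

Running such a check at a few typical points (not just the starting point)
also tells you the actual magnitude of your Jacobian entries, which feeds
directly into the scaling discussion above.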
> I started using IPOPT only very recently (the reason I switched is because
> MATLAB optim toolbox works fine but takes around 5min to solve the problem!!),
> so any help is gladly appreciated. Also does it help if I use the Fortran
> interface instead of the C one?
It wouldn't make a difference if you use the C or Fortran interface - use
whatever you are more comfortable with.
By the way, Steinar Hauan and a student at CMU have been working on a
Matlab interface for Ipopt. If you are interested in learning more about
that, you could send an email to hauan at cmu.edu (and cc me).
If you can't find the reason for your coredump, you could send me your
source code (assuming that it is easy to compile :), and I could try to
have a look at it, making sure that it is not caused by a problem in the C
interface or somewhere else in Ipopt. (Please send the source code
directly to me, not the mailing list)
Andreas