[Coin-ipopt] Some limitations with IPOPT

Andreas Waechter andreasw at watson.ibm.com
Wed Aug 25 12:28:46 EDT 2004


Hi Jens,

Many thanks for your feedback.  It is very good to hear that Ipopt
performs well for some of your problems!

I had a look at the Ipopt output you sent.  In the first few iterations,
the following message appears:

 *** WARNING MESSAGE FROM SUBROUTINE MA27BD  *** INFO(1) = 3
     MATRIX IS SINGULAR. RANK=45502

This occurs repeatedly, which usually means that there is some structural
deficiency in your problem.  In this case, Ipopt concludes that the
constraint Jacobian is rank-deficient.  After that, Ipopt uses a slightly
perturbed linearization of the constraints to compute the steps.  This
avoids the numerical problem, but the steps might be poorer (and lead to
more iterations).  If you are interested, you can find some details in
Section 3.1 of

http://www.research.ibm.com/people/a/andreasw/papers/ipopt.pdf

Rank-deficiency of the constraint Jacobian matrix means that there doesn't
seem to be a square submatrix of this matrix (with the same number of rows
as the full Jacobian) that is non-singular.
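To illustrate the connection between a rank-deficient Jacobian and the perturbed linearization (a toy sketch with made-up numbers, not Ipopt's actual linear algebra - see Section 3.1 of the paper for the real scheme): the unperturbed KKT matrix is singular when the Jacobian loses row rank, but adding a small multiple of the identity in the constraint block restores nonsingularity:

```python
import numpy as np

# Toy KKT system [[H, A.T], [A, 0]] with a rank-deficient Jacobian A.
H = np.eye(2)                        # Hessian block
A = np.array([[1.0, 0.0],            # two constraints, but rank(A) = 1
              [1.0, 0.0]])
Z = np.zeros((2, 2))

K = np.block([[H, A.T],
              [A, Z]])
print(np.linalg.matrix_rank(K))      # 3 < 4: the KKT matrix is singular

# Perturb the constraint block by -delta_c * I (the $\bar\delta_c$ idea):
delta_c = 1.0e-8
K_reg = np.block([[H, A.T],
                  [A, -delta_c * np.eye(2)]])
print(np.linalg.matrix_rank(K_reg))  # 4: nonsingular after perturbation
```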

In your particular case, you fix quite a number of variables (4525), i.e.
you set the lower bound for some variables equal to the upper bound.
Currently, Ipopt handles this by removing those variables internally from
the problem statement it solves.  In particular, it removes the columns
corresponding to the fixed variables from the constraint Jacobian.  As a
consequence, it might happen that even if your original Jacobian had full
rank, the smaller Jacobian without those columns might be rank deficient.
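A tiny numeric sketch of that effect (hypothetical numbers, just to show the mechanism): the full Jacobian has full row rank, but once the columns belonging to the fixed variables are removed, the remaining submatrix is rank-deficient:

```python
import numpy as np

# Hypothetical 2x4 constraint Jacobian: 2 constraints, 4 variables.
J = np.array([[1.0, 0.0, 2.0, 0.0],
              [2.0, 0.0, 1.0, 3.0]])
print(np.linalg.matrix_rank(J))       # 2: full row rank

# Suppose variables 2 and 3 are fixed (lower bound == upper bound).
# Removing their columns leaves only the free-variable columns:
free = [0, 1]
J_free = J[:, free]
print(np.linalg.matrix_rank(J_free))  # 1: now rank-deficient
```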

In order to avoid introducing rank-deficiency in this way, you might want
to try not fixing the variables, but instead giving them bounds that are
very close to the fixed value.  For example, if you have a variable X(i)
that you want to fix to 1.d0, don't set both the lower and upper bound to
1.d0; instead set the lower bound to, say, 0.9999999d0, and the upper
bound to 1.0000001d0.  This might avoid the rank-deficiency that appears
in the current runs.  On the other hand, the problem now of course has
some very tight bounds, which might make it hard in some other ways.  But
we have tried this in the past, and it can work.  You might want to
experiment with perturbations of different magnitudes.
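As a sketch of that workaround (the helper name and the exact choice of perturbation are my own, for illustration only, not part of Ipopt): instead of passing lb = ub = v for a fixed variable, pass a tiny interval around v:

```python
def relax_fixed_bound(v, eps=1.0e-7):
    """Replace a fixed bound lb == ub == v by a tiny interval around v.

    Uses a relative perturbation for large |v| and an absolute one near
    zero.  (Hypothetical helper; experiment with the magnitude of eps.)
    """
    delta = eps * max(abs(v), 1.0)
    return v - delta, v + delta

lb, ub = relax_fixed_bound(1.0)
print(lb, ub)   # roughly 0.9999999 and 1.0000001, as in the text above
```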

SNOPT probably doesn't have any problems with this, because it might
handle fixed variables differently, and/or has better ways to handle
degeneracies.

Looking at the PARAMS.DAT file, I noticed that you chose a very large
value for the termination tolerance (1.d-1) - that usually wouldn't give
you a very accurate solution.  Did you only do this in this case, to make
Ipopt stop at all?  Also, I noticed that you don't use the filter line
search, but the augmented Lagrangian function.  Did the filter (which is
the default) not work for you?  Also, you switched off the scaling
(iscale=0) - did that not work for you?  Finally, the value of
ddepcondiag is fairly large (1d-2) - this value determines how much you
"relax" the linearization of the constraints when they seem
rank-deficient ($\bar\delta_c$ in the paper).  I would think this is much
too large to give you a good solution at the end.

I hope that posing the fixed variables differently, as described above,
helps.  It would be very nice if you could let us know what you find out.

Thanks again for your feedback,

Andreas



On Wed, 25 Aug 2004 jens.pettersson at se.abb.com wrote:

> Hi,
>
> I'm running a project with process optimization where we are solving
> large non-linear programs.  The constraints are discretized DAE models
> of the process, and the objective is the squared difference between some
> process variables and their setpoints.  Thus, the size of the NLP
> depends on both the size of the process model (number of equations and
> variables) and the number of discretization points.
>
> As solvers we are using both SNOPT and IPOPT, and we can easily switch
> between these solvers on the same application, which makes for a fair
> comparison.  If we consider smaller models with a relatively large
> number of discretization points, IPOPT performs very well.  We have
> successfully solved NLPs with 80000 variables within a couple of minutes
> on a modern laptop.  Typically the number of iterations for IPOPT in
> these cases is below 20.
>
> However, for our larger process models with some 1900
> differential-algebraic equations, IPOPT has some major problems already
> above some 10 discretization points.  If we are lucky, it solves the
> problem in under 100 iterations, but many times it fails.  SNOPT, on the
> other hand, is very robust for these problems (see attached dump files).
>
> Which parameters may have an effect on the performance of IPOPT?  I have
> enclosed the PARAMS file as well.
>
> //Jens
