# [Coin-ipopt] IPOPT efficiency

Andreas Waechter andreasw at watson.ibm.com
Mon Feb 4 15:04:04 EST 2008

```Dear Xiaoming,

> I have a question regarding the efficiency of IPOPT.
>
>
>
> I think the computational costs of primal-dual interior-point methods (the IPOPT algorithm) are dominated by the
> cost of solving the linear systems Eq. (13) and (32) in your paper: A. Wächter and L. T. Biegler,
> On the Implementation of an Interior-Point Filter Line-Search Algorithm for Large-Scale Nonlinear Programming,
> Mathematical Programming 106(1), pp. 25-57, 2006.
>
>
>
> My question is: what is the dimension of the linear system? Let's say we
> have n design variables and m inequality constraints. If I have a
> nonlinear optimization problem with 2 million design variables and
> just 10 inequality constraints, it appears very expensive to solve
> Eq. (13) and Eq. (32). Is that right? And Eq. (11) and Eq. (32) must be
> solved at each iteration, right?

It is correct that a version of these systems has to be solved in every
iteration.

As I wrote in my reply to your previous posting, the difficulty of the
linear system depends on the density and the positions of the nonzero
entries in the Jacobian and Hessian matrices.

If you have a problem with 2 million variables but only 10 inequality
constraints (I assume that more or less all variables appear in the
constraints), the Jacobian is dense, but it has only 10 rows; this might
not be too bad (depending on the linear solver you choose).  Storing the
matrix will require on the order of 2,000,000 x 10 x 8 bytes = 160 MB.
If the Jacobian is sparse, this is of course much less.

However, the question is what you do about the second-derivative
information.  If the Hessian matrix of the objective or of the
constraints is dense, the W matrix in the linear systems will also be
dense (and of dimension 2 million), which is far more than a PC can
handle.  In this case, you will need to use the limited-memory
approximation of the derivatives.  As I wrote in my previous message,
whether this works well depends on the particularities of your problem.

If you use the limited-memory option, the matrix in (13) has W_k only as
a diagonal matrix, and A_k^T as the constraint Jacobian, so except for
the 10 constraint rows it is very sparse.

Sometimes it is possible to disentangle the nonlinearities in the
objective or constraint functions in the problem formulation by
introducing additional variables, making the derivative matrices sparser
(and the functions "less nonlinear"); even though the problem size
increases, the overall performance of the optimizer is often better.

Regards,

Andreas

> Thanks,
>
> Xiaoming Yu
> MSC.Software Corp.
> 2 MacArthur Place
> Santa Ana, CA 92707
> USA
```
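For reference, the limited-memory approximation mentioned in the reply is selected in IPOPT through the `hessian_approximation` option, for example in an `ipopt.opt` options file (check the exact option names against the IPOPT documentation for your version):

```
# ipopt.opt -- use a limited-memory approximation instead of exact
# second derivatives
hessian_approximation limited-memory
```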
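A quick sanity check of the storage estimate in the reply (a minimal Python sketch; the figures are the ones from the thread, assuming one 8-byte double per dense Jacobian entry):

```python
# Storage for a dense constraint Jacobian in double precision:
# one 8-byte float per (variable, constraint) pair.
n_vars = 2_000_000        # design variables, as in the thread
m_cons = 10               # inequality constraints
bytes_per_double = 8

jacobian_bytes = n_vars * m_cons * bytes_per_double
print(jacobian_bytes / 1e6, "MB")  # -> 160.0 MB
```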
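To see why the diagonal-W case is so much cheaper, here is a toy numerical sketch of the nonzero counts in a saddle-point matrix of the form [[W, A^T], [A, 0]]. This is only an illustration of the sparsity structure: it ignores the slack and barrier terms of the actual system (13), and uses a small n in place of 2 million.

```python
import numpy as np

n, m = 1000, 10                  # small stand-ins for 2,000,000 and 10
rng = np.random.default_rng(0)

A = rng.standard_normal((m, n))           # dense Jacobian: only m rows

W_dense = rng.standard_normal((n, n))     # dense exact Hessian
W_diag = np.diag(rng.standard_normal(n))  # limited-memory case: diagonal W

def kkt_nnz(W):
    """Count nonzeros in the saddle-point matrix [[W, A^T], [A, 0]]."""
    K = np.block([[W, A.T], [A, np.zeros((m, m))]])
    return np.count_nonzero(K)

print(kkt_nnz(W_dense))  # n*n + 2*m*n: dominated by the dense Hessian block
print(kkt_nnz(W_diag))   # n + 2*m*n: only the m Jacobian rows are dense
```

With the dense Hessian the matrix has on the order of n^2 nonzeros; with a diagonal W only the n diagonal entries and the 2*m*n Jacobian entries remain, which is exactly the "very sparse except for the 10 constraint rows" situation described above.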
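The reformulation idea in the last paragraph can be made concrete with a hypothetical example (not from the thread): suppose the objective or a constraint contains the term (x_1 + ... + x_n)^2. Its Hessian is completely dense. Introducing an extra variable y with the linear constraint y = x_1 + ... + x_n and writing the term as y^2 concentrates all the curvature in a single entry:

```python
import numpy as np

n = 5  # small for illustration; the savings grow with n

# Original term: f(x) = (x_1 + ... + x_n)^2.
# Every second derivative is 2 -- fully dense, n*n nonzeros.
H_original = 2.0 * np.ones((n, n))

# Reformulated: add y, impose the *linear* constraint y - sum(x) = 0,
# and replace the term by y^2.  In the (x_1..x_n, y) space the only
# nonzero second derivative is d^2(y^2)/dy^2 = 2.
H_reformulated = np.zeros((n + 1, n + 1))
H_reformulated[n, n] = 2.0

print(np.count_nonzero(H_original))      # 25 (= n*n)
print(np.count_nonzero(H_reformulated))  # 1
```

The price is one extra variable and one extra linear constraint; the gain is a Hessian with one nonzero instead of n^2, which is usually a very good trade for a sparse linear solver.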