[Coin-ipopt] FW: IPOPT efficiency

Xiaoming Yu xiaoming.yu at mscsoftware.com
Tue Feb 5 19:54:35 EST 2008



-----Original Message-----
From: Xiaoming Yu 
Sent: Tuesday, February 05, 2008 4:53 PM
To: 'Andreas Waechter'
Subject: RE: IPOPT efficiency

Hi Dr. Wächter:

Thank you very much for taking the time to answer my questions; it was really helpful. Now I have a few follow-up questions.

1) Our objective and constraints are separable functions, so the W_k in Equations (11) and (13) of your paper is a diagonal matrix. Does this guarantee that the top-left block of the matrix in (13) is diagonal in all cases (I am not quite sure about Sigma_k)? I believe that IPOPT solves Eq. (13) instead of Eq. (11) because of the filter line-search step.
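My understanding of the notation (please correct me if I am wrong) is that Sigma_k = X_k^{-1} Z_k is diagonal by construction, so the top-left block of (13),

    W_k + Sigma_k = W_k + diag(z_k^(1)/x_k^(1), ..., z_k^(n)/x_k^(n)),

would stay diagonal whenever W_k is.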

2) Looking at Eq. (13), the lower-right block of the (n+m) x (n+m) matrix is diagonal. Right? If that is true and A_k has full rank (which I expect in many engineering cases), then we can express d_k^lambda in terms of d_k^x and, inserting this into the first row of Eq. (13), reduce the (n+m) x (n+m) system to a smaller n x n system. If m >> n (many constraints but few design variables), the reduced linear system is much easier to solve. Or do you think a good sparse linear solver can handle that diagonal lower-right block efficiently, so that reducing the system is unnecessary? Am I wrong here?
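To make sure I am describing the reduction I mean correctly, here is a minimal dense numpy sketch (the names H, A, D and the random data are just placeholders; it assumes the (2,2) block is a nonsingular diagonal matrix -D, and it is of course not IPOPT's actual code):

    import numpy as np

    n, m = 5, 200                            # few variables, many constraints
    rng = np.random.default_rng(0)
    H = np.diag(rng.uniform(1.0, 2.0, n))    # diagonal top-left block (separable case)
    A = rng.standard_normal((n, m))          # dense Jacobian transpose, n x m
    d = rng.uniform(0.5, 1.5, m)             # diagonal of the (2,2) block
    r1 = rng.standard_normal(n)
    r2 = rng.standard_normal(m)

    # Full (n+m) x (n+m) system:  [ H    A  ] [dx]   [r1]
    #                             [ A^T  -D ] [dl] = [r2]
    K = np.block([[H, A], [A.T, -np.diag(d)]])
    full = np.linalg.solve(K, np.concatenate([r1, r2]))

    # Eliminate dl = D^{-1} (A^T dx - r2) to get an n x n system for dx:
    S = H + (A / d) @ A.T                    # Schur complement, n x n
    dx = np.linalg.solve(S, r1 + A @ (r2 / d))
    dl = (A.T @ dx - r2) / d

    assert np.allclose(np.concatenate([dx, dl]), full)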

3) Active set strategy.
We define a constraint to be active when it is violated or close to its bound. As you know, an active-set strategy is used in many SQP algorithms. Is it possible to apply an active-set strategy to IPOPT? There are two reasons for asking.

(a) We usually have a huge number of constraints in engineering applications, but the majority of them are far from their bounds.
(b) For engineering problems, the Jacobian matrix is often 100% dense. It is very hard to hold the Jacobian in limited computer memory for large problems (for example, 1 million variables and 1 million constraints).

Applying an active-set strategy in IPOPT, I think we would keep only the active constraints in Eq. (11) or (13) (for the A_k and W_k computation) and obtain the search direction by solving the reduced system (holding only the active constraints); a minimal sketch of the selection rule I have in mind follows below. Of course, the active set may change in the next iterations, and the search direction may not be as good (robust) as the one computed with the full constraint set. However, with the line-search steps, I think we could solve many problems more efficiently (due to a smaller linear system) and with less memory. This active-set strategy may take more iterations to converge, or may even fail to converge, but it does give us a practical way to solve very large engineering problems from an engineer's point of view. What do you think?
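As a concrete illustration of the selection rule I mean (a hypothetical helper with a made-up tolerance, not any IPOPT interface):

    import numpy as np

    def working_set(g, upper, tol=1e-2):
        """Indices of constraints g(x) <= upper that are violated or within tol of the bound."""
        g = np.asarray(g)
        upper = np.asarray(upper)
        return np.where(g >= upper - tol)[0]

    # Example: only constraints 1 and 3 would enter Eq. (13) at this iterate.
    print(working_set([0.2, 0.995, -3.0, 1.4], upper=1.0))   # -> [1 3]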

Best regards,

Xiaoming



-----Original Message-----
From: Andreas Waechter [mailto:andreasw at watson.ibm.com] 
Sent: Monday, February 04, 2008 12:04 PM
To: Xiaoming Yu; coin-ipopt mailing list
Subject: Re: IPOPT efficiency

Dear Xiaoming,

> I have a question regarding the efficiency of IPOPT.
>
> I think the computational costs of primal-dual interior-point methods (the IPOPT algorithm) are dominated by the
> cost of solving the linear systems, Eqs. (13) and (32), in your paper: A. Wächter and L. T. Biegler,
> "On the Implementation of an Interior-Point Filter Line-Search Algorithm for Large-Scale Nonlinear Programming",
> Mathematical Programming 106(1), pp. 25-57, 2006.
>
> My question is: what is the dimension of the linear system? Let's say we 
> have n design variables and m inequality constraints. If I have a 
> nonlinear optimization problem with 2 million design variables and 
> just 10 inequality constraints, it appears very expensive to solve 
> Eq. (13) and Eq. (32). Is that right? And Eqs. (11) and (32) must be 
> solved at each iteration. Right?

It is correct that a version of these systems has to be solved in every 
iteration.

As I wrote in my reply to your previous posting, the difficulty of the 
linear system depends on the density and the positions of the nonzero 
values in the Jacobian and Hessian matrices.

If you have a problem with 2 million variables but only 10 inequality 
constraints (I assume that more or less all variables appear in the 
constraints), the Jacobian is dense, but it has only 10 rows; this might 
not be too bad (depending on the linear solver you choose).  Storing the 
matrix will require on the order of 2,000,000 x 10 x 8 bytes = 160 MB.  If 
the Jacobian is sparse, this is of course much less.

However, the question is what you do about the second-derivative 
information.  If the Hessian matrix of the objective or of the 
constraints is dense, the W matrix in the linear systems will also be 
dense (and of size 2 million x 2 million), which is far more than a PC can 
handle.  In this case, you will need to use the limited-memory 
approximation of the derivatives.  As I wrote in my previous message, 
whether this works well depends on the particularities of your problem.
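In case it is useful: the limited-memory approximation is selected via the 
hessian_approximation option, e.g. by putting the line

    hessian_approximation limited-memory

into your ipopt.opt file (or by setting the same string option through the 
programmatic interface).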

If you use the limited-memory option, the matrix in (13) has W_k only as a 
diagonal matrix and A_k^T as the constraint Jacobian, so except for the 
10 constraint rows it is very sparse.

Sometimes it is possible to disentangle the nonlinearities in the 
objective or constraint functions in the problem formulation by 
introducing additional variables, making the derivative matrices sparser 
(and the functions "less nonlinear"); even though the problem size 
increases, the overall performance of the optimizer is often better.
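A generic example (not specific to your problem): if the objective is 
phi(x_1 + ... + x_n) for a scalar nonlinear function phi, its Hessian is 
the completely dense rank-one matrix phi''(s) e e^T (with e the all-ones 
vector).  Introducing one extra variable y with the linear equality 
constraint y - (x_1 + ... + x_n) = 0 and minimizing phi(y) instead leaves 
a Hessian with a single nonzero entry, phi''(y), at the price of one 
additional (dense but linear) constraint row.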

Regards,

Andreas
