[CppAD] Trying to get Sparse Hessian Calculation Faster

Brad Bell bradbell at seanet.com
Fri Sep 4 10:46:51 EDT 2009


I would suggest that you try using the example CppAD interface to Ipopt 
which is documented in
    http://www.coin-or.org/CppAD/Doc/ipopt_cppad_nlp.xml

You can start with the simple representation, which has an example in
    http://www.coin-or.org/CppAD/Doc/ipopt_cppad_simple.cpp.xml

If the operation sequence for your function does not depend on the argument 
values, you can set info.retape to false; see
    http://www.coin-or.org/CppAD/Doc/ipopt_cppad_nlp.xml#fg_info.fg_info.retape
In this case, ipopt_cppad will automatically call the CppAD sparsity 
routines to determine the sparsity patterns.
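
As a point of reference, here is roughly how such a Hessian sparsity pattern 
can be computed directly with the CppAD sparsity routines. This is only a 
sketch; F, n, and m are placeholders for an already taped ADFun object and 
its domain and range sizes, not names from the interface above:

    # include <cppad/cppad.hpp>

    // sparsity pattern for the Hessian of w^T F(x), assuming F is a taped
    // CppAD::ADFun<double>, n its domain size, and m its range size
    CppAD::vectorBool hessian_sparsity(
        CppAD::ADFun<double>& F, size_t n, size_t m)
    {   // forward Jacobian sparsity for the identity matrix R = I_n
        CppAD::vectorBool r(n * n);
        for(size_t i = 0; i < n; i++)
            for(size_t j = 0; j < n; j++)
                r[i * n + j] = (i == j);
        F.ForSparseJac(n, r);

        // reverse Hessian sparsity for w^T F, treating every weight as nonzero
        CppAD::vectorBool s(m);
        for(size_t i = 0; i < m; i++)
            s[i] = true;

        // p[i * n + j] is true when element (i, j) of the Hessian may be nonzero
        return F.RevSparseHes(n, s);
    }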

But for large sparse problems you will probably find that a representation 
which communicates some of the sparsity structure to CppAD is 
necessary. For example, see
    http://www.coin-or.org/CppAD/Doc/ipopt_cppad_ode.xml
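
Regarding the question below about the Hessian of the Lagrangian: if the 
objective and the constraints are taped together as one vector valued 
function, then the Hessian of the Lagrangian is just the Hessian of 
w^T F(x) with w = (sigma, lambda), and it can be obtained in a single 
SparseHessian call. The following is only a sketch with made up dimensions 
and multiplier values, not code from the interface above:

    # include <cppad/cppad.hpp>
    # include <vector>

    int main(void)
    {   typedef CppAD::AD<double> ADdouble;

        size_t n = 2;                       // number of variables (example size)
        std::vector<ADdouble> X(n), Y(2);   // Y = ( f(x), g_1(x) )

        CppAD::Independent(X);
        Y[0] = X[0] * X[0] + X[1];          // objective f(x)
        Y[1] = X[0] * X[1];                 // one constraint g_1(x)
        CppAD::ADFun<double> F(X, Y);

        std::vector<double> x(n), w(2), hes;
        x[0] = 1.0;  x[1] = 2.0;            // point supplied by Ipopt
        w[0] = 1.0;                         // sigma, the objective multiplier
        w[1] = 0.5;                         // lambda_1, the constraint multiplier

        // dense stored (n * n) Hessian of w^T F, computed with sparse methods;
        // hes[i * n + j] = d^2 L / dx_i dx_j
        hes = F.SparseHessian(x, w);
        return 0;
    }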


einglez at terra.com.br wrote:
>
> 	Hi, 
> 	At the moment I have coupled Ipopt with CppAD in order to solve my
> non-linear optimization problem. To test the performance I am using a
> scalable problem. 
> 	When the problem is small (n = 100), things converge reasonably
> fast compared with the exact Hessian and Jacobian calculations. 
>
> 	On the other hand, when I set n to 1000 I simply cannot obtain the
> results; the time required is enormous. 
> 	I have used two strategies, which I describe below, but neither
> has performed satisfactorily. 
> 	Strategy 1: Use the SparseHessian routine 
> 	Ipopt requires the Hessian of the Lagrangian, which combines the
> objective function Hessian with a multiplier for each of the
> constraints. Because of that I was not able to compute the Hessian of
> the Lagrangian all at once by calling the SparseHessian routine; I
> believe it would have to be done internally. One alternative would be
> to compute each constraint and the objective function one at a time,
> but that makes the performance inefficient. 
> 	Strategy 2: Use the ForTwo routine 
> 	Ipopt requires the non-zero structure of the Hessian, i.e. the
> positions irow[NonZeros] and jcol[NonZeros]. I can calculate all of
> this and use it to compute specifically the partial derivatives I
> want. Again I compute each constraint and the objective function one
> at a time, at the (irow, jcol) non-zero points of the Hessian of the
> Lagrangian. 
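
For readers who have not used it, a ForTwo call that asks for a specific 
list of second partials looks roughly like this; F, n, and the index values 
are placeholders, not the poster's code:

    # include <cppad/cppad.hpp>
    # include <vector>

    // ask ForTwo for selected second partials of a taped function F
    // at the point x that Ipopt supplies
    std::vector<double> selected_second_partials(
        CppAD::ADFun<double>& F, const std::vector<double>& x)
    {   std::vector<size_t> j(2), k(2);   // the requested index pairs
        j[0] = 0;  k[0] = 0;              // d^2 F_i / dx_0 dx_0
        j[1] = 0;  k[1] = 1;              // d^2 F_i / dx_0 dx_1
        // ddy[i * 2 + l] = d^2 F_i(x) / dx_{j[l]} dx_{k[l]}
        return F.ForTwo(x, j, k);
    }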
>
> 	I thought this would be quick, because the problem is sparse and I
> would not need to compute the full n x n space. 
>
> 	For some reason I do not understand, it is behaving even
> slower. I suspect that ForTwo computes over all n forward dimensions
> even if I do not need them. Is that true? 
> 	Another idea I had is to modify Hessian to compute only the points
> that I know are non-zero positions (irow, jcol). Ipopt uses the
> triplet format for sparse matrix computations. 
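
Copying the needed triplet values out of the dense stored SparseHessian 
result looks roughly like the sketch below; irow, jcol, and values stand for 
the arrays Ipopt passes to its Hessian callback, and hes for the length n*n 
vector that SparseHessian returns, so the full n*n storage is still 
allocated:

    # include <vector>

    // copy the dense stored (n * n) SparseHessian result into the triplet
    // value array that Ipopt expects; irow and jcol hold the non-zero
    // structure that was reported to Ipopt (placeholder names throughout)
    void fill_triplet_values(
        size_t                     n      ,
        const std::vector<double>& hes    ,   // returned by SparseHessian
        const std::vector<size_t>& irow   ,
        const std::vector<size_t>& jcol   ,
        std::vector<double>&       values )
    {   for(size_t l = 0; l < irow.size(); l++)
            values[l] = hes[ irow[l] * n + jcol[l] ];
    }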
> 	What I actually need is some guidance on how to change the code so
> that the sparse computation is fast enough to make the Ipopt and
> CppAD coupling effective. Thanks for any help and tips. 
>
> 	I know that it is on the wish list and I would like to help with
> that. 
> 	Regards, 
> 	Eduardo
>


