[Clp] Sparse matrices

Matthew Saltzman mjs at clemson.edu
Wed Mar 7 20:51:24 EST 2012


On Wed, 2012-03-07 at 07:28 -0600, Miles Lubin wrote: 
> Dear Daniel,
> 
> Clp uses essentially the Suhl-Suhl approach for forming the sparse LU
> decomposition and the Forrest-Tomlin approach for updating the factors
> after each simplex iteration. This code is all in the
> CoinFactorization class in CoinUtils. If you take a look at that code,
> you may lose your desire to try to parallelize it ;). Forming a sparse
> LU factorization on a GPU will likely be very difficult, but if you do
> manage to do that, for updates I would suggest using as a starting
> point the so-called product-form update that you can find in any
> textbook; with these, you can treat the LU factorization as a black
> box.

The product form that appears in most textbooks is a product form of the
explicit inverse.  Descriptions of the product form of the LU
factorization are less common, but it appears in Chvatal's _Linear
Programming_ (Freeman, 1983, IIRC).  That form is better for sparsity and
stability, but I'm not sure how well it maps to GPUs.  FTRAN and BTRAN
require sequences of eta-matrix back-solves rather than
eta-matrix-vector multiplies.
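To make the black-box idea concrete, here is a minimal pure-Python
sketch (the names and data layout are mine, not Clp's, and it uses
dense lists where a real code would use sparse vectors): each basis
change appends one eta column, and FTRAN becomes the base LU solve
followed by a sequence of cheap eta back-solves.

```python
def eta_solve(eta_col, q, y):
    """Solve E x = y, where E is the identity except that column q
    holds eta_col (the pivot column w = B^{-1} a of the entering
    variable).  Cost is proportional to the nonzeros of eta_col,
    not a refactorization."""
    x = list(y)
    x[q] = y[q] / eta_col[q]           # pivot element first
    for i, w_i in enumerate(eta_col):
        if i != q and w_i != 0.0:
            x[i] -= w_i * x[q]         # eliminate off-pivot entries
    return x

def ftran(base_solve, etas, b):
    """FTRAN: solve B_k x = b, where B_k = B * E_1 * ... * E_k.
    base_solve is the black-box solve for the initial factorized
    basis B; each simplex iteration appends one (eta_col, q) pair."""
    x = base_solve(b)                  # x = B^{-1} b
    for eta_col, q in etas:            # then E_1^{-1}, E_2^{-1}, ...
        x = eta_solve(eta_col, q, x)
    return x
```

The point of the pattern is that `base_solve` can be any LU (or other)
factorization you manage to build on the GPU; only the eta solves need
to know about updates, and the eta file is simply reset at each
refactorization.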

The Cholesky factorization is the basis for the linear algebra of
interior-point methods, not simplex.  The book by George and Liu on
sparse matrix computation is a good place to start.  That approach
probably does lend itself better to GPU implementations.
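For what it's worth, the arithmetic core of Cholesky is tiny; what
makes the sparse case hard (and is the subject of George and Liu's
book) is the fill-reducing ordering and symbolic analysis wrapped
around it.  A minimal dense sketch, just to fix ideas:

```python
import math

def cholesky(A):
    """Dense Cholesky factorization A = L L^T for a symmetric
    positive-definite matrix A (given as a list of row lists).
    Sparse codes perform this same arithmetic, but only on the
    predicted nonzero positions of L."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for j in range(n):
        # Diagonal entry: sqrt of the remaining pivot.
        s = A[j][j] - sum(L[j][k] ** 2 for k in range(j))
        L[j][j] = math.sqrt(s)
        # Entries below the diagonal in column j.
        for i in range(j + 1, n):
            L[i][j] = (A[i][j]
                       - sum(L[i][k] * L[j][k] for k in range(j))) / L[j][j]
    return L
```

The column-by-column structure above is also why the interior-point
linear algebra parallelizes comparatively well: the updates are regular
and the sparsity pattern of L is known in advance, unlike the
pivot-dependent pattern in simplex LU updates.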

> 
> Are you aware of the work of Iain Dunning (https://github.com/IainNZ/LPG)?
> 
> Regards,
> Miles
> 
> On Mon, Mar 5, 2012 at 9:49 AM, Daniel Herrero <dani_herrero at yahoo.es> wrote:
> > Hello everybody:
> >
> > I'm a researcher at the Polytechnic University of Madrid. I've been
> > working on solving LP problems on Nvidia GPUs using parallel methods.
> >
> > I'm starting now with sparse matrices, and I have read the Suhl & Suhl
> > papers from 1990 and 1993. The results are much more time-consuming than
> > I hoped: for a problem with 3500 constraints, 9625 continuous variables,
> > and 90% sparsity, as described in those papers, I spend 250 seconds using
> > the usual simplex method and about 2500 seconds using the LU
> > decomposition, because the matrix multiplication is much more
> > time-consuming than the row updates.
> >
> > I'm not trying to parallelize the problem yet, just trying to understand
> > the methodology of LU decomposition. Could you tell me what you are using
> > for this? I've read that you use Cholesky, but I'm a little lost and
> > maybe not taking something important into account.
> >
> > Thank you so much
> >
> > Daniel Herrero Giner
> >
> > _______________________________________________
> > Clp mailing list
> > Clp at list.coin-or.org
> > http://list.coin-or.org/mailman/listinfo/clp
> >

-- 
Matthew Saltzman
Clemson University Mathematical Sciences
mjs AT clemson DOT edu
