[Clp] question about performance on a strange problem

John Forrest john.forrest at fastercoin.com
Tue Dec 4 09:03:13 EST 2018


Ivo,

It may not be possible to do much, but here are a few suggestions.

First, make sure you have dense factorization switched on.  On a typical 
problem, run for a short while with -loglevel 3.  You should see output 
like this:

length of U 24097, length of L 23279 plus 92416 from 304 dense rows

If not, then make sure you have LAPACK and BLAS available.
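
If you are driving Clp through the C++ API rather than the standalone 
binary, a minimal sketch along these lines (the file name is just a 
placeholder) should print the same statistics:

    #include "ClpSimplex.hpp"

    int main()
    {
        ClpSimplex model;
        // Placeholder file name - substitute your own problem
        if (model.readMps("problem.mps"))
            return 1;
        model.setLogLevel(3);  // same effect as -loglevel 3
        model.dual();          // dual simplex; model.primal() also works
        return model.status(); // 0 means optimal
    }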

You can do better using OpenBLAS: add --disable-blas to configure and 
add -DCLP_USE_OPENBLAS=1 to CXXDEFS.  (If the dense part is very large, 
it might even be better to use -DCLP_USE_OPENBLAS=4 to use 4 CPUs.)
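
In practice the configure step would then look roughly like this (the 
exact invocation depends on how you normally build Clp):

    ./configure --disable-blas CXXDEFS="-DCLP_USE_OPENBLAS=1"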

If you can send me a smaller version which shows some of the same 
characteristics, I can look at it and see if there are other 
approaches.  Do you need an exact solution?
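If not, loosening the solver tolerances sometimes helps on hard 
problems.  A hedged sketch against the C++ API (the values are purely 
illustrative; the defaults are around 1.0e-7):

    // Accept a less exact answer - illustrative values only
    model.setPrimalTolerance(1.0e-5);
    model.setDualTolerance(1.0e-5);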


John Forrest


On 02/12/2018 22:08, Ivo Stefanov wrote:
> Hello,
> I am trying to use Clp to solve a linear problem, but it seems to take 
> too long (I have not actually received a solution in several days).
> The problem is not too big (~30 000 variables and about the same 
> number of constraints), but it has a fairly large dense square block 
> of about 10 000 x 10 000 variables.
> Because of this block it takes a huge amount of memory, but 
> that's manageable for now - the problem is that I don't seem to be 
> getting anywhere near a solution.
>
> I have tried both the primal() and dual() algorithms with no success.
> I know it is hard to tell anything without the exact problem being 
> available, but since the .lp file is 700 MB+ I do not have it uploaded 
> anywhere at the moment (and it takes quite a lot of time to load anyway).
> What I have noticed so far while working with this problem is that the 
> performance of the dual algorithm gets worse as the dense block grows, 
> though not always in the same way.
> Certain types of input data increase the block in terms of rows (number 
> of constraints) and others in terms of columns (number of variables). 
> The increase in columns seems to be the more problematic part (for 
> example, 10 000 x 500 is fairly trivial, while 10 000 x 10 000 is 
> impossible so far; on the other hand, 50 000 x 500 is still solvable 
> in a very reasonable timeframe - faster than, for example, 10 000 x 2500).
>
> I am wondering which of the two options is more likely to be correct:
> 1. I am using the solver improperly and there is a way to make this 
> run much faster.
> 2. I should focus on ways to reformulate the problem (maybe find a way 
> to break the dense block into something sparser?) rather than trying 
> to fine-tune the settings of the solver.
>
> Any tips on solver usage for such a problem would be greatly 
> appreciated.
>
> Thank you very much!
>

