[Coin-discuss] Running out of memory in large problems
John J Forrest
jjforre at us.ibm.com
Mon Sep 25 12:33:37 EDT 2006
My best guess would be that the factorization is getting dense. If you
allow more output and set the Clp log level to 3 or greater, you should see
messages like
length of U nnnnn, length of L nnnnn
If these increase dramatically, that is probably the problem.
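For reference, here is a minimal sketch of how one might raise the Clp log
level from C++ (assuming the usual setup where the CbcModel sits on an
OsiClpSolverInterface; the helper function name is just illustrative, and
this is untested):

```cpp
// Illustrative sketch: raise Clp's log level to 3+ so the per-refactorization
// "length of U ..., length of L ..." statistics show up in the solver output.
// Assumes the CbcModel was built on an OsiClpSolverInterface.
#include "CbcModel.hpp"
#include "OsiClpSolverInterface.hpp"

void enableFactorizationLogging(CbcModel &model)  // hypothetical helper
{
    OsiClpSolverInterface *osiClp =
        dynamic_cast<OsiClpSolverInterface *>(model.solver());
    if (osiClp)
        osiClp->getModelPtr()->setLogLevel(3);  // underlying ClpSimplex
}
```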
Theoretically, with 20K variables you could have a factorization taking 400
million doubles, and even more in the associated lists. It probably is not
that bad, but .... If you build with Lapack then DENSE_CODE will be set in
the various CoinFactorization files; you would then not get the associated
list data, and it would be much faster.
If that is not the case, then feel free to send me the artificial problem
and I will see what is going on.
John Forrest
From: acw at ascent.com
Sent by: coin-discuss-bounces at list.coin-or.org
Date: 09/25/2006 12:04 PM
To: coin-discuss at list.coin-or.org
Subject: [Coin-discuss] Running out of memory in large problems
We are happily using Cbc in a scheduling and planning application. We run
mostly in a Windows environment.
Recently, as the size of our problems increased, we have started running
into what we take to be memory problems. It may turn out that these
problems are simply too big, in which case we'll look for simplifications.
But perhaps there is an easier answer, and we're hoping that the
collective wisdom of COIN-Discuss can come to our rescue.
The problems in question have on the order of 80 K variables, and
something like 300 K constraints. Cbc fails dramatically, throwing an
exception and exiting abnormally. (Because we are running Cbc via JNI
inside a thin Java interface layer, it's hard for us to see the details of
the exception, but we are working on that.)
We've duplicated the failure with artificial test problems with as few as
20 K variables and about 265 K constraints, with fewer than 8 M nonzero
coefficients.
The failure occurs during the call to CbcModel::initialSolve that precedes
initial cut generation; that is, it happens very early, well before the
branch-and-cut phase properly begins. We narrowed it down to
ClpSimplex::dual. In the seconds prior to failure, we observe memory
usage climbing sharply.
We hypothesize that Clp is unpacking the problem in some way; clearly not
to a full (non-sparse) matrix representation, but rather to some
intermediate form.
It's possible that many of our constraints are redundant; we're
investigating that ourselves. But perhaps we don't need to make the
effort: maybe there is some way to get COIN to prune the redundant
constraints cautiously, without inflating the matrix? Thanks in advance
for any advice that you might have.
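[One route worth trying for exactly this is Clp's own presolve, which drops
redundant and forcing rows before the simplex runs. A sketch, untested and
with an illustrative tolerance value; the helper name is hypothetical:

```cpp
// Sketch: use Clp's presolve to strip redundant rows before solving.
// ClpPresolve builds a reduced copy of the model; postsolve maps the
// solution back onto the original.
#include "ClpSimplex.hpp"
#include "ClpPresolve.hpp"

void solveWithPresolve(ClpSimplex &model)  // hypothetical helper
{
    ClpPresolve pinfo;
    // 1.0e-8 is an illustrative feasibility tolerance
    ClpSimplex *reduced = pinfo.presolvedModel(model, 1.0e-8);
    if (reduced) {
        reduced->dual();        // solve the (smaller) presolved problem
        pinfo.postsolve(true);  // write the solution back into 'model'
        delete reduced;
    }
}
```

Since presolve works on the sparse matrix copy, it should not inflate the
matrix the way the questioner fears. -ed.]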
_______________________________________________
Coin-discuss mailing list
Coin-discuss at list.coin-or.org
http://list.coin-or.org/mailman/listinfo/coin-discuss