[Coin-discuss] Running out of memory in large problems
acw at ascent.com
Mon Sep 25 12:04:15 EDT 2006
We are happily using Cbc in a scheduling and planning application. We run
mostly in a Windows environment.
Recently, as the size of our problems increased, we have started running
into what we take to be memory problems. It may turn out that these
problems are simply too big, in which case we'll look for simplifications.
But perhaps there is an easier answer, and we're hoping that the
collective wisdom of COIN-Discuss can come to our rescue.
The problems in question have on the order of 80 K variables, and
something like 300 K constraints. Cbc fails dramatically, throwing an
exception and exiting abnormally. (Because we are running Cbc via JNI
inside a thin Java interface layer, it's hard for us to see the details of
the exception, but we are working on that.)
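(For anyone else in the same situation: one way we are considering to surface the details is to catch the solver's exception on the C++ side of the JNI boundary before it can terminate the process. The sketch below is illustrative only; the wrapper function is hypothetical, but `CoinError` and its accessors are the real CoinUtils exception class.)

```cpp
// Hypothetical JNI wrapper sketch: catch CoinError in C++ so the message
// can be passed up to the Java layer instead of crashing the process.
// solveAndReport() and the model plumbing are assumptions for illustration.
#include <exception>
#include <string>
#include "CoinError.hpp"   // CoinUtils exception class
#include "CbcModel.hpp"

std::string solveAndReport(CbcModel &model)
{
    try {
        model.initialSolve();      // where we observe the failure
        model.branchAndBound();
        return "ok";
    } catch (CoinError &e) {
        // CoinError carries the message plus the class/method that threw it.
        return "CoinError in " + e.className() + "::" + e.methodName()
               + ": " + e.message();
    } catch (std::exception &e) {
        return std::string("std::exception: ") + e.what();
    }
}
```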
We've duplicated the failure with artificial test problems with as few as
20 K variables and about 265 K constraints, with less than 8 M nonzero
coefficients.
The failure occurs during the call to CbcModel::initialSolve that precedes
initial cut generation; that is, it happens very early, well before the
branch-and-cut phase properly begins. We narrowed it down to
ClpSimplex::dual. In the seconds prior to failure, we observe memory
usage climbing sharply.
We hypothesize that Clp is unpacking the problem in some way: clearly not
into a fully dense matrix representation, but into some intermediate form
substantially larger than the packed input.
It's possible that many of our constraints are redundant; we're
investigating that ourselves. But perhaps we don't need to make the
effort: maybe there is some way to get COIN to prune the redundant
constraints cautiously, without inflating the matrix? Thanks in advance
for any advice that you might have.
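(In case it helps frame the question: the pruning we have in mind looks like what Clp's standalone presolve is meant to do. The sketch below follows the documented `ClpPresolve` pattern; the tolerance and the surrounding setup are our assumptions, not a tested recipe, and we realize Clp may already be presolving internally.)

```cpp
// Sketch: run Clp's standalone presolve before solving, so redundant rows
// are dropped without densifying the matrix. Tolerance value is assumed.
#include "ClpSimplex.hpp"
#include "ClpPresolve.hpp"

int solvePresolved(ClpSimplex &model)
{
    ClpPresolve pinfo;
    // presolvedModel() returns NULL if presolve detects infeasibility.
    ClpSimplex *presolved = pinfo.presolvedModel(model, 1.0e-8);
    if (!presolved)
        return -1;                 // infeasible (or presolve gave up)
    presolved->dual();             // solve the reduced problem
    pinfo.postsolve(true);         // map the solution back to the original
    model.checkSolution();         // clean up small infeasibilities
    delete presolved;
    return model.status();
}
```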