[Coin-lpsolver] St9bad_alloc from executable

Matthew Saltzman mjs at ces.clemson.edu
Wed Aug 3 13:01:46 EDT 2005


On Wed, 3 Aug 2005, John J Forrest wrote:
>
> Adam,
>
> That is a very dense problem, and you may run into accuracy problems anyway.
> Does anyone know if 32-bit Linux allows more than 2 GB per process?

In Red Hat Enterprise Linux, the per-process limit was 3 GB in v2.1 and 
approximately 4 GB in later versions.  Also see the ulimit shell command 
for managing individual users' resource limits.

>
> I could try catching some of these memory allocation exceptions, but at
> present I do not.  Are you using presolve?  If not, see if it makes the problem

The problem with catching memory allocation errors on Linux is that you 
basically can't.  As I understand it, Linux uses a lazy (overcommit) 
allocation strategy: malloc() will succeed even if there is not enough 
free memory to satisfy the request right now, on the assumption that 
enough may be free by the time the memory is actually used.  Pages are 
then supplied to the process as they are touched, so the process only 
fails to get memory when it attempts to write to it; by then it is too 
late for the allocation call to report failure.

This policy has its bad points (obviously) and its good points.  But we're 
stuck with it.
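
To make that concrete, here is a small, untested sketch (not Clp code) of
why wrapping allocations in try/catch does not help much.  On a 64-bit box
with default overcommit settings the new[] below typically succeeds, and
the failure, if any, shows up only when the pages are touched, at which
point the kernel's OOM killer terminates the process instead of letting
std::bad_alloc propagate:

    #include <cstring>
    #include <iostream>
    #include <new>

    int main() {
        const std::size_t n = 3UL * 1024 * 1024 * 1024;  // 3 GB, a made-up size
        try {
            char *p = new char[n];   // usually succeeds under overcommit
            std::memset(p, 1, n);    // failure happens here, if at all: the
                                     // OOM killer sends SIGKILL, so the catch
                                     // block below never runs
            delete[] p;
        } catch (const std::bad_alloc &e) {
            // On a 32-bit system the allocation itself can fail (address
            // space exhausted) and we do get here; under overcommit on a
            // 64-bit system we usually do not.
            std::cerr << "caught bad_alloc: " << e.what() << "\n";
            return 1;
        }
        return 0;
    }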

> significantly smaller.  If it does not make it much smaller, then make sure
> you do not use it, as that will be taking a lot of memory.  If it helps to
> some extent, then try presolving to file.
>
> Otherwise, look at line 612 of CoinFactorization1.cpp.  Clp may think it
> needs more memory, but it may be over-estimating what it needs, so you
> could let it print that message (and maybe the values of lengthAreaU_ and
> L_).  If areaFactor_ is increasing, you could try using something like
> CoinMin(2.0, areaFactor_) rather than areaFactor_ itself.
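
For anyone trying that, here is a standalone sketch of the capped-growth
idea.  The member names are taken from John's description; I have not
tried this against the actual code at line 612, and the values are made up:

    #include <algorithm>
    #include <iostream>

    int main() {
        // Stand-ins for CoinFactorization's members.
        double areaFactor_  = 3.7;        // hypothetical current growth factor
        long   lengthAreaU_ = 40000000;   // hypothetical current U area length

        // Cap the growth factor at 2.0, i.e. CoinMin(2.0, areaFactor_) in
        // CoinUtils, instead of letting areaFactor_ drive the size directly.
        double growth = std::min(2.0, areaFactor_);
        long   capped = static_cast<long>(growth * lengthAreaU_);

        std::cout << "lengthAreaU_ = " << lengthAreaU_
                  << "  areaFactor_ = " << areaFactor_
                  << "  capped new length = " << capped << std::endl;
        return 0;
    }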
>
> I doubt whether any of that will work, but it may be worth a try.  Probably
> the best way of solving your problem would be to use an interior point
> method - you could try Clp after downloading WSSMP or a similar high-quality
> Cholesky code.  But again, you may need too much memory.
>
> Sorry I cannot help more, but you are at the boundary of what is possible.
>
> John Forrest
>
>
>
>             "Adam Smith"
>             <adam at viratech.co
>             m>                                                         To
>             Sent by:
>             coin-lpsolver-bou         <coin-lpsolver at list.coin-or.org>
>             nces at list.coin-or                                          cc
>             .org
>
>
>             08/02/2005 08:11
>             PM
>                                                                   Subject
>                                       [Coin-lpsolver] St9bad_alloc from
>                                       executable
>
>
>
>
>
>
>
>
>
>
> Hello All,
>
> I have a problem with about 60 million nonzero entries (roughly 2e5 by
> 2e5).
> I tried solving it using the out-of-the-box Clp executable, running on
> Fedora Core 4.  I got the following message after roughly 10 hours:
>
> terminate called after throwing an instance of 'std::bad_alloc'
>  what():  St9bad_alloc
> Aborted
>
> I tried it on two different computers, both with 2GB of RAM, and got that
> error at the same point in both (in terms of the objective value).  I tried
> it on a computer with 4GB of RAM, and the execution got just *slightly*
> further (not twice as far!).
>
> Unfortunately, I'm not very used to Linux, much less debugging via ssh!
>
> Any hints?  Is it hitting an address space limit?  If so, why did doubling
> the memory only let it get slightly further?
>
> Thanks for any help.
>
> Best,
> Adam Smith
>
> _______________________________________________
> Coin-lpsolver mailing list
> Coin-lpsolver at list.coin-or.org
> http://list.coin-or.org/mailman/listinfo/coin-lpsolver

-- 
 		Matthew Saltzman

Clemson University Math Sciences
mjs AT clemson DOT edu
http://www.math.clemson.edu/~mjs


