[Coin-lpsolver] FYI: overcommit memory on linux

Matthew Guthaus mguthaus at eecs.umich.edu
Sat Sep 3 16:19:20 EDT 2005


Hi all,

I ran into an interesting problem running CLP on GNU/Linux. It's
related to memory allocation with "new" and an OS feature called
memory overcommit. With this option enabled, when new is called
and there is not enough memory, the allocation succeeds anyway in
the hope that the memory will become available later. The problem
is that, particularly in numerical programs, this may not be
realistic. When CLP goes to actually use the memory, the access
fails and you get strange memory problems (seg faults, etc.).
Basically, with this option on there is no way to check at
allocation time whether the memory was actually granted. When in
doubt, turn it off. There are many threads on the linux kernel
mailing list about this. Here is a snippet from "man malloc"
describing the issue:

BUGS
        By default, Linux follows an optimistic memory allocation  
strategy.  This means that when malloc() returns non-NULL there is  
no  guarantee  that the  memory really is available. This is a really  
bad bug.  In case it turns out that the system is out of memory, one  
or more processes will be killed by the infamous OOM killer.  In case  
Linux is employed under circumstances where it would be less  
desirable to suddenly  lose  some  randomly  picked processes, and  
moreover the kernel version is sufficiently recent, one can switch  
off this overcommitting behavior using a command like
               # echo 2 > /proc/sys/vm/overcommit_memory
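One partial workaround at the application level is to touch every page right after allocating, so that a shortage shows up at the allocation site instead of deep inside the solver. Here is a minimal sketch (the helper name `allocate_and_touch` is my own, not part of CLP; note that under overcommit the touch can still trigger the OOM killer rather than a clean failure, so fixing the kernel setting is the real cure):

```cpp
#include <cstddef>
#include <cstdio>
#include <cstring>
#include <new>

// Hypothetical helper: allocate with the non-throwing form of new and
// immediately write to the whole region, forcing the kernel to back
// every page now rather than at first use later on.
double* allocate_and_touch(std::size_t n) {
    double* p = new (std::nothrow) double[n];
    if (p != nullptr) {
        std::memset(p, 0, n * sizeof(double));  // fault all pages in
    }
    return p;
}

int main() {
    std::size_t n = 1 << 20;  // ~8 MB of doubles, small enough to succeed
    double* p = allocate_and_touch(n);
    std::printf("%s\n", p ? "allocation usable" : "allocation failed");
    delete[] p;
    return 0;
}
```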
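For checking the current setting and making the change stick, something like the following should work (the sysctl name is standard on recent Linux kernels; persisting it via /etc/sysctl.conf is an assumption about your distribution):

```shell
# Show the current policy: 0 = heuristic overcommit (the default),
# 1 = always overcommit, 2 = never overcommit beyond the commit limit
cat /proc/sys/vm/overcommit_memory

# Turn overcommitting off for the running kernel (requires root)
sysctl -w vm.overcommit_memory=2

# To survive a reboot, add this line to /etc/sysctl.conf:
#   vm.overcommit_memory = 2
```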



-----------------------------------------------
Matthew Guthaus
mguthaus at eecs.umich.edu
http://homepage.mac.com/mguthaus