[Coin-bcpdiscuss] BCP: node buffering?

Joerg Herbers Joerg.Herbers at inform-ac.com
Fri Mar 17 04:21:27 EST 2006


Francois,

Thanks for the pointers. Cuts didn't pose any problems (I don't use any), but deleting the user data of processed nodes seems to be a good idea: it reduces memory requirements considerably, to below 2GB in my case.

Thanks again,
Jörg


>>> On 3/16/2006 at 6:21 pm, in message
<Pine.LNX.4.61.0603161205150.14512 at farkas.tepper.cmu.edu>, Francois Margot
<fmargot at andrew.cmu.edu> wrote:

> Joerg:
> 
> On Thu, 16 Mar 2006, Joerg Herbers wrote:
> 
>> I've got problems using BCP when very large search trees are explored. My
>> BCP-based algorithm then runs out of memory, which may partly be attributed
>> to the use of user data in the search nodes.
> 
> Normally, user data associated with explored nodes can be deleted in the
> tree manager (see COIN/Examples/BAC for an illustration), so only user data 
> associated with unexplored nodes matters.
> If memory fills up because you get a large number of unexplored nodes
> in the queue, a quick fix is to change the exploration rule to
> depth-first search, or to switch to it after a while.
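> A rough sketch (untested) of how depth-first order could be forced through
> the tree manager user class: if I remember right, BCP uses
> compare_tree_nodes() to order the candidate nodes, so preferring the deeper
> node gives depth-first search. The class name MY_tm and the getDepth()
> accessor below are placeholders; check BCP_tm_node in your BCP version for
> the actual depth member.
> 
>     #include "BCP_tm_user.hpp"
> 
>     // MY_tm is your BCP_tm_user subclass (hypothetical name).
>     // Return true if node0 should be processed before node1.
>     bool MY_tm::compare_tree_nodes(const BCP_tm_node* node0,
>                                    const BCP_tm_node* node1)
>     {
>         // Prefer the deeper node, i.e. depth-first exploration.
>         // getDepth() is an assumed accessor; substitute the real
>         // depth field/method of BCP_tm_node in your release.
>         return node0->getDepth() > node1->getDepth();
>     }
> 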
> 
>>
>> By default, application memory is limited to 2GB on WinXP 32bit (see
>> http://www.brianmadden.com/content/content.asp?ID=69), but the situation is
>> not much better on Linux (32 bit). In fact, some gigabytes of memory are
>> quickly exhausted when exploring large search trees.
>>
> 
> Are you using cut generators? If yes, the problem might be a flaw (reported
> previously) in the memory management of the cuts, namely that copies
> of useless cuts are not deleted in the tree manager. To alleviate that,
> make sure that the following cut management parameters are set to
> their default values:
> 
> BCP_IneffectiveConstraints     2   // 0: keep all constraints
>                                    // 1: based on non-zero slack
>                                    // 2: based on zero dual value
>                                    // Default: 2
> 
> BCP_IneffectiveBeforeDelete    1   // Number of iterations a cut is
>                                    // considered ineffective before
>                                    // it is deleted
>                                    // Default: 1
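> 
> If your code reads its parameters from a BCP parameter file (the usual
> setup, with the file name passed on the command line, as in the BAC
> example), the defaults can be forced explicitly with two lines like:
> 
>     BCP_IneffectiveConstraints   2
>     BCP_IneffectiveBeforeDelete  1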
> 
> In addition, some cut generators (CglGomory, CglRedSplit) let you control
> the density of the generated cuts. Limiting the density to a lower
> level might also help.
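> For Gomory cuts, something along these lines should cap the number of
> non-zeros per generated cut (untested; the value 50 is only an
> illustration, and CglRedSplit has its own density-related setters, so
> check the Cgl headers of your version):
> 
>     #include "CglGomory.hpp"
> 
>     CglGomory gomory;
>     // Limit the number of non-zero coefficients in each generated cut;
>     // sparser cuts use less memory in the cut pools.
>     gomory.setLimit(50);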
> 
> The real solution to this problem might be implemented soon.
> 
>> Has anybody thought about implementing a node buffer (similar to the CPLEX
>> node file) before? Is there interest in having one?
>>
> 
> I don't think anybody is working on that, and having it would be a plus.
> However, it might not solve your problem if the culprit is cut management.
> 
> Francois




