[Dip] Dip/py code issues, suggestions for additional functionality and question about branching decisions
Marcus Kaiser
marcus.kaiser at mytum.de
Mon Mar 21 09:59:37 EDT 2016
Hello Dip/py community,
In my master's thesis I am dealing with a Dantzig-Wolfe decomposition and
the resulting column generation. For the implementation I use the DIP
framework, version 0.92.2, on Windows 7 with MSVC 2013. I appreciate that
you provide such an extensive framework. While working with it and
debugging my code, the following things came to my mind. Some of them are
performance and implementation related; others address the functionality
of the algorithms in DIP. Additionally, I have a question about the
treatment of branching decisions.
[Implementation] I found Dippy to suffer from a *memory leak*.
Since it interfaces with Python, it is responsible for maintaining the
reference counters of the Python objects it deals with. This is done in
some parts of the code, but far from all. A crucial situation which
came to my attention is the method DippyDecompApp::solveRelaxed. The
retrieved columns are converted to objects of the class DecompVar, yet
the reference counter on the original Python objects is not decreased,
which prevents them from ever being deleted. Hence, for large problem
instances the memory consumption grows without bound.
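To illustrate the pattern I mean, here is a rough sketch (placeholder
names, not the actual Dippy source) of releasing the owned reference
returned by the Python call once the columns have been converted:

// Sketch only: the callable and method names are placeholders.
#include <Python.h>

void convertColumnsSketch(PyObject* pyApp /* assumed owner of the callback */)
{
   // PyObject_CallMethod returns a NEW reference that the caller owns.
   PyObject* pyVarList = PyObject_CallMethod(pyApp, "solveRelaxed", NULL);
   if (pyVarList == NULL) {
      PyErr_Print();
      return;
   }

   const Py_ssize_t n = PyList_Size(pyVarList);
   for (Py_ssize_t i = 0; i < n; ++i) {
      // PyList_GetItem returns a BORROWED reference: no DECREF needed here.
      PyObject* pyVar = PyList_GetItem(pyVarList, i);
      // ... convert pyVar into a DecompVar ...
      (void)pyVar;
   }

   // Without this DECREF, the list and the variables it holds are never
   // reclaimed, which is exactly the leak described above.
   Py_DECREF(pyVarList);
}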
[Implementation] The method AlpsDecompTreeNode::getBranchedVar seems
to be meant to return the *variable which is branched on*. However, only
one of the four vectors potentially holding branching decisions is
checked. I think both lower and upper bounds need to be checked
for at least one branch, e.g. downBranchLB_ and downBranchUB_.
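A minimal sketch of what I have in mind (the vector types are assumed
from the member names; the actual types in AlpsDecompTreeNode may
differ):

// Sketch: inspect both bound vectors of the down branch, since a down
// branch may tighten either the lower or the upper bound of the variable.
#include <utility>
#include <vector>

typedef std::vector<std::pair<int, double> > BranchBounds;

int getBranchedVarSketch(const BranchBounds& downBranchLB,
                         const BranchBounds& downBranchUB)
{
   if (!downBranchLB.empty()) {
      return downBranchLB.front().first;   // index of the branching variable
   }
   if (!downBranchUB.empty()) {
      return downBranchUB.front().first;
   }
   return -1;                              // no branching decision recorded
}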
[Implementation] In DecompAlgo::processNode, line 1855, the
processing of a node is terminated if the *lower bound meets the global
upper bound*, taking numerical inexactness into account. The calling
method AlpsDecompTreeNode::process repeats this check in line 309, but in
an exact fashion, which is why the node might not be fathomed when it
should be. In my opinion, this repeated condition should match the
original one. Or maybe even better, DecompAlgo::getStopCriteria could be
used?
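As a small illustration of the point (names and tolerance are made up,
not the actual DIP members):

// Sketch: fathoming test using the same tolerance in caller and callee.
bool canFathomSketch(double nodeLowerBound, double globalUpperBound)
{
   const double tol = 1.0e-6;   // assumed numerical tolerance
   // An exact test (nodeLowerBound >= globalUpperBound) can keep a node
   // alive whose bounds differ only by numerical noise; comparing with the
   // same tolerance as in DecompAlgo::processNode avoids that mismatch.
   return nodeLowerBound >= globalUpperBound - tol;
}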
[Implementation] The member DecompAlgo::m_relGap is set in
DecompAlgo::updateObjBound and used in DecompAlgo::isGapTight.
DecompAlgo::m_relGap, however, is not reset when entering
DecompAlgo::processNode. Therefore, it has an *invalid value* based on a
node processed before, probably representing a tight *gap*. I think
this might lead to stopping the processing of a node immediately, as it
is mistakenly believed to have a tight gap, cf. DecompAlgo::phaseUpdate,
line 4274. I suggest resetting DecompAlgo::m_relGap appropriately or
replacing it completely by calls to DecompAlgo::getNodeLPGap.
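What I have in mind looks roughly like this (a stand-in class, not the
actual DecompAlgo code):

// Sketch: invalidate the cached relative gap whenever node processing starts.
#include <limits>

struct RelGapSketch {
   double m_relGap;

   RelGapSketch() : m_relGap(std::numeric_limits<double>::max()) {}

   void beginNode()
   {
      // Reset the gap so a tight value from the previous node cannot make
      // isGapTight() return true before the first bound update of this node.
      m_relGap = std::numeric_limits<double>::max();
   }

   bool isGapTight(double tolerance) const
   {
      return m_relGap <= tolerance;
   }
};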
[Functionality/Performance] For the problem I consider, the MILP
formulation of the subproblems is quite large. Therefore, I use a dummy
formulation and prevent DIP from looking at it by always providing
columns with negative reduced cost via my implementation of
DecompApp::solveRelaxed, if such columns exist. I do so in a two-step
approach: in a first step the subproblem is solved heuristically; if this
does not lead to new columns, an exact method is applied. Since the exact
method is costly, I would like to stick to the heuristic as long as there
are new columns for /some/ subproblem. This stands in contrast to the
current interface of DIP, as it tries to solve subproblems that yielded
no new columns using the MILP formulation.
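For clarity, a condensed sketch of this two-step pricing (simplified
placeholder types and signature; the real DecompApp::solveRelaxed
interface differs):

// Sketch of heuristic-first, exact-second pricing for one subproblem.
#include <functional>
#include <list>
#include <vector>

struct ColumnSketch {
   std::vector<int>    indices;   // sparse representation of the column
   std::vector<double> values;
   double              redCost;
};

enum PricingStatus { PricingFeasible, PricingOptimal, PricingNoColumns };

typedef std::function<bool(int, const double*, std::list<ColumnSketch>&)> Pricer;

PricingStatus solveRelaxedSketch(int block, const double* redCostX,
                                 std::list<ColumnSketch>& newCols,
                                 const Pricer& priceHeuristically,
                                 const Pricer& priceExactly)
{
   // Step 1: cheap heuristic pricing; only feasibility is guaranteed,
   // so no valid lower bound can be derived from the result.
   if (priceHeuristically(block, redCostX, newCols)) {
      return PricingFeasible;
   }
   // Step 2: expensive exact pricing, used only when the heuristic fails.
   if (priceExactly(block, redCostX, newCols)) {
      return PricingOptimal;
   }
   return PricingNoColumns;
}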
Furthermore, the subproblems resemble each other for my problem. This
would allow solving them in an accumulated fashion, as far as the
reduced costs and branching decisions allow. The latter is certainly
true for the root node of the branch-and-bound tree. Hence, I would
suggest redesigning the interface of DIP to enable a solution process
for *all the subproblems at once* if the user provides one. Maybe the
current treatment could act as a fallback.
[Functionality] Because my algorithm for the subproblems is partly
heuristic (see the previous paragraphs), it is not until the last
iterations of solving a single node that it provides not only feasible
(DecompSolStatFeasible) but optimal solutions (DecompSolStatOptimal).
Thus, a lower bound for the node can only be computed at the end of the
solution process of each node. This prevents me from using the
*tailing-off mechanism* provided by DIP (DecompAlgo::isTailoffLB), as it
is based on the lower bound. For that reason, I think it would be useful
to introduce an alternative tailing-off control based on the progression
of the upper bound of the relaxed problem. Would this be a reasonable
approach?
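The kind of control I am thinking of could look like this (illustrative
names; window and threshold chosen arbitrarily):

// Sketch: detect tailing off from the progression of the restricted master
// objective (an upper bound on the node relaxation) instead of the dual bound.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <deque>

bool isTailingOffUBSketch(const std::deque<double>& recentMasterObj,
                          std::size_t window = 10,
                          double minRelImprovement = 1.0e-4)
{
   if (recentMasterObj.size() < window) {
      return false;                               // not enough history yet
   }
   const double oldVal = recentMasterObj[recentMasterObj.size() - window];
   const double newVal = recentMasterObj.back();
   const double denom  = std::max(std::fabs(oldVal), 1.0);
   // Tailing off: the master objective (minimization) has barely improved
   // over the last 'window' pricing iterations.
   return (oldVal - newVal) / denom < minRelImprovement;
}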
[Implementation/Functionality] If a node is solved to optimality in
the pricing phase (PHASE_PRICE2), no more columns are generated and the
algorithm switches to PHASE_CUT (cf. DecompAlgo::phaseUpdate, line
4215). This prevents stopping on a tight gap, as checked in
DecompAlgo::phaseUpdate, line 4274; thus, the switch to the cutting
phase is carried out. However, there is a parameter called PCStrategy.
Setting it to favorPrice will make DecompAlgo::phaseUpdate switch back
from PHASE_CUT to PHASE_PRICE2 immediately in line 4153, as long as the
limit on the number of price calls is not reached. This can result in an
alternation of the two phases and hence an *infinite loop*.
I found the remedy of setting RoundCutItersLimit to 0, which
probably suits my intention better. Yet, I wonder what the actual use of
the PCStrategy parameter is. At the moment it seems to be redundant, as
RoundPriceItersLimit and RoundCutItersLimit are the controlling
parameters.
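For reference, the workaround I use at the moment; I am assuming the
usual INI-style DIP parameter file here, so please correct me if the
section name or syntax is off:

[DECOMP]
RoundCutItersLimit = 0

With the cut rounds disabled, the alternation described above no longer
occurs in my runs.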
[Performance/Implementation] I found that the *checks for duplicate
and parallel columns* in DecompApp::addVarsToPool, lines 5565 sqq., are
quite expensive. First of all, I believe that the check for parallel
columns in line 5609 is redundant if the parameter ParallelColsLimit is
1.0 (as pointed out in the comment preceding that line of code). Since
this is the default value of the parameter, I recommend checking for
parallel columns only if the parameter is smaller than 1.0. Apart from
that, in my understanding of column generation, columns with negative
reduced cost cannot have been included before. This would render it
unnecessary to check for duplicates among the existing columns. Maybe
this is not true for some configurations, like DualStab? As mentioned in
the comments in the code, the hashing of the columns is not efficient at
all. A comparison based on these hashes is even more expensive than a
direct comparison of the sparse representations. A better hash and a
conditional exact comparison would result in a noticeable speed-up, I
suppose.
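To sketch what I mean by a better hash with a conditional exact
comparison (stand-alone illustration, not the DIP data structures):

// Sketch: cheap integer hash of the sparse representation; an exact
// element-wise comparison is only performed when the hashes collide.
#include <cstddef>
#include <functional>
#include <vector>

struct SparseColSketch {
   std::vector<int>    indices;
   std::vector<double> values;
};

std::size_t hashColumn(const SparseColSketch& col)
{
   std::size_t seed = col.indices.size();
   for (std::size_t k = 0; k < col.indices.size(); ++k) {
      // Combine index and value hashes (hash_combine-style mixing).
      seed ^= std::hash<int>()(col.indices[k]) + 0x9e3779b9 + (seed << 6) + (seed >> 2);
      seed ^= std::hash<double>()(col.values[k]) + 0x9e3779b9 + (seed << 6) + (seed >> 2);
   }
   return seed;
}

bool sameColumn(const SparseColSketch& a, const SparseColSketch& b)
{
   // Cheap rejection by hash; exact comparison only on a hash match.
   if (hashColumn(a) != hashColumn(b)) {
      return false;
   }
   return a.indices == b.indices && a.values == b.values;
}

In DIP the hash could of course be cached with the column instead of
being recomputed on every comparison.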
[Question] Finally, I did not understand the usage of the parameters
BranchEnforceInMaster and BranchEnforceInSubProb and would be grateful
if you could explain their basic meaning to me. They seem to
control the treatment of the branching decisions. BranchEnforceInMaster
suggests including the decisions as new rows in the master problem and
enforcing them via reduced costs, whereas BranchEnforceInSubProb suggests
making it the subproblems' task to deal with them. In the latter case,
what happens to conditions which include master-only variables and
therefore cannot be treated in the subproblems? What is the best way to
get the branching decisions for the current node when using
BranchEnforceInSubProb?
Thank you in advance,
Marcus