<div dir="ltr">Hi Marcus,<div><br></div><div>Thanks very much for the detailed feedback! Sorry for the delay in responding. I had a quick look over your comments and they all seem reasonable. The implementation issues look like they can be fixed fairly quickly and easily. For the others, we need to have a more detailed look to see what can be done in the short and long run. In the next couple of days, I'll try to respond with detailed comments on each of your issues. Of course, if you want to provide patches for any of these, that would be most welcome :).</div><div><br></div><div>Cheers,</div><div><br></div><div>Ted</div><div><br></div><div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Mar 21, 2016 at 9:59 AM, Marcus Kaiser <span dir="ltr"><<a href="mailto:marcus.kaiser@mytum.de" target="_blank">marcus.kaiser@mytum.de</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div text="#000000" bgcolor="#FFFFFF">
Hello Dip/py community,<br>
in my master's thesis I'm dealing with a Dantzig-Wolfe decomposition
and the resulting column generation. For the implementation I use
the DIP framework, version 0.92.2, on Windows 7 with MSVC 2013. I
appreciate that you provide such an extensive framework. While
working with it and debugging my code, the following things came to
my mind. Some of them are performance and implementation related;
others address the functionality of the algorithms in DIP.
Additionally, I have a question about the treatment of branching
decisions.<br>
<br>
<i>[Implementation]</i> I found Dippy to suffer from a <b>memory
leak</b>. Since it interfaces with Python, it is responsible
for maintaining the reference counts of the Python objects it deals
with. This is done in some parts of the code, but by far not all. A
crucial case that came to my attention is the method <tt>DippyDecompApp::solveRelaxed</tt>.
The retrieved columns are converted to objects of the class <tt>DecompVar</tt>,
yet the reference count of the original Python object is not
decreased, which prevents it from ever being deleted. Hence, for large
problem instances memory consumption grows without bound.<br>
<br>
<i>[Implementation]</i> The method <tt>AlpsDecompTreeNode::getBranchedVar</tt>
seems to be meant to return the <b>variable which is branched
on</b>. However, only one of the four vectors potentially holding
branching decisions is checked. I think both lower and upper
bounds need to be checked for at least one branch, e.g. <tt>downBranchLB_</tt>
and <tt>downBranchUB_</tt>.<br>
<br>
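To make the suggestion concrete, a corrected lookup could consult all four vectors rather than just one. This is a hypothetical Python sketch (the data layout of `(variable_index, bound)` pairs is invented for illustration, not DIP's actual representation):

```python
# Sketch of a corrected getBranchedVar: the branching decision may be
# recorded in any of the four bound vectors, so all of them are checked.
def get_branched_var(down_lb, down_ub, up_lb, up_ub):
    """Each argument is a list of (variable_index, bound) pairs."""
    for bounds in (down_lb, down_ub, up_lb, up_ub):
        if bounds:
            return bounds[0][0]   # index of the variable branched on
    return -1                     # no branching decision stored

# A down-branch recorded only as an upper bound is still found:
assert get_branched_var([], [(3, 0.0)], [], []) == 3
assert get_branched_var([], [], [], []) == -1
```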
<i>[Implementation]</i> In <tt>DecompAlgo::processNode</tt>,
line 1855, the processing of a node is terminated if the <b>lower
bound meets the global upper bound</b>, taking numeric inexactness
into account. The calling method <tt>AlpsDecompTreeNode::process</tt>
repeats this check in line 309, but in an exact fashion, so the
node might not be fathomed when it should be. In my opinion this
repeated condition should match the original one. Or, maybe even
better, <tt>DecompAlgo::getStopCriteria</tt> could be used?<br>
<br>
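The difference between the two checks can be shown in a few lines. A minimal sketch, assuming a relative tolerance (the tolerance value and function names are my own, not DIP's):

```python
# Sketch of the fathoming test with a relative tolerance, so that the
# check in AlpsDecompTreeNode::process matches the tolerant one in
# DecompAlgo::processNode. EPS is an assumed tolerance.
EPS = 1e-6

def should_fathom(node_lb, global_ub):
    # fathom as soon as the lower bound reaches the upper bound,
    # up to relative numeric inexactness
    return node_lb >= global_ub - EPS * (1.0 + abs(global_ub))

assert should_fathom(99.9999999, 100.0)   # tolerant check fathoms the node
assert not (99.9999999 >= 100.0)          # the exact comparison would not
```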
<i>[Implementation]</i> The member <tt>DecompAlgo::m_relGap</tt> is
set in <tt>DecompAlgo::updateObjBound</tt> and used in <tt>DecompAlgo::isGapTight</tt>.
<tt>DecompAlgo::m_relGap</tt>, however, is not reset when entering <tt>DecompAlgo::processNode</tt>.
Therefore, it holds an <b>invalid value</b> based on a previously
processed node, probably representing a tight <b>gap</b>. I think this
might lead to the processing of a node being stopped immediately,
as it is mistakenly believed to have a tight gap, cf. <tt>DecompAlgo::phaseUpdate</tt>,
line 4274. I suggest resetting <tt>DecompAlgo::m_relGap</tt> appropriately
or replacing it completely with calls to <tt>DecompAlgo::getNodeLPGap</tt>.<br>
<br>
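What I have in mind is roughly the following bookkeeping, sketched in Python with invented names (the gap formula is the usual relative-gap definition, not necessarily DIP's exact code):

```python
# Sketch: resetting the relative gap on entry to each node avoids
# inheriting the (possibly tight) gap of the previously processed node.
def node_lp_gap(lb, ub):
    return (ub - lb) / max(abs(ub), 1e-10)

class NodeState:
    def __init__(self):
        self.rel_gap = float("inf")   # reset per node, never inherited

    def update(self, lb, ub):
        self.rel_gap = node_lp_gap(lb, ub)

node = NodeState()
assert node.rel_gap == float("inf")   # fresh node: gap not yet tight
node.update(99.0, 100.0)
assert abs(node.rel_gap - 0.01) < 1e-12
```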
<i>[Functionality/Performance]</i> For the problem I consider, the
MILP formulation of the subproblems is quite large. Therefore, I use
a dummy formulation and prevent DIP from looking at it by always
providing columns with negative reduced cost via my implementation
of <tt>DecompApp::solveRelaxed</tt>, if such columns exist. I do so in a
two-step approach. In a first step the subproblem is solved
heuristically. If this does not lead to new columns, an exact method
is applied. Since the exact method is costly, I would like to stick
to the heuristic as long as there are new columns for <i>some</i>
subproblem. This stands in contrast to the current interface of DIP,
as it tries to solve subproblems without new columns using the MILP
formulation.<br>
Furthermore, the subproblems resemble each other for my problem.
This would allow solving them in an accumulated fashion, as far as
the reduced costs and branching decisions permit. The latter is
certainly true for the root node of the branch-and-bound tree.
Hence, I would suggest redesigning the interface of DIP to enable a
solution process for <b>all the subproblems at once</b> if the user
provides it. Maybe the current treatment could act as a fallback.<br>
<br>
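The pricing scheme I would like the interface to support can be sketched as follows. This is a toy Python model with hypothetical `solve_heuristic`/`solve_exact` stand-ins, not DIP's API:

```python
# Two-step pricing over all subproblems at once: try the heuristic on
# every subproblem first, and fall back to the exact method only if no
# subproblem yields a column with negative reduced cost.
def price_all(subproblems, solve_heuristic, solve_exact):
    cols = [c for sp in subproblems for c in solve_heuristic(sp)]
    if cols:                     # some subproblem produced new columns
        return cols
    return [c for sp in subproblems for c in solve_exact(sp)]

# Toy data: the heuristic finds a column for subproblem 0, so the
# costly exact solver is never called.
calls = []
heur = lambda sp: ([-1.0] if sp == 0 else [])
def exact(sp):
    calls.append(sp)
    return [-2.0]

assert price_all([0, 1], heur, exact) == [-1.0]
assert calls == []               # exact method was skipped entirely
```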
[<i>Functionality</i>] Because my algorithm for the
subproblem is partly heuristic (see previous paragraphs), it is not
until the last iterations of solving a single node that it provides
not only feasible (<tt>DecompSolStatFeasible</tt>) but optimal
solutions (<tt>DecompSolStatOptimal</tt>). Thus, a lower bound for
the node can only be computed at the end of the solution process of
each node. This prevents me from using the <b>tailing-off mechanism</b>
provided by DIP (<tt>DecompAlgo::isTailoffLB</tt>), as it is based on
the lower bound. For that reason, I think it would be useful to
introduce an alternative tailing-off control based on the
progression of the upper bound of the relaxed problem. Would this
be a reasonable approach?<br>
<br>
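Concretely, I imagine something like the following upper-bound-based test, sketched in Python (window size and tolerance are assumptions, and the class is my own, not part of DIP):

```python
# Sketch of an upper-bound-based tailing-off test: stop pricing when
# the master (upper) bound has not improved by more than tol over the
# last k iterations.
from collections import deque

class UBTailoff:
    def __init__(self, k=5, tol=1e-4):
        self.history = deque(maxlen=k)   # last k upper bounds
        self.tol = tol

    def record(self, ub):
        self.history.append(ub)

    def tailing_off(self):
        if len(self.history) < self.history.maxlen:
            return False                 # not enough data yet
        return self.history[0] - self.history[-1] < self.tol

t = UBTailoff(k=3, tol=1e-4)
for ub in (100.0, 90.0, 89.99999):       # big drop, then stalling
    t.record(ub)
assert not t.tailing_off()               # still improving over the window
t.record(89.999985)
assert t.tailing_off()                   # progress below tol: tail off
```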
<i>[Implementation/Functionality]</i> If a node is solved to
optimality in the pricing phase (<tt>PHASE_PRICE2</tt>), no more
columns are generated and the algorithm switches to <tt>PHASE_CUT</tt>
(cf. <tt>DecompAlgo::phaseUpdate</tt>, line 4215). This prevents
stopping on a tight gap as checked in <tt>DecompAlgo::phaseUpdate</tt>,
line 4274, so the switch to the cutting phase is carried out.
However, there is a parameter called <tt>PCStrategy</tt>. Setting
it to <tt>favorPrice</tt> makes <tt>DecompAlgo::phaseUpdate</tt>
immediately switch the phase from <tt>PHASE_CUT</tt> back to <tt>PHASE_PRICE2</tt>
in line 4153, as long as the limit on the number of price
calls is not reached. This can result in alternation of the two
phases and hence an <b>infinite loop</b>. I found the remedy of
setting <tt>RoundCutItersLimit</tt> to 0, which probably suits
my intention better. Yet, I wonder what the actual use of the <tt>PCStrategy</tt>
parameter is. At the moment it seems to be redundant, as <tt>RoundPriceItersLimit</tt>
and <tt>RoundCutItersLimit</tt> are the controlling parameters.<br>
<br>
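The alternation I observed can be reproduced with a toy model of the phase transition. This is a deliberately simplified reconstruction of the control flow, not DIP's actual code:

```python
# Toy model of the phase transition: with PCStrategy=favorPrice and a
# nonzero RoundCutItersLimit the phase flips back and forth forever;
# with RoundCutItersLimit=0 the cut phase is never entered.
def next_phase(phase, favor_price, round_cut_iters_limit,
               price_calls, price_limit):
    if phase == "PRICE2":
        # node optimal in pricing: switch to cutting unless cuts disabled
        return "PRICE2" if round_cut_iters_limit == 0 else "CUT"
    if phase == "CUT" and favor_price and price_calls < price_limit:
        return "PRICE2"          # favorPrice flips straight back
    return phase

# With favorPrice and cutting enabled, the two phases alternate:
p = "PRICE2"
seen = [p]
for _ in range(4):
    p = next_phase(p, True, 1, 0, 10)
    seen.append(p)
assert seen == ["PRICE2", "CUT", "PRICE2", "CUT", "PRICE2"]

# RoundCutItersLimit = 0 breaks the cycle:
assert next_phase("PRICE2", True, 0, 0, 10) == "PRICE2"
```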
<i>[Performance/Implementation]</i> I found that the <b>checks for
duplicate and parallel columns</b> in <tt>DecompApp::addVarsToPool</tt>,
lines 5565 sqq., are quite expensive. First of all, I believe that
the check for parallel columns in line 5609 is redundant if the
parameter <tt>ParallelColsLimit</tt> is 1.0 (as pointed out in the
comment preceding that line of code). Since this is the default
value of the parameter, I recommend checking for parallel columns
only if the parameter is smaller than 1.0. Apart from that, in my
understanding of column generation, columns with negative reduced
cost cannot have been included before. This would render it
unnecessary to check for duplicates among the existing columns. Maybe
this is not true for some configurations like <tt>DualStab</tt>? As
mentioned in the comments in the code, the hashing of the columns is
not efficient at all. A comparison based on these hashes is even more
expensive than a direct comparison of the sparse representations. A
better hash and a conditional exact comparison would result in a
noticeable speed-up, I suppose.<br>
<br>
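To illustrate the kind of check I have in mind, here is a small Python sketch. All names and the cosine-based parallelism test are my own assumptions for illustration, not DIP's implementation:

```python
# Sketch of a cheaper duplicate/parallel check: hash the sparse
# representation directly, and test for parallel columns only when
# ParallelColsLimit < 1.0 (its default is 1.0, so the test is skipped).
import math

def col_key(col):
    # col: dict index -> value; a tuple of sorted pairs hashes exactly
    return tuple(sorted(col.items()))

def cosine(a, b):
    common = set(a) & set(b)
    dot = sum(a[i] * b[i] for i in common)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

def accept(col, pool, parallel_cols_limit=1.0):
    if col_key(col) in {col_key(c) for c in pool}:
        return False                      # exact duplicate
    if parallel_cols_limit < 1.0:         # skip the expensive test otherwise
        if any(cosine(col, c) > parallel_cols_limit for c in pool):
            return False
    return True

pool = [{0: 1.0, 2: 1.0}]
assert not accept({0: 1.0, 2: 1.0}, pool)        # duplicate rejected
assert accept({0: 2.0, 2: 2.0}, pool)            # parallel, but limit = 1.0
assert not accept({0: 2.0, 2: 2.0}, pool, 0.99)  # rejected when limit < 1
```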
<i>[Question]</i> Finally, I did not understand the usage of the
parameters <tt>BranchEnforceInMaster</tt> and <tt>BranchEnforceInSubProb</tt>,
and would be grateful if you could explain their basic meaning
to me. They seem to control the treatment of the branching
decisions. <tt>BranchEnforceInMaster</tt> suggests including the
decisions as new rows in the master problem and enforcing them via
reduced costs, whereas <tt>BranchEnforceInSubProb</tt> suggests
making it the subproblems' task to deal with them. In the latter case,
what happens to conditions which include master-only variables and
therefore cannot be treated in the subproblems? What is the best way
to retrieve the branching decisions for the current node when using
<tt>BranchEnforceInSubProb</tt>?<br>
<br>
Thank you in advance,<br>
Marcus<br>
</div>
<br>_______________________________________________<br>
Dip mailing list<br>
<a href="mailto:Dip@list.coin-or.org">Dip@list.coin-or.org</a><br>
<a href="http://list.coin-or.org/mailman/listinfo/dip" rel="noreferrer" target="_blank">http://list.coin-or.org/mailman/listinfo/dip</a><br></blockquote></div><br><br clear="all"><div><br></div>-- <br><div class="gmail_signature"><div dir="ltr">Dr. Ted Ralphs<br>Professor, Lehigh University<br>(610) 628-1280<br>ted 'at' lehigh 'dot' edu<br><a href="http://coral.ie.lehigh.edu/~ted" target="_blank">coral.ie.lehigh.edu/~ted</a><br></div></div>
</div>