[Dip] Issues related to Dippy source code

Ted Ralphs ted at lehigh.edu
Fri Nov 6 18:38:36 EST 2015


On Sat, Oct 24, 2015 at 8:36 AM, Marcus Kaiser <marcus.kaiser at mytum.de>
wrote:

> Hello Dip/Dippy Community,
> I am currently working on my master's thesis in mathematics. First of all,
> I want to express my appreciation for the efforts of the Dip team. I'm
> happy to be able to rely on such a comprehensive framework. To implement a
> Dantzig-Wolfe decomposition with custom initial solution and custom solver
> for the subproblems, I used the Dippy package in Python. During the
> process, I stumbled upon some lines of code in Dippy that I wonder about.
> The version that I run is 0.91.5 compiled with MSVC 2015 for Python 2.7.10
> x64 on Windows. Even though I have not yet been able to use 0.92.1, the
> following comments refer to its source.
>
>    1. DippyPythonUtils.cpp, line 274: The method
>    pyColDict_AsPackedArrays calls getVarType on a Python object of type
>    pulp.LpVariable, which I found to have no such method. This produces an
>    error when I try to supply an initial solution. As the result of the call
>    to getVarType is not used anyway, maybe the line can be deleted.
>
Hmm, the result of this call does not seem to be used anywhere, so I don't
see why it can't be deleted. I will do some testing.
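
The conversion in question can be sketched in plain Python (the names here are hypothetical; the real routine lives in C++ inside DippyPythonUtils.cpp): packing a sparse column dictionary into parallel index/value arrays never needs to know the variable's type, which is consistent with the type query being removable.

```python
# Hypothetical sketch of what a pyColDict_AsPackedArrays-style helper does:
# turn a {variable_name: value} dictionary into parallel index and value
# arrays. Note that no variable-type query is needed anywhere.

def col_dict_as_packed_arrays(col_dict, var_index):
    """Pack a sparse column dict into (indices, values) arrays.

    col_dict  : {var_name: value} as supplied from Python
    var_index : {var_name: column_index} mapping known to the solver
    """
    indices, values = [], []
    for name, value in col_dict.items():
        if value != 0.0:            # keep only the sparse support
            indices.append(var_index[name])
            values.append(value)
    return indices, values

idx, val = col_dict_as_packed_arrays({"x1": 1.0, "x2": 0.0, "x3": 2.5},
                                     {"x1": 0, "x2": 1, "x3": 2})
```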

>
>    2. DippySolve.cpp, line 31: The definition of DecompInf leads to a
>    compiler error, as it is also globally defined in DecompAlgo.cpp, line 43.
>    It does not seem to be used in DippySolve.cpp, though.
>
Yes, I see what's happening. It was working fine on Windows in 0.92.0, but
then my fix for OS X broke the Windows build in 0.92.1. Everything works
fine on OS X and Linux because we are using shared libraries. The long-term
fix just involves making DecompInf a parameter rather than a global
variable, but I just haven't gotten around to implementing that yet. For
now, I would suggest sticking with 0.92.0 on Windows. There is a binary
on PyPI as well if you want to avoid the pain of building the whole thing:

https://pypi.python.org/pypi?:action=display&name=coinor.dippy&version=1.92.0


>
>    3. DippyDecompAlgo.cpp, line 47: The assertion in this line does not
>    allow the fallback to the default implementation of
>    DecompAlgo::chooseBranchSet to return false. This yields an error if
>    the column generation is stopped due to lack of improvement and
>    chooseBranchSet is called for an integer solution. In general, the
>    usage of chooseBranchSet in AlpsDecompTreeNode::chooseBranchingObject
>    (AlpsDecompTreeNode, line 379) suggests that the return value false would
>    also be acceptable. However, I could not figure out what happens to a node
>    if this is the case and the status is set to AlpsNodeStatusEvaluated.
>    Does it proceed with column generation? Maybe the failure of
>    chooseBranchSet should be allowed in Dippy and be propagated to DIP.
>
I guess I need a little more detail about what's happening to investigate.
Is this something you're actually observing, or are you just theorizing? As
you observed, it is possible that node processing is halted and the status
of the node gets set to AlpsNodeStatusEvaluated. In this case, it is put
back in the queue with an updated bound and will simply stay there until
it is chosen again, at which time processing will continue.

I haven't looked in detail at when this happens or what exactly happens
when the processing resumes. It could very well be that the reason is what
you say: Dip stops processing because of tail-off, but the solution is
integer. In fact, thinking about it, it may be possible for Dip to get
stuck in a loop where it keeps re-processing the same node over and over,
but since it is forced to do at least one iteration each time, it should
converge eventually, albeit slowly. In practice, I have never seen this.

In any case, I think you may be right that we should remove this assert and
pass the failure in branching through to Dip. However, I have never
observed a problem with this in practice either. I will think about it.
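
The re-queueing behavior described above can be illustrated with a toy priority-queue loop (this is not Dip code, just a model of the argument): a node whose processing halts without producing a branching set goes back into the queue with an updated bound, and because every visit is forced to do at least one iteration of work, the node eventually finishes.

```python
# Toy illustration (not Dip internals) of the convergence argument: a node
# whose processing halts (status ~ AlpsNodeStatusEvaluated) is re-queued
# with an updated bound; each visit must do at least one iteration, so the
# node cannot be revisited forever.
import heapq

def process(node):
    """Fake node processing: each visit does one 'iteration' of work and
    branching finally succeeds once enough iterations have accumulated."""
    node["iters"] += 1
    node["bound"] += 1.0              # bound improves a little each visit
    return node["iters"] >= 3         # branching succeeds on the 3rd visit

queue = [(0.0, 0)]                    # (bound, node id) min-heap
nodes = {0: {"bound": 0.0, "iters": 0}}
visits = 0
while queue:
    _, nid = heapq.heappop(queue)
    visits += 1
    if not process(nodes[nid]):       # processing halted without branching
        heapq.heappush(queue, (nodes[nid]["bound"], nid))  # back in queue
```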

>
>    4. dippy.py, line 852 sqq.: As I understand it, the cost and reduced
>    cost for a column are computed here if relaxed_solver returns a
>    dictionary of the variable values without including those corresponding
>    costs. The value of target, however, is not taken into account in this
>    calculation. This would mean incorrect testing for negative reduced
>    costs, in my opinion. To what extent are the cost values returned by
>    DipProblem.solveRelaxed used by DIP?
>
Dip doesn't really use the values internally. It recalculates everything,
taking target into account. For this reason, I removed the requirement for
the user to calculate these costs, but for backward compatibility, I left
the internal API unchanged and the values are just calculated in dippy.py
if the user doesn't calculate them. Eventually, this could all be removed.
The user doesn't really need to understand what "reduced cost" even means.
All the user should need to know is that they are supposed to be looking
for a solution with value less than "target".

>
>    5. I took from a thread on the mailing list in July that there are
>    cases where DIP solves the subproblem as a MILP. Is it possible to
>    prevent DIP from doing so completely?
>
Yes, it is in 0.92.*. As long as you eventually solve the problem to
optimality and return that status to Dip, then it will not use the MILP
solver. It only does that if you don't return a solution and you don't tell
it that your subproblem solver is exact. This is not true for earlier
versions of Dip, though.
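
The dispatch rule described above can be modeled in a few lines (this is a toy model, not the actual Dip internals, and the status names are hypothetical): the built-in MILP solver is reached only when the user's relaxed solver neither returns a solution nor reports optimality.

```python
# Toy model of the fallback rule: Dip resorts to its built-in MILP solver
# only when the user's relaxed solver returns no solution and does not
# claim optimality. Status names here are illustrative, not Dip's.

OPTIMAL, FEASIBLE, NO_SOLUTION = "optimal", "feasible", "none"

def solve_subproblem(user_solver, fallback_milp_solver):
    status, solutions = user_solver()
    if status == OPTIMAL or solutions:
        return solutions              # user's output trusted; no MILP solve
    return fallback_milp_solver()     # only reached in the 'none' case

calls = {"milp": 0}

def exact_user_solver():
    # An exact subproblem solver: returns an optimal solution plus status.
    return OPTIMAL, [{"x": 1.0}]

def fallback():
    calls["milp"] += 1
    return [{"x": 0.0}]

sols = solve_subproblem(exact_user_solver, fallback)
```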


>
>    6. Is returning an optimal solution with an appropriate status from
>    relaxed_solver enough to do so?
>
Yes, exactly.

>
>    7. Is it valid to return a set of several solutions containing an
>    optimal as well as many non-optimal solutions while specifying the
>    status as optimal?
>
It should be, yes.

>
>    8. The reason why I would like to know is that my subproblems are
>    quite large when formulated as MILPs. Therefore, at the moment I provide
>    DIP with dummy MILPs for the subproblems, which serve the sole purpose
>    of establishing the assignment of variables to their subproblems. They
>    do not make any sense otherwise, though. That is why I would like to
>    make sure that DIP relies on the custom solveRelaxed method only and
>    does not use more information from the MILP formulation than the
>    occurrence of variables.
>
Hmm, that is an interesting approach. I have to admit that the possibility
of doing this didn't really occur to me. I'm a little surprised it works,
since in the design of Dip, we sort of assume that Dip has access to a
description of the subproblem as an MILP internally. I actually had on my
TODO list to think about the possibility of using Dip in a mode where that
assumption is not true. It is certainly possible when using the branch and
cut algorithm (see the TSP example), but I thought it would break in the
branch and price case. What you're doing makes sense, though. We could
actually make it a bit less clunky by allowing the user just to specify the
number of subproblems and the number of variables in each subproblem,
without having to specify a formulation for each block. Internally, we
could then make dummy blocks, as you have done.
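
The workaround being discussed can be sketched as follows (the helper name is hypothetical; in Dippy the analogous step is adding constraints to a block's relaxation): each block gets one vacuous constraint whose only role is to mention the block's variables, fixing the variable-to-block assignment without encoding any real subproblem.

```python
# Hedged sketch of the dummy-block workaround: register which variables
# belong to which block without writing a meaningful subproblem MILP.
# make_dummy_blocks is a hypothetical helper, not part of Dippy.

def make_dummy_blocks(block_vars):
    """block_vars: {block_index: [variable names]}.

    Returns one vacuous 'constraint' per block (all-zero coefficients,
    i.e. 0*x <= 1, always satisfied) whose only role is to mention the
    block's variables and thereby record the variable-to-block map."""
    blocks = {}
    for b, names in block_vars.items():
        blocks[b] = {name: 0.0 for name in names}
    return blocks

blocks = make_dummy_blocks({0: ["x1", "x2"], 1: ["y1"]})
```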

Interesting! Keep me posted on progress. I'd be happy to think about
modifying Dip to support this or helping you to do it :).

Cheers,

Ted