[Coin-discuss] SYMPHONY speedup

Ted Ralphs ted at lehigh.edu
Tue Apr 28 03:05:37 EDT 2009


Since these questions come up frequently, I'd like to add a little
detail to what's already been said. First, I'm not sure it's even
possible to make the general statement that the "algorithmic quality"
of a solver is more important than parallelization (if one can even
define a relevant notion of "algorithmic quality"). With regard to the
potential gains from parallelization, there are some limited things
you can say if you have some knowledge of the properties of the
problems you would like to solve. For classes of problems that tend to
generate large, well-balanced trees (perhaps due to symmetry, for
instance) and for which the processing time of an individual node is
relatively short, it may be possible to reliably achieve near-linear
speed-up (depending on the number of processors). For other classes,
such as difficult set-partitioning models, it can be virtually
impossible to achieve good speed-up because processing the first few
levels of the tree (a largely sequential operation) is simply too
time-consuming. Chances are good that an individual user's experience
will be somewhere in between these two extremes.
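
To make the sequential-bottleneck point a bit more concrete, here is a
small illustration (not specific to SYMPHONY, and with a made-up
sequential fraction) of the Amdahl-style bound on speedup when a
fraction s of the total work cannot be parallelized:

/* Illustration of the Amdahl-style bound: if a fraction s of the work
 * (e.g., processing the first few levels of the tree) is inherently
 * sequential, then speedup on p processors is at most 1/(s + (1-s)/p). */
#include <stdio.h>

int main(void)
{
    double s = 0.20;                /* hypothetical sequential fraction */
    int    procs[] = {1, 4, 16, 64};
    int    i;

    for (i = 0; i < 4; i++) {
        int p = procs[i];
        printf("p = %3d   ideal speedup = %3d   Amdahl bound = %.2f\n",
               p, p, 1.0 / (s + (1.0 - s) / p));
    }
    return 0;
}

With a 20% sequential portion, even 64 processors cannot get you past a
speedup of about 4.7, which is exactly the kind of behavior you see on
the difficult set-partitioning models mentioned above.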

Another factor that can be very important is the architecture on which
execution takes place. SYMPHONY can be built for both
distributed-memory and shared-memory execution. The performance is
likely to be very different in these two cases. In shared-memory mode
on an SMP, SYMPHONY's parallel performance will likely be better than
on a distributed-memory architecture with the same number of
processors. Execution on a multi-core architecture will be less
efficient than on a true SMP due to additional resource limitations.
Of course, the number of processors is a big factor as well.
SYMPHONY's master-slave implementation probably will not scale well
beyond a few dozen processors, whereas BLIS, the parallel MILP solver
available as part of the CHiPPS project
(https://projects.coin-or.org/CHiPPS), is targeted more towards
massively parallel architectures.

On top of that is the question of "algorithmic quality" raised by
Tobias. This phrase is perhaps a little misleading, since quality has
to be judged with respect to a particular target test bed. If the
target test bed is unstructured, standard MILP instances solved one at
a time on a single processor, then it is true that SYMPHONY does not
have as many bells and whistles as some of the other solvers
mentioned by Tobias. SYMPHONY's strength is in solving difficult
combinatorial problems for which algorithmic customization is
important. It also has some specific features that other solvers lack,
such as basic warm-starting and sensitivity analysis capabilities for
situations in which a sequence of related MILPs must be solved, as well
as the ability to handle multiple objectives.
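
To give a feel for what that looks like in practice, here is a rough
sketch of the warm-starting workflow using SYMPHONY's C API. The
function names below are from symphony.h, but please check the header
and the user's manual for your version for the exact signatures and
command-line options; depending on the version, you may also need to
tell SYMPHONY to keep the warm-start information via a parameter.

/* Rough sketch: solve an instance, perturb it slightly, and re-solve
 * from the stored warm-start information instead of from scratch.     */
#include "symphony.h"

int main(int argc, char **argv)
{
    sym_environment *env = sym_open_environment();

    sym_parse_command_line(env, argc, argv);  /* e.g. "-F model.mps"    */
    sym_load_problem(env);                    /* read the instance      */
    sym_solve(env);                           /* first solve, cold      */

    /* Change one objective coefficient (index and value are made up),
     * then re-solve starting from the warm-start information.          */
    sym_set_obj_coeff(env, 0, 2.5);
    sym_warm_solve(env);

    sym_close_environment(env);
    return 0;
}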

Finally, there is parameter tuning. Most solvers endeavor to use
default parameter settings that work well for as wide a range of
problem types as possible. It's very likely, however, that you can see
performance gains simply by tuning the parameter settings of a
particular solver to work well for the instances you are interested
in. In this way, even "off-the-shelf" testing is not necessarily
representative of a solver's true capabilities.
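
With SYMPHONY specifically, this kind of tuning can be done either
through a parameter file or programmatically. A minimal sketch via the
C API might look like the following; I believe the parameter names
"verbosity" and "time_limit" are correct, but check the user's manual
for your version for the full list of tunable parameters.

/* Minimal sketch: set a couple of run-time parameters before solving. */
#include "symphony.h"

int main(int argc, char **argv)
{
    sym_environment *env = sym_open_environment();

    sym_parse_command_line(env, argc, argv);
    sym_set_int_param(env, "verbosity", 1);        /* moderate logging  */
    sym_set_dbl_param(env, "time_limit", 3600.0);  /* one-hour limit    */

    sym_load_problem(env);
    sym_solve(env);

    sym_close_environment(env);
    return 0;
}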

Hopefully, this all gives you some idea that the situation is not very
simple. Each solver has its strengths and one must test in
representative operational situations (after appropriate parameter
tuning) to truly determine the appropriateness of a given solver.

If you are interested in reading further about parallel solution of
generic MILPs, we have just finished a tech report detailing our most
recent experiences with the BLIS parallel MILP solver mentioned above.
In this report, we try to paint as accurate a picture as possible of
what one can expect when trying to use a parallel MILP solver off the
shelf. The picture is anything but clear. The report will be available
soon, but I can send you an advance copy if you like.

I hope this helps you understand the difficulty in giving a clear
answer to your questions.

Cheers,

Ted

On Wed, Apr 22, 2009 at 6:16 PM, Tobias Achterberg <achterberg at zib.de> wrote:
> Hi Jack,
>
> for MILP, your question cannot be answered (LP is a different story). Parallel speedup and
> performance differences across different solvers depend on the model.
>
> For example, there are models that can be solved by solver A in less than a second but
> that take forever by another solver B, just because solver B is lacking the important
> feature (like a cutting plane or a presolving reduction) to solve this particular model.
> And there may be other models for which the situation is reversed. Finally, there are
> models that can be solved in roughly the same time by all branch-and-cut MILP solvers.
>
> Overall, my experience is that the algorithmic quality of a solver is usually much more
> important than parallelization. This means that solvers like CBC, SCIP, Gurobi,
> XPress, or CPLEX will often be faster than Symphony, even if you use 1000 CPUs for Symphony.
> But, as I said, some models do not require the sophisticated machinery that these solvers
> provide and parallelize well. In this setting, Symphony can be really great if you have
> access to a massively parallel computing environment.
>
> So, the short answer is: you just need to test it for your models. You cannot say in advance.
>
>
> Cheers,
>
> Tobias
>
> Jack Bryan wrote:
>> Hi,
>>
>> I am a new user of SYMPHONY.
>>
>> I need to use it to solve very large mixed integer linear programming
>> models.
>>
>> So, I need to use PVM for SYMPHONY.
>>
>> But I do not know how much speedup I can get from the parallel SYMPHONY.
>>
>> For example, if it takes 100 seconds to solve a MILP model in CPLEX, how
>> long would it take to solve it in SYMPHONY?
>>
>>
>> thanks
>>
>> Jack
>>
>> April 21, 2009
>>
>
>



-- 
Dr. Ted Ralphs
Associate Professor, Lehigh University (permanent)
Visiting Professor, Monash University (current)
(610) 628-1280
ted 'at' lehigh 'dot' edu
coral.ie.lehigh.edu/~ted

Sent from Caulfield East, Victoria, Australia




