[Ipopt] Request for comment: any successful uses of MPI parallel MUMPS with Ipopt?

Greg Horn gregmainland at gmail.com
Wed Oct 1 07:51:09 EDT 2014

Hi Miles,

I'm glad to hear this, and I agree that MUMPS+MPI users can compile on
their own.

I'm starting the process to switch to sequential MUMPS (
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=763612). If anyone is
opposed to this, please speak up soon!
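For anyone unsure which variant their installed Ipopt actually pulls in, the shared-library dependencies give it away: MUMPS's sequential build links against the libmpiseq stub, while a real MPI build links an MPI library. A minimal sketch (the ldd.txt contents below are illustrative stand-ins for real `ldd /usr/lib/libipopt.so` output, and the paths are hypothetical):

```shell
# Illustrative ldd output; on a real system you would run e.g.
#   ldd /usr/lib/libipopt.so > ldd.txt
cat > ldd.txt <<'EOF'
	libdmumps.so => /usr/lib/libdmumps.so
	libmpiseq.so => /usr/lib/libmpiseq.so
EOF

# libmpiseq is the do-nothing MPI stub shipped with sequential MUMPS,
# so its presence indicates the serial build.
if grep -q 'libmpiseq' ldd.txt; then
  echo "sequential MUMPS (libmpiseq stub)"
elif grep -q 'libmpi' ldd.txt; then
  echo "MPI MUMPS"
fi
```

With the sample input above this reports the sequential build, since libmpiseq appears in the dependency list.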

On Tue, Sep 30, 2014 at 5:30 PM, Miles Lubin <miles.lubin at gmail.com> wrote:

> Hi Greg,
> As the maintainer of Clp, Cbc, and Symphony in Debian, my vote is for
> moving to serial MUMPS. The benefit to the majority of users (not needing
> to deal with MPI-related issues, which can make the Ipopt package
> unusable) outweighs any dubious gains that might be achieved by using
> parallel MUMPS (as Damien's experience shows). Anyone who's experimenting
> with MUMPS+MPI can surely compile Ipopt on their own.
> Miles
> On Tue, Sep 30, 2014 at 5:27 AM, Greg Horn <gregmainland at gmail.com> wrote:
>> Hi all,
>> Just to clarify, we are considering making the debian and ubuntu ipopt
>> packages (apt-get install coinor-libipopt) use serial MUMPS instead of
>> parallel. We don't know of anyone exploiting the MPI parallelism. So speak
>> up if you use parallel MUMPS or if you know anyone who does!
>> On Tue, Sep 30, 2014 at 10:20 AM, Tony Kelman <kelman at berkeley.edu>
>> wrote:
>>> This is an issue that has come up a few times before on the list, usually
>>> with respect to distribution/Homebrew packages wanting to link Ipopt to
>>> the MPI version of Mumps. I would like to get feedback from anyone on the
>>> list who can provide a counterexample to my understanding of current
>>> functionality. The question is:
>>> Has anyone successfully used the MPI parallel version of MUMPS in
>>> combination with Ipopt? Specifically I’m asking about the mainline
>>> supported release versions of Ipopt, not the work that happened several
>>> years ago in the experimental distributed-memory parallel branch
>>> https://projects.coin-or.org/Ipopt/browser/branches/parallel . I know
>>> it’s possible to use the MPI version of MUMPS with a single process just
>>> fine, or use MPI to parallelize user function evaluations, or run
>>> separate
>>> completely independent instances of Ipopt solving different optimization
>>> problems simultaneously. What I’m asking about is running the MUMPS
>>> linear
>>> solver across multiple processes with MPI and achieving measured parallel
>>> speedup on a single optimization problem.
>>> I have done this for the multithreaded solvers WSMP/MA86/MA97/Pardiso, and
>>> to a limited extent via multithreaded BLAS, but never personally with
>>> MPI MUMPS. I’d like to find out whether it has been done before, in case
>>> it is possible but just complicated to do. If you’ve tried to do this but
>>> failed, I'd be interested to hear how far you got.
>>> Parts of Ipopt outside of MUMPS are not MPI-aware, so running multiple
>>> copies of an ipopt executable with mpirun will probably duplicate all
>>> computations outside of the linear solver. I’m not actually sure what
>>> would
>>> happen if each copy of ipopt thinks it’s solving an entire optimization
>>> problem but MUMPS is trying to decompose the linear system over MPI.
>>> I’d be surprised if it worked, but also happy to be proven wrong here.
>>> Thanks in advance,
>>> Tony
>>> _______________________________________________
>>> Ipopt mailing list
>>> Ipopt at list.coin-or.org
>>> http://list.coin-or.org/mailman/listinfo/ipopt
