<html>
<head>
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
Hi,<br>
<br>
A few years ago I wrapped the MPI version of MUMPS in an interface
that makes it look like a regular in-process solver. The wrapper
spawns the MPI universe outside of the calling process, and uses
Boost's Interprocess library to send the numbers back and forth
between the main solver process and the MPI workers. It was cool
technically, but the performance was pretty ordinary, whether in
IPOPT or in any other solver. The problem is that the processes keep
spinning up and winding down as control bounces back and forth: the
MPI layer runs, then the solver, then MPI, then the solver, and so
on. It was hard to make everything synchronise, and it needed
thread-yielding calls to work. Performance-wise it was a B-minus at
best. With the
recent versions of Intel Pardiso working well with IPOPT, I stopped
development, because even though Pardiso took more iterations the
wall-clock time was faster.<br>
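<br>
A rough sketch of the solver-process side of that bridge, just to
show the shape of it (the segment and object names here are made up,
and the real wrapper also has to ship the sparsity pattern, MUMPS job
codes and error flags):<br>
<pre>
// Solver-process side: publish the matrix values and the RHS for the
// externally launched MPI universe to pick up.  Illustrative only.
#include <boost/interprocess/managed_shared_memory.hpp>
#include <boost/interprocess/sync/named_mutex.hpp>
#include <boost/interprocess/sync/scoped_lock.hpp>
#include <algorithm>

namespace bip = boost::interprocess;

void publish_values(const double* a, int nnz, const double* rhs, int n)
{
  // One shared segment that the MPI master process opens by the same name.
  bip::managed_shared_memory seg(bip::open_or_create, "mumps_bridge", 1u << 20);
  double* a_sh   = seg.find_or_construct<double>("a")[nnz](0.0);
  double* rhs_sh = seg.find_or_construct<double>("rhs")[n](0.0);

  // Keep the MPI side from reading a half-written buffer.
  bip::named_mutex mtx(bip::open_or_create, "mumps_bridge_mtx");
  bip::scoped_lock<bip::named_mutex> lock(mtx);
  std::copy(a,   a   + nnz, a_sh);
  std::copy(rhs, rhs + n,   rhs_sh);
  // ...then signal the MPI side (named_condition) and wait for the
  // solution to show up in another shared array.
}
</pre>
The MPI side does the mirror image: it opens the same segment, hands
the arrays to MUMPS, and writes the solution back before signalling
the solver process. All that signalling is where the synchronisation
and thread-yielding pain came from.<br>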
<br>
You can wrap all of IPOPT in MPI so everything is in-process. You
would have only the master MPI process make the IPOPT calls, while
all of the ranks take part in MUMPS. This would give you the speed
advantages of MPI MUMPS in one MPI universe that's always up and
running.
It's difficult to link that MPI-IPOPT to any other software though,
because it then runs inside MPI (just like MPI MUMPS, so you're back
where you started...).<br>
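<br>
Very roughly, the shape of that program is below. MyNlp and the
worker protocol are made up, and Ipopt's stock MUMPS interface
doesn't broadcast its factorise/solve calls to the other ranks, so
that coordination is the part you would have to write yourself:<br>
<pre>
#include <mpi.h>
#include "IpIpoptApplication.hpp"
#include "IpTNLP.hpp"
using namespace Ipopt;

int main(int argc, char** argv)
{
  MPI_Init(&argc, &argv);
  int rank = 0;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  if (rank == 0) {
    // Master rank: build the NLP and drive Ipopt exactly as in the serial case.
    SmartPtr<IpoptApplication> app = IpoptApplicationFactory();
    app->Initialize();
    SmartPtr<TNLP> nlp = new MyNlp();   // MyNlp: your TNLP, definition not shown
    app->OptimizeTNLP(nlp);
    // ...then broadcast a "shutdown" command to the worker ranks.
  } else {
    // Worker ranks: loop until told to shut down, receiving the next MUMPS
    // job from rank 0 and calling dmumps_c() with it, so every rank takes
    // part in each analysis/factorisation/solve.
  }

  MPI_Finalize();
  return 0;
}
</pre>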
<br>
The MUMPS developers are working on MUMPS 5.0, which is more or less
a rewrite and will incorporate both OpenMP and MPI. Like all
rewrites, it's behind schedule :-), but the performance figures for
the OpenMP version are pretty good. In the medium to long term,
that's the best
pathway for easy open-source parallelism in IPOPT.<br>
<br>
Cheers,<br>
<br>
Damien<br>
<br>
On 2014-09-30 3:27 AM, Greg Horn wrote:<br>
<blockquote
cite="mid:CAAr-h4sasQjyj0RvhvV8oiRHUxozUu-pWzqui8L-6tWiRsFrig@mail.gmail.com"
type="cite">
<div dir="ltr">Hi all,
<div><br>
</div>
<div>Just to clarify, we are considering making the Debian and
Ubuntu Ipopt packages (apt-get install coinor-libipopt) use
serial MUMPS instead of parallel MUMPS. We don't know of anyone
exploiting the MPI parallelism. So speak up if you use
parallel MUMPS or if you know anyone who does!</div>
</div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Tue, Sep 30, 2014 at 10:20 AM, Tony
Kelman <span dir="ltr"><<a moz-do-not-send="true"
href="mailto:kelman@berkeley.edu" target="_blank">kelman@berkeley.edu</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr">
<div dir="ltr">
<div
style="FONT-SIZE:12pt;FONT-FAMILY:'Calibri';COLOR:#000000">
<div>This is an issue that has come up a few times
before on the list, usually</div>
<div>with respect to distribution/Homebrew packages
wanting to link Ipopt to</div>
<div>the MPI version of Mumps. I would like to get
feedback from anyone on the</div>
<div>list who can provide a counterexample to my
understanding of current</div>
<div>functionality. The question is:</div>
<div> </div>
<div>Has anyone successfully used the MPI parallel
version of MUMPS in</div>
<div>combination with Ipopt? Specifically I’m asking
about the mainline</div>
<div>supported release versions of Ipopt, not the work
that happened several</div>
<div>years ago in the experimental distributed-memory
parallel branch</div>
<div><a moz-do-not-send="true"
href="https://projects.coin-or.org/Ipopt/browser/branches/parallel"
target="_blank">https://projects.coin-or.org/Ipopt/browser/branches/parallel</a>
. I know</div>
<div>it’s possible to use the MPI version of MUMPS
with a single process just</div>
<div>fine, or use MPI to parallelize user function
evaluations, or run separate</div>
<div>completely independent instances of Ipopt solving
different optimization</div>
<div>problems simultaneously. What I’m asking about is
running the MUMPS linear</div>
<div>solver across multiple processes with MPI and
achieving measured parallel</div>
<div>speedup on a single optimization problem.</div>
<div> </div>
<div>I have done this for the multithreaded solvers
WSMP/MA86/MA97/Pardiso,</div>
<div>or to a limited extent via multithreaded BLAS,
but never personally with</div>
<div>MPI MUMPS. I’d like to find out whether it has
been done before, in case</div>
<div>it is possible but just complicated to do. If
you’ve tried to do this but</div>
<div>failed, I'd be interested to hear how far you
got.</div>
<div> </div>
<div>Parts of Ipopt outside of MUMPS are not
MPI-aware, so running multiple</div>
<div>copies of an ipopt executable with mpirun will
probably duplicate all</div>
<div>computations outside of the linear solver. I’m
not actually sure what would</div>
<div>happen if each copy of ipopt thinks it’s solving
an entire optimization</div>
<div>problem but MUMPS is trying to decompose the
linear system over MPI.</div>
<div>I’d be surprised if it worked, but also happy to
be proven wrong here.</div>
<div> </div>
<div>Thanks in advance,</div>
<div>Tony</div>
<div> </div>
</div>
</div>
</div>
<br>
_______________________________________________<br>
Ipopt mailing list<br>
<a moz-do-not-send="true"
href="mailto:Ipopt@list.coin-or.org">Ipopt@list.coin-or.org</a><br>
<a moz-do-not-send="true"
href="http://list.coin-or.org/mailman/listinfo/ipopt"
target="_blank">http://list.coin-or.org/mailman/listinfo/ipopt</a><br>
<br>
</blockquote>
</div>
<br>
</div>
<br>
<br>
<pre wrap="">_______________________________________________
Ipopt mailing list
<a class="moz-txt-link-abbreviated" href="mailto:Ipopt@list.coin-or.org">Ipopt@list.coin-or.org</a>
<a class="moz-txt-link-freetext" href="http://list.coin-or.org/mailman/listinfo/ipopt">http://list.coin-or.org/mailman/listinfo/ipopt</a>
</pre>
</blockquote>
<br>
</body>
</html>