[Ipopt] MPI Issue in IPOPT 3.11.7 / Mumps
Damien
damien at khubla.com
Fri Feb 21 00:50:37 EST 2014
Apologies for my previous reply about sequential; I didn't read it
correctly. What you're seeing is the result of MPI_Init() being called
twice without an MPI_Finalize() between the calls. I don't have the
Ipopt code in front of me right now, but if there's an exit path from
the parallel MPI MUMPS that doesn't call MPI_Finalize() after a
completed Ipopt solve, then when you warm-start Ipopt, MPI_Init() gets
called again and MPI aborts.
Making Ipopt fully MPI-compliant at the top level of Ipopt is not the
right software engineering answer. MUMPS is the MPI solver, so the
responsibility should live in the MUMPS wrapper. Whether or not to keep
MPI alive should be a MUMPS-specific option.
To check if MPI is initialised or not:
int is_initialized = 0;
MPI_Initialized(&is_initialized);
if (!is_initialized)
{
    /* only start MPI if nobody has done so yet */
    MPI_Init(&argc, &argv);
}
and to check if it's finalised:
int is_finalized = 0;
MPI_Finalized(&is_finalized);
if (!is_finalized)
{
    /* don't finalize twice */
    MPI_Finalize();
}
Just putting these guards in will likely fix most of the problems, but
if there's a code path that finishes a solve without calling
MPI_Finalize(), that's a bug that needs fixing in its own right.
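For illustration only, here's a rough sketch of how those two guards
might be bundled into small helpers that the MUMPS wrapper could call
when it brings the linear solver up and tears it down. The helper names
are mine, not anything that exists in Ipopt or MUMPS today:

#include <mpi.h>

/* Hypothetical guard helpers for the MUMPS wrapper (names illustrative). */
static void mumps_mpi_startup(int *argc, char ***argv)
{
    int is_initialized = 0;
    MPI_Initialized(&is_initialized);
    if (!is_initialized)
        MPI_Init(argc, argv);   /* start MPI only if nobody has yet */
}

static void mumps_mpi_shutdown(void)
{
    int is_finalized = 0;
    MPI_Finalized(&is_finalized);
    if (!is_finalized)
        MPI_Finalize();         /* finalize at most once per process */
}

Whether the shutdown helper runs after every solve or only when the
whole Ipopt solver object goes away is exactly the keep-MPI-alive
policy question above.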
Also note that the MUMPS fake MPI for sequential builds (in the MUMPS
libseq directory) doesn't implement MPI_Initialized or MPI_Finalized,
so those stubs need to be added before the code above will compile.
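For reference, those stubs could look something like the following.
This is only a sketch; the exact prototypes, return conventions and
header declarations should be copied from the existing entries in
libseq's mpi.h / mpic.c rather than from here:

/* Sketch of no-op stubs for the MUMPS fake sequential MPI. */
int MPI_Initialized(int *flag)
{
    *flag = 1;   /* report "already initialised" so the guarded
                    MPI_Init above is skipped (a no-op here anyway) */
    return 0;
}

int MPI_Finalized(int *flag)
{
    *flag = 0;   /* report "not finalised"; the fake MPI_Finalize is
                    a harmless no-op if it does get called */
    return 0;
}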
Cheers,
Damien
On 2014-02-20 9:20 PM, Tony Kelman wrote:
> Thanks Dominique!
>
> As I mentioned in the discussion that brought this up on Github
> (https://github.com/Homebrew/homebrew-science/issues/656), I think one
> possible approach might be to move the MPI_Init() and MPI_Finalize()
> out of the MumpsSolverInterface constructor/destructors, and put them
> in the Ipopt solver object's constructor/destructor instead. We'd need
> to do some ifdef'ing to skip calling MPI_* if we don't have access to
> the commonly-used fake MPI headers from sequential Mumps.
>
> However I'm not so sure that would completely solve the problem, and
> I wanted to get opinions from others here who may have better ideas
> or have come across this before.
>
> -Tony
>
> -----Original Message-----
> From: Dominique Orban
> Sent: Thursday, February 20, 2014 8:04 PM
> To: ipopt at list.coin-or.org
> Subject: [Ipopt] MPI Issue in IPOPT 3.11.7 / Mumps
>
> Hi there,
>
> Building IPOPT 3.11.7 with clang/clang++ or gcc/g++ on OSX 10.8.5
> against MUMPS 4.10.0 and OpenMPI 1.7.4 results in the C test hs071_c
> hanging forever or crashing.
>
> There appears to be a problem with the MPI setup in the Mumps
> interface. The C test solves the problem twice (the second solve with
> warm starting). But this means that MPI_Init() is called twice and
> that leads to an error: https://gist.github.com/dpo/9120810
>
> A user suggested that IPOPT shouldn't call MPI_Init() at all and
> proposed a patch
> (https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=736842), but
> applying this patch results in a situation where MPI_Init() is never
> called:
>
> The MPI_Comm_f2c() function was called before MPI_INIT was invoked.
>
> My knowledge of both C++ and MPI is extremely limited. Somehow,
> MPI_Init() is called again when re-solving and/or MPI_Finalize() is
> never called. Or something else is going on... I got in touch with the
> Mumps guys to see if they have any advice.
>
> Cheers,
>
> Dominique
>
>
> _______________________________________________
> Ipopt mailing list
> Ipopt at list.coin-or.org
> http://list.coin-or.org/mailman/listinfo/ipopt