[SciPy-User] [mpi4py] MPI, threading, and the GIL
Mon Sep 12 10:33:08 CDT 2011
I think the problem was not the MPI wrapper, but that other parts of
the code were hogging the GIL, so my MPI calls did not run when I
expected them to.
Regardless, Aron's suggestion was a good one: I now post receive
requests early and simply issue a 'wait' when I need the data. No
more threading, just plain MPI calls. As Lisandro pointed
out, this probably worked well for me since I am using an MPI library
that has a progress thread.
On Sat, Sep 10, 2011 at 5:46 PM, Sturla Molden <email@example.com> wrote:
> On 09.09.2011 21:45, Aron Ahmadia wrote:
>> Hey Matt,
>> This is not idiomatic MPI. You can do the same thing with a single thread
>> (and avoid GIL issues) by posting non-blocking sends and receives
>> (MPI_Isend/MPI_Irecv) when you have the data to send and then issuing a
>> 'wait' when you need the data to proceed on the receiving end.
> Idiomatic MPI or not, threads and blocking I/O are almost always easier to
> work with than asynchronous I/O. An MPI wrapper for Python should release
> the GIL to allow multiplexing of blocking I/O calls.
> If the MPI implementation does not have re-entrant MPI_Send and MPI_Recv,
> one might argue whether (1) the GIL should be held or (2) an explicit
> lock should be required in the Python code. I would probably prefer the
> latter (2), to avoid tying up the interpreter for other pending tasks.
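For reference, option (2) above, guarding calls into a non-re-entrant MPI library with an explicit lock, could look like this (a minimal sketch; the comm object is assumed to have mpi4py-style send/recv methods, and the lock and helper names are illustrative, not part of any library):

```python
import threading

# One lock serializing all entry into the MPI library (hypothetical
# convention, not an mpi4py API).
mpi_lock = threading.Lock()

def locked_send(comm, data, dest, tag=0):
    # Only one thread is inside the MPI library at a time; because the
    # wrapper releases the GIL during the call, other Python threads
    # keep running meanwhile.
    with mpi_lock:
        comm.send(data, dest=dest, tag=tag)

def locked_recv(comm, source, tag=0):
    # Caveat: a blocking receive holds the lock, so other MPI traffic
    # waits -- the same serialization the non-re-entrant library
    # requires, but without stalling the interpreter.
    with mpi_lock:
        return comm.recv(source=source, tag=tag)
```

The trade-off versus keeping the GIL is exactly the one described above: MPI calls are still serialized, but unrelated Python threads are not blocked for the duration of the call.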