[SciPy-User] [mpi4py] MPI, threading, and the GIL
Sat Sep 10 16:46:32 CDT 2011
On 09.09.2011 21:45, Aron Ahmadia wrote:
> Hey Matt,
> More specifically, our iterative algorithm needs to send data from
> rank N to rank N+1, but the rank N+1 processor doesn't need this data
> immediately - it has to do a few other things before it needs it. For
> each MPI process, I have three threads: one thread for computations,
> one thread for doing MPI sends, and one thread for doing MPI receives.
> This is not idiomatic MPI. You can do the same thing with a single
> thread (and avoid GIL issues) by posting non-blocking sends and
> receives (MPI_Isend/MPI_Irecv) when you have the data to send and then
> issuing a 'wait' when you need the data to proceed on the receiving end.
Idiomatic MPI or not, threads and blocking I/O are almost always easier
to work with than asynchronous I/O. An MPI wrapper for Python should
release the GIL, so that blocking I/O calls in different threads can be
multiplexed.
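The threaded design with blocking calls can be sketched in pure Python. Here `queue.Queue` is only a stand-in for blocking MPI_Send/MPI_Recv; the point is that because blocking calls release the GIL, the compute, send, and receive threads multiplex naturally:

```python
# Stand-in sketch of the threaded design: a compute thread and a sender
# thread, all using *blocking* calls. queue.Queue plays the role of a
# blocking MPI_Send/MPI_Recv pair (its get/put release the GIL while
# waiting, as a well-behaved MPI wrapper should).
import queue
import threading

send_q = queue.Queue()   # compute thread -> sender thread
recv_q = queue.Queue()   # "network" -> compute thread

def sender():
    while True:
        item = send_q.get()      # blocks until compute produces data
        if item is None:         # shutdown sentinel
            break
        # a real sender would call a blocking MPI send to rank N+1 here;
        # for the demo we loop the data straight back, doubled
        recv_q.put(item * 2)

results = []

def compute():
    for i in range(3):
        send_q.put(i)            # hand the data off without waiting
        # ... other work happens here before the result is needed ...
        results.append(recv_q.get())   # block only when the value is required
    send_q.put(None)             # stop the sender thread

t = threading.Thread(target=sender)
t.start()
compute()
t.join()
print(results)   # [0, 2, 4]
```

The compute thread never spins or polls: each thread simply blocks where it must, and the GIL changes hands at every blocking call.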
If the MPI implementation does not have re-entrant MPI_Send and MPI_Recv,
one might argue whether (1) the GIL should be kept held across the call
or (2) an explicit lock should be required in the Python code. I would
probably prefer the latter (2), to avoid tying up the interpreter for
other threads.
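Option (2) might look like the following minimal sketch. `fake_mpi_send` is a hypothetical stand-in for a non-thread-safe MPI_Send binding; only access to the MPI library is serialized, while other Python threads keep running:

```python
# Sketch of option (2): guard a non-re-entrant MPI library with an
# explicit lock on the Python side, rather than holding the GIL across
# the whole call.
import threading

mpi_lock = threading.Lock()
log = []

def fake_mpi_send(data):
    # hypothetical stand-in: a real binding would release the GIL here
    # and block inside the C library until the send completes
    log.append(data)

def worker(data):
    with mpi_lock:          # serialize entry into the MPI library...
        fake_mpi_send(data)
    # ...while unrelated Python threads are free to run meanwhile

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(log))   # [0, 1, 2, 3]
```

The cost of the lock falls only on threads that actually touch MPI, instead of on the whole interpreter.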