[SciPy-User] Use of MPI in extension modules
Wed Nov 11 16:16:43 CST 2009
Stefan Seefeld wrote:
> the library's internal parallelism, by running it with ipython in parallel.
> My assumption was that all the engines started via 'ipcluster mpiexec
> ...' would already have MPI_Init called, and thus, my extension modules
> would merely share the global MPI state with the Python interpreter.
I don't know IPython, but I use MPI now and then.
You can, for example, spawn four processes of an executable with a command like:
$ mpiexec -n 4 executable
Each process spawned by mpiexec must call MPI_Init exactly once, before any
other MPI call. The call to MPI_Init is global to the process; it does
not matter that Python extensions are DLLs. You need to call MPI_Init
exactly once in each MPI-spawned process, and it does not matter how:
- using ctypes
- in an extension module
- in C code embedding a Python interpreter
- in a modified Python interpreter
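Of the options above, the ctypes route needs no compilation. A minimal sketch, assuming a shared MPI library is installed and findable (the library name "mpi" and its availability are assumptions; communicator handle values vary by MPI implementation, so this only initializes and finalizes):

```python
import ctypes
import ctypes.util

# Locate the MPI shared library; the name varies by platform and
# implementation ("mpi", "mpich", "msmpi"), so this may return None.
libname = ctypes.util.find_library("mpi")
if libname is None:
    print("no MPI library found")
else:
    # RTLD_GLOBAL makes the MPI symbols visible to extension modules
    # loaded later in the same process.
    mpi = ctypes.CDLL(libname, mode=ctypes.RTLD_GLOBAL)
    flag = ctypes.c_int(0)
    # Has something in this process already called MPI_Init?
    mpi.MPI_Initialized(ctypes.byref(flag))
    if not flag.value:
        # NULL argc/argv is legal for MPI_Init.
        mpi.MPI_Init(None, None)
    # ... extension modules can now make MPI calls ...
    mpi.MPI_Finalize()
```

Run under mpiexec, every spawned process executes this once, which is exactly the "call MPI_Init once per process" requirement; run standalone, it behaves as a single-process MPI job.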
If you only get rank 0 reported, it means you spawned just one process.
That can happen if you forget to specify how many processes you want
(the -n option) in the call to mpiexec.