[SciPy-User] Use of MPI in extension modules
Wed Nov 11 12:23:42 CST 2009
This is probably a better topic for the IPython users list:
> I'm working on a signal & image processing library that uses MPI
> internally. I'd like to provide a Python interface to it, so I can
> integrate it into SciPy. With 'normal' Python this all works nicely.
> Just recently I have started to consider parallelism, i.e. I want to use
> the library's internal parallelism, by running it with ipython in parallel.
> My assumption was that all the engines started via 'ipcluster mpiexec
> ...' would already have MPI_Init called, and thus my extension modules
> would merely share the global MPI state with the Python interpreter.
> That doesn't seem to be the case, as I either see all my module
> instances report rank 0, or, if I don't call MPI_Init, get a failure on
> the first MPI call I do.
You do need to tell the IPython engines how they should call MPI_Init.
The best way of doing this is to install mpi4py and then call ipcluster
with the --mpi=mpi4py option. Once you do this, you can simply import
your extension module and use it - you won't have to call MPI_Init
again. The reason that IPython needs to be told how MPI_Init is called
is that we try to make sure that the engine ids match the MPI ranks.
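As a rough sketch (the exact flags and the 0.10-era client API shown in the comments are assumptions; check your IPython version's docs, and the engine count is a placeholder):

```shell
# Start a controller plus 4 MPI-enabled engines. The --mpi=mpi4py
# option makes each engine import mpi4py at startup, which calls
# MPI_Init before any user code runs.
ipcluster mpiexec -n 4 --mpi=mpi4py

# Each engine then already has a live MPI environment, so from a
# client you can verify the ranks without calling MPI_Init yourself:
#   from IPython.kernel import client
#   mec = client.MultiEngineClient()
#   mec.execute('from mpi4py import MPI; rank = MPI.COMM_WORLD.Get_rank()')
#   mec.pull('rank')
```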
But, one question: why not use mpi4py for your MPI calls? If you really
need low-level C access, mpi4py works very well with Cython, and all of
that would be much more pleasant than writing raw C/MPI code. The key
is that mpi4py handles all the subtleties of the different MPI
platforms and OSs. Doing that yourself is quite painful.
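If the extension module should also work when the host process has not
set up MPI, a common defensive pattern (a sketch, not from the original
post; the helper name is made up) is to ask MPI whether it has already
been initialized before calling MPI_Init yourself:

```c
#include <mpi.h>

/* Call this from the module's initialization code. If the host
 * process (e.g. an mpi4py-enabled IPython engine) has already
 * called MPI_Init, this is a no-op and the module simply shares
 * the global MPI state; otherwise it initializes MPI itself. */
static void ensure_mpi_initialized(int *argc, char ***argv)
{
    int initialized = 0;
    MPI_Initialized(&initialized);
    if (!initialized)
        MPI_Init(argc, argv);
}
```

Calling MPI_Init twice in one process is an error in MPI, which is why
the MPI_Initialized check matters when you don't control the host.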
> Can anybody help? Do I need to initialize MPI myself in my extension
> module?
> Any pointers are highly appreciated.
> ...I still have a suitcase in Berlin...