[SciPy-User] Speeding things up - how to use more than one computer core
Mon Apr 8 11:55:58 CDT 2013
On Mon, Apr 08, 2013 at 07:44:20AM -0500, J. David Lee wrote:
> I've used shared memory arrays in the past, and it's actually quite easy. They
> can be created using the multiprocessing module in a couple of lines,
> import ctypes
> import multiprocessing
> import numpy as np
>
> mp_arr = multiprocessing.Array(ctypes.c_double, 100)  # lock-protected shared buffer
> arr = np.frombuffer(mp_arr.get_obj())                 # NumPy view of it, no copy
I believe that this does synchronization by message passing. Look at the
corresponding multiprocessing code if you want to convince yourself. Thus
you are not in fact sharing the memory between processes.
> I've wondered in the past why creating a shared memory array isn't a single
> line of code using numpy, as it can be so useful.
Because there is no easy cross-platform way of doing it.
> If you can, you might want to consider writing your code in a C module and
> using openMP if it works for you. I've had very good luck with that, and it's
> really easy to use.
In certain cases, I would definitely agree with you here. Recent versions
of Cython make that really easy.
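For concreteness, a sketch of what "easy" looks like with Cython's OpenMP support: `cython.parallel.prange` releases the GIL and spreads the loop over cores, and the in-place sum is recognized as a reduction. The function name `parallel_sum` is made up here, and the module must be compiled with OpenMP flags (e.g. `-fopenmp` with gcc):

```cython
# cython: boundscheck=False, wraparound=False
from cython.parallel import prange

def parallel_sum(double[:] x):
    cdef double total = 0.0
    cdef Py_ssize_t i
    # prange runs the loop body without the GIL, across OpenMP threads;
    # "total += ..." is treated as a reduction automatically.
    for i in prange(x.shape[0], nogil=True):
        total += x[i]
    return total
```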