[Numpy-discussion] multiprocessing shared arrays and numpy

Francesc Alted faltet@pytables....
Wed Mar 3 09:35:51 CST 2010


On Wednesday 03 March 2010 15:31:29, Jesper Larsen wrote:
> Hi people,
> 
> I was wondering about the status of using the standard library
> multiprocessing module with numpy. I found a cookbook example last
> updated one year ago which states that:
> 
> "This page was obsolete as multiprocessing's internals have changed.
> More information will come shortly; a link to this page will then be
> added back to the Cookbook."
> 
> http://www.scipy.org/Cookbook/multiprocessing
> 
> I also found the code that used to be on this page in the cookbook but
> it does not work any more. So my question is:
> 
> Is it possible to use numpy arrays as shared arrays in an application
> using multiprocessing and how do you do it?

Yes, it is pretty easy if your problem can be vectorised.  Just split your 
arrays into chunks and assign the computation of each chunk to a different 
process.  I'm attaching code that does this for computing a polynomial over a 
certain range.  Here is the output (for a dual-core processor):

Serial computation...
10000000 0
Time elapsed in serial computation: 3.438
3333333 0
3333334 1
3333333 2
Time elapsed in parallel computation: 2.271 with 3 threads
Speed-up: 1.51x
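Since the attached script was scrubbed from the archive, here is a minimal
sketch of the approach described above (the polynomial, the function names,
and the chunking scheme are illustrative assumptions, not the original code):

```python
import numpy as np
from multiprocessing import Pool


def compute_chunk(args):
    # Evaluate a sample polynomial, 0.25*x**3 + 0.75*x**2 - 1.5*x - 2,
    # on one index range of the sampled interval (polynomial chosen
    # for illustration; the original attachment is not available).
    start, step, i0, i1 = args
    x = start + step * np.arange(i0, i1)
    return ((0.25 * x + 0.75) * x - 1.5) * x - 2


def parallel_poly(start, stop, step, nprocs):
    # Split the n sample points into nprocs nearly equal index ranges,
    # so chunk boundaries are exact regardless of the floating-point step,
    # then evaluate each chunk in a separate worker process.
    n = int(round((stop - start) / step))
    bounds = [n * i // nprocs for i in range(nprocs + 1)]
    chunks = [(start, step, bounds[i], bounds[i + 1]) for i in range(nprocs)]
    with Pool(nprocs) as pool:
        return np.concatenate(pool.map(compute_chunk, chunks))


if __name__ == "__main__":
    y = parallel_poly(-1.0, 1.0, 1e-5, 3)
    print(len(y))
```

Note that each worker here returns its chunk and the parent concatenates the
results, so the arrays are copied between processes rather than shared; for
true shared memory you would combine this with `multiprocessing.Array` (or,
on modern Pythons, `multiprocessing.shared_memory`) as the cookbook page
discussed.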


-- 
Francesc Alted
-------------- next part --------------
A non-text attachment was scrubbed...
Name: poly-mp.py
Type: text/x-python
Size: 989 bytes
Desc: not available
Url : http://mail.scipy.org/pipermail/numpy-discussion/attachments/20100303/8d7d8510/attachment.py 