[SciPy-user] Numpy in parallel

David Warde-Farley dwf@cs.toronto....
Thu Apr 23 18:57:07 CDT 2009

On 23-Apr-09, at 4:26 AM, Ramon Crehuet wrote:

> My first question is: is this a good choice? Is this Numpy optimized
> for my architecture, or could I do much better compiling ATLAS and
> Numpy from scratch? Or is Intel MKL better?

Almost certainly you could do better than you are doing currently by
compiling your own ATLAS. I don't know about the OpenSUSE packages, but
distribution binaries usually can't be optimized very much, since they
can't count on everyone having certain CPU features.

The Intel MKL, while it can (apparently) be used on AMD chips, is
mainly designed to optimize the heck out of *Intel* chips, and from
what I understand it outperforms ATLAS on that hardware. I would not
expect it to do much better than ATLAS (if at all) on non-Intel chips.

> My second question is: I have to create, many times, a large array of
> normally distributed vectors (the stochastic term in a Langevin
> dynamics run). Do you know if numpy.random.randn will be able to use
> multi-cores?

Not on its own, I don't think, but since it's C code it should release
the GIL. This means that you can use the Python threading module, and
the different threads (which are full POSIX threads on Linux) can run
simultaneously on your multiple cores: http://docs.python.org/library/threading.html
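A minimal sketch of that idea, assuming the random-number generation
does release the GIL on your build: preallocate the array and have each
thread fill its own slice with its own RandomState (sharing one
generator across threads would need a lock and serialize everything).

```python
import threading
import numpy as np

def fill_slice(out, start, stop, seed):
    # Each thread gets its own RandomState so the generators don't
    # share state; a shared generator would need locking.
    rng = np.random.RandomState(seed)
    out[start:stop] = rng.randn(stop - start, out.shape[1])

n, dim, nthreads = 100000, 3, 4
noise = np.empty((n, dim))
chunk = n // nthreads
threads = []
for i in range(nthreads):
    start = i * chunk
    stop = n if i == nthreads - 1 else (i + 1) * chunk
    t = threading.Thread(target=fill_slice, args=(noise, start, stop, i))
    threads.append(t)
    t.start()
for t in threads:
    t.join()
# noise is now a (100000, 3) array of standard-normal samples
```

Whether you actually see a speedup depends on how much time is spent
inside the GIL-free C code versus in Python-level bookkeeping.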

The multiprocessing module in Python 2.6 (or available for Python 2.5
if you install it from PyPI) lets you do the same thing, but with full
processes rather than threads, so the GIL is not an issue at all.
