[Numpy-discussion] Numpy and OpenMP

Anne Archibald peridot.faceted@gmail....
Sat Mar 15 18:33:51 CDT 2008

On 15/03/2008, Damian Eads <eads@soe.ucsc.edu> wrote:
> Robert Kern wrote:
>  > Eric Jones tried to use multithreading to split the computation of
>  > ufuncs across CPUs. Ultimately, the overhead of locking and unlocking
>  > made it prohibitive for medium-sized arrays and only somewhat
>  > disappointing improvements in performance for quite large arrays. I'm
>  > not familiar enough with OpenMP to determine if this result would be
>  > applicable to it. If you would like to try, we can certainly give you
>  > pointers as to where to start.
> Perhaps I'm missing something. How is locking and synchronization an
>  issue when each thread is writing to a mutually exclusive part of the
>  output buffer?

The trick is to efficiently allocate these output buffers. If you
simply give each of n threads 1/n of the job, then a single CPU that
is otherwise occupied doubles your computation time. If you break the
job into
many pieces and let threads grab them, you need to worry about locking
to keep two threads from grabbing the same piece of data. Plus,
depending on where things are in memory you can kill performance by
abusing the caches (maintaining cache consistency across CPUs can be a
challenge). Plus a certain amount of numpy code depends on order of
evaluation, for example:

a[:-1] = 2*a[1:]
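
To see why order matters there, here is a C sketch (the function names
are made up for illustration, not numpy internals): starting from
{1, 2, 4, 8}, a left-to-right scan reproduces numpy's answer
{4, 8, 16, 8}, while a right-to-left scan cascades doubled values and
gives {64, 32, 16, 8}.

```c
#include <assert.h>

/* Hypothetical helpers: both apply a[:-1] = 2*a[1:] in place, but
 * scan in opposite directions. */

/* Left to right: each a[i] is written before any later iteration
 * reads it, so the result matches numpy, which conceptually evaluates
 * the whole right-hand side before storing. */
void shift_double_forward(int *a, int n)
{
    for (int i = 0; i < n - 1; i++)
        a[i] = 2 * a[i + 1];
}

/* Right to left: a[i + 1] has already been overwritten by the time
 * a[i] reads it, so the doubled values cascade and the answer is
 * wrong.  A parallel split across threads cannot guarantee either
 * order without extra care. */
void shift_double_backward(int *a, int n)
{
    for (int i = n - 2; i >= 0; i--)
        a[i] = 2 * a[i + 1];
}
```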

Correctly handling all this can take a lot of overhead, and require a
lot of knowledge about hardware. OpenMP tries to take care of some of
this in a way that's easy on the programmer.
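
One common compromise between static splitting and per-item locking is
sketched below in C11 (the names are hypothetical, not numpy code):
worker threads repeatedly claim fixed-size chunks from a shared atomic
counter, so an otherwise-busy CPU simply claims fewer chunks, at the
cost of one atomic fetch-and-add per chunk.

```c
#include <stdatomic.h>

#define CHUNK 1024

/* Shared cursor; each fetch-and-add claims the next CHUNK elements.
 * This increment is the per-chunk synchronization cost. */
atomic_long next_start = 0;

/* Body each worker thread would run (a sketch, not numpy code):
 * double every element of a, grabbing chunks until the array is
 * exhausted.  A thread on a busy CPU just claims fewer chunks, so
 * the others are never stuck waiting for a slow partner. */
void worker_double(double *a, long n)
{
    for (;;) {
        long start = atomic_fetch_add(&next_start, CHUNK);
        if (start >= n)
            break;
        long end = start + CHUNK < n ? start + CHUNK : n;
        for (long i = start; i < end; i++)
            a[i] *= 2.0;
    }
}
```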

To answer the OP's question, there is a relatively small number of C
inner loops that could be marked up with OpenMP #pragmas to cover most
matrix operations. Matrix linear algebra is a separate question, since
numpy/scipy prefers to use optimized third-party libraries - in these
cases one would need to use parallel linear algebra libraries (which
do exist, I think, and are plug-compatible). So parallelizing numpy is
probably feasible, and probably not too difficult, and would be
valuable. The biggest catch, I think, would be compilation issues - is
it possible to link an OpenMP-compiled shared library into a normal
executable?
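
As a sketch of what such markup might look like (a toy add loop with
made-up names, not numpy's actual strided ufunc inner loop):

```c
#include <stddef.h>

/* Toy sketch of an OpenMP-marked inner loop; numpy's real ufunc
 * loops operate on strided char pointers.  Compiled without -fopenmp
 * the pragma is simply ignored and the loop runs serially, so only
 * the parallel build needs the OpenMP runtime. */
void add_double_loop(const double *x, const double *y,
                     double *out, size_t n)
{
#pragma omp parallel for
    for (long i = 0; i < (long)n; i++)
        out[i] = x[i] + y[i];
}
```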

