[Numpy-discussion] Numpy and OpenMP
Sat Mar 15 19:03:55 CDT 2008
Scott Ransom wrote:
> On Sat, Mar 15, 2008 at 07:33:51PM -0400, Anne Archibald wrote:
>> To answer the OP's question, there is a relatively small number of C
>> inner loops that could be marked up with OpenMP #pragmas to cover most
>> matrix operations. Matrix linear algebra is a separate question, since
>> numpy/scipy prefers to use optimized third-party libraries - in these
>> cases one would need to use parallel linear algebra libraries (which
>> do exist, I think, and are plug-compatible). So parallelizing numpy is
>> probably feasible, and probably not too difficult, and would be
> OTOH, there are reasons to _not_ want numpy to automatically use
> OpenMP. I personally have a lot of multi-core CPUs and/or
> multi-processor servers that I use numpy on. The way I use numpy
> is to run a bunch of (embarassingly) parallel numpy jobs, one for
> each CPU core. If OpenMP became "standard" (and it does work well
> in gcc 4.2 and 4.3), we definitely want to have control over how
> it is used...
"embarassingly parallel" spliting is just fine in some cases (KISS) but IMHO there is a point to get OpenMP into numpy.
Look at the g++ people : They have added a parallel version of the C++ STL into gcc4.3. Of course the non paralell one is still the standard/defaut one but here is the trend.
For now we have no easy way to perform A = B + C on more than one CPU in numpy (except the limited embarassingly parallel paradigm)
Yes, we want to be able to tune and to switch off (by default?) the numpy threading capability, but IMHO having this threading capability will always be better than a fully non paralell numpy.
>> The biggest catch, I think, would be compilation issues - is
>> it possible to link an OpenMP-compiled shared library into a normal
> I think so. The new gcc compilers use the libgomp libraries to
> provide the OpenMP functionality. I'm pretty sure those work just
> like any other libraries.