[Numpy-discussion] Fast threading solution thoughts
Thu Feb 12 00:50:16 CST 2009
Brian Granger wrote:
>> I am curious: would you know what would be different in numpy's case
>> compared to matlab array model concerning locks ? Matlab, up to
>> recently, only spreads BLAS/LAPACK on multi-cores, but since matlab 7.3
>> (or 7.4), it also uses multicore for mathematical functions (cos,
>> etc...). So at least in matlab's model, it looks like it can be useful.
> Good point. Is it possible to tell at what array size it switches
> over to using multiple threads? Also, do you happen to know how Matlab
> is doing this?
No - I have never seen a deep explanation of the Matlab model. The C API
is so small that it is hard to deduce anything from it (except that the
memory handling is not reference-counting-based; I don't know whether that
matters for our discussion of speeding up ufuncs). I would guess that since
two arrays cannot share data (the model is COW-based), lock handling may be
easier to deal with? I am not really familiar with multi-threaded
programming (my only limited experience is soft real-time programming for
audio processing, where the issues are totally different, since latency
matters as much as, if not more than, throughput).
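For what it's worth, the size-threshold behavior Brian asks about can be mimicked at the Python level: below a cutoff, call the ufunc directly; above it, split the array into contiguous chunks and evaluate them on a thread pool. The cutoff and thread count below are made-up placeholders (real values would need benchmarking), and threads only pay off when the inner loop releases the GIL, which numpy's ufunc loops do for non-object dtypes.

```python
# Sketch: switch to threads only above a size threshold.
# THRESHOLD and N_THREADS are hypothetical values, not measured.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

THRESHOLD = 100_000   # hypothetical switch-over size
N_THREADS = 4         # hypothetical pool size

def parallel_cos(x):
    """Elementwise cos, threaded only for large inputs."""
    x = np.ascontiguousarray(x, dtype=float)
    out = np.empty_like(x)
    flat_in = x.reshape(-1)
    flat_out = out.reshape(-1)
    n = flat_in.size
    if n < THRESHOLD:
        np.cos(flat_in, out=flat_out)   # small: single thread is cheaper
        return out
    step = -(-n // N_THREADS)           # ceiling division -> chunk size
    with ThreadPoolExecutor(N_THREADS) as pool:
        # Slices are views, so each worker writes straight into `out`.
        futures = [pool.submit(np.cos, flat_in[i:i + step],
                               out=flat_out[i:i + step])
                   for i in range(0, n, step)]
        for f in futures:
            f.result()                  # propagate worker exceptions
    return out
```

This doesn't answer what Matlab actually does internally, but it gives a cheap way to experiment with where such a cutoff should sit.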
> True, but I would be happy to just have a fast C based threadpool
> implementation I could use in low level Cython based loops.
Matlab has a parallel toolbox to do this kind of thing in Matlab (I don't
know about C). I don't know anything about it, nor do I know whether any of
it can be applied to python/numpy's case:
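As a Python-level stand-in for the "fast C threadpool" idea (not Matlab's parallel toolbox, and not a substitute for a real C pool callable from Cython), here is a sketch of a persistent pool that applies an arbitrary unary ufunc chunk-wise. The class name and chunking strategy are my own invention for illustration; again, this only helps when the inner loop releases the GIL.

```python
# Sketch of a reusable thread pool for chunk-wise elementwise loops.
# Hypothetical helper, not an existing numpy API.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

class ChunkedPool:
    def __init__(self, n_threads=4):
        self.n_threads = n_threads
        self.pool = ThreadPoolExecutor(n_threads)  # workers persist across calls

    def map_ufunc(self, ufunc, x):
        """Apply a unary ufunc to x, splitting work across the pool."""
        x = np.ascontiguousarray(x, dtype=float)
        out = np.empty_like(x)
        fin, fout = x.reshape(-1), out.reshape(-1)
        step = max(1, -(-fin.size // self.n_threads))  # ceiling division
        futures = [self.pool.submit(ufunc, fin[i:i + step],
                                    out=fout[i:i + step])
                   for i in range(0, fin.size, step)]
        for f in futures:
            f.result()   # re-raise any worker exception
        return out
```

A real solution for Cython loops would want the pool in C (pthreads or similar) with the GIL released around the whole dispatch, but the chunk-and-join structure would look much the same.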