[Numpy-discussion] Question about Optimization (Inline, and Pyrex)

Matthieu Brucher matthieu.brucher@gmail....
Tue Apr 17 15:03:48 CDT 2007


I would say that if the underlying ATLAS library is multithreaded, numpy
operations will be multithreaded as well. Beyond that, at the Python level,
even if an operation takes a long time, the interpreter can still schedule
other threads, because the global lock is released during the numpy
operation (as I understood from the previous mails, only one thread can
execute Python bytecode at any given time).
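For example, something along these lines should let two matrix products run
on different cores at the same time (a rough sketch I have not benchmarked
here; the sizes and the use of np.dot are only for illustration):

    # Two Python threads, each running a large numpy operation. Because
    # numpy drops the GIL while the compiled BLAS/ATLAS routine runs,
    # the two calls can execute concurrently on a multi-core machine.
    import threading
    import numpy as np

    def work(a, b, out, idx):
        # np.dot enters compiled code and releases the GIL here
        out[idx] = np.dot(a, b)

    n = 2000
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)
    results = [None, None]

    threads = [threading.Thread(target=work, args=(a, b, results, i))
               for i in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

Note that this runs two *separate* operations in parallel; whether a single
operation is split across processors depends on the BLAS implementation.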

Matthieu

2007/4/17, James Turner <jturner@gemini.edu>:
>
> Hi Anne,
>
> Your reply to Lou raises a naive follow-up question of my own...
>
> > Normally, python's multithreading is effectively cooperative, because
> > the interpreter's data structures are all stored under the same lock,
> > so only one thread can be executing python bytecode at a time.
> > However, many of numpy's vectorized functions release the lock while
> > running, so on a multiprocessor or multicore machine you can have
> > several cores at once running vectorized code.
>
> Are you saying that numpy's vectorized functions will perform a single
> array operation in parallel on a multi-processor machine, or just that
> the user can explicitly write threaded code to run *multiple* array
> operations on different processors at the same time? I hope that's not
> too stupid a question, but I haven't done any threaded programming yet
> and the answer could be rather useful...
>
> Thanks a lot,
>
> James.
>
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
>