[Numpy-discussion] newbie question - large dataset

Stefan van der Walt stefan@sun.ac...
Sat Apr 7 15:12:11 CDT 2007

On Sat, Apr 07, 2007 at 02:48:47PM -0400, Anne Archibald wrote:
> If none of those algorithmic improvements are possible, you can look
> at other possibilities for speeding things up (though the speedups
> will be modest). Parallelism is an obvious one - if you've got a
> multicore machine you may be able to cut your processing time by a
> factor of the number of cores you have available with minimal effort
> (for example by replacing a for loop with a simple foreach,
> implemented as in the attached file).

Would this code speed things up under Python?  I was under the
impression that there is only one process, irrespective of whether or
not "threads" are used, and that the global interpreter lock ensures
that, when the interpreter swaps between threads, only one of them
executes Python bytecode at any instant in time.

If my above understanding is correct, it would be better to use a
multi-process engine like IPython1.

