[SciPy-User] Speeding things up - how to use more than one computer core

Ralf Gommers ralf.gommers@gmail....
Sat Apr 6 10:56:31 CDT 2013


On Sat, Apr 6, 2013 at 5:40 PM, Troels Emtekær Linnet <tlinnet@gmail.com> wrote:

> Dear Scipy users.
>
> I am doing analysis of some NMR data, where I repeatedly do leastsq
> fitting, but I am getting impatient with the time it takes. A run over
> my data takes approx. 3-5 minutes, which is too slow for this testing
> phase.
>
> A look at my task manager shows that I am only using 25%, i.e. one core,
> of my computer. I have access to a machine with 24 cores, so I would
> like to speed things up.
> ------------------------------------------------
> I have been looking at the descriptions of multithreading/multiprocessing:
> http://www.scipy.org/Cookbook/Multithreading
> http://stackoverflow.com/questions/4598339/parallelism-with-scipy-optimize
> http://www.scipy.org/ParallelProgramming
>
>
> But I hope someone can guide me on which of these two approaches I
> should go for, and how to implement it. I am a little unsure about the
> GIL, synchronisation, and such things, which I know nothing about.
>
> For the real data, I can see that I am always waiting on the leastsq
> call. How can I start a pool of workers across my cores for the fitting?
>

Have a look at http://pythonhosted.org/joblib/parallel.html, that should
allow you to use all cores without much effort. It uses multiprocessing
under the hood. That's assuming you have multiple fits that can run in
parallel, which I think is the case; I at least see some fits in a for-loop.
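A minimal sketch of what that could look like, assuming your fits are independent and your model is something like an exponential decay (the model, initial guess, and synthetic data here are placeholders, not your actual NMR analysis):

```python
# Run many independent leastsq fits in parallel with joblib.
import numpy as np
from joblib import Parallel, delayed
from scipy.optimize import leastsq

def residuals(params, t, y):
    # Placeholder model: single exponential decay, y = amp * exp(-rate * t)
    amp, rate = params
    return y - amp * np.exp(-rate * t)

def fit_one(t, y):
    p0 = [1.0, 1.0]  # initial guess for (amp, rate)
    popt, _ = leastsq(residuals, p0, args=(t, y))
    return popt

# Synthetic stand-in datasets, one per "experiment".
rng = np.random.RandomState(0)
t = np.linspace(0.0, 2.0, 50)
datasets = [(t, 2.0 * np.exp(-1.5 * t) + 0.01 * rng.randn(t.size))
            for _ in range(8)]

# n_jobs=-1 uses all available cores; each fit runs in a separate process,
# so the GIL is not an issue.
results = Parallel(n_jobs=-1)(delayed(fit_one)(t, y) for t, y in datasets)
```

The key point is that the parallelism is over the independent datasets, not inside a single leastsq call: each worker process runs one complete fit.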

Ralf

