[SciPy-user] running scipy code simultaneously on several machines
Fri Oct 12 10:42:30 CDT 2007
As Jarrod mentions below, IPython1 is probably the best solution for
this. Here is the simplest parallel implementation in IPython1:
In : import ipython1.kernel.api as kernel
In : rc = kernel.RemoteController(('127.0.0.1',10105))
In : rc.getIDs()
Out: [0, 1, 2, 3]
In : def my_func(A): return 'result'
In : rc.mapAll(my_func, range(16))
This partitions the input sequence (range(16)) amongst the 4 engines,
calls my_func on each element, and then gathers the results back.
This is the simplest approach, but IPython1 supports many other styles
and approaches, including a dynamically load-balanced task-farming
system. I don't know if you need it, but IPython1 also has full
integration with MPI.
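As an aside: if the work happens to fit on a single multi-core machine rather than a cluster, the same map-style pattern is available in the standard library's multiprocessing module, with no extra infrastructure. This is only a minimal sketch of that alternative, not IPython1's API; my_func here is a trivial stand-in for the expensive function.

```python
from multiprocessing import Pool

def my_func(A):
    # Stand-in for the long-running computation.
    return A * A

if __name__ == "__main__":
    # One worker per processor by default; set processes= explicitly
    # to pin the pool size, e.g. 4 workers here.
    with Pool(processes=4) as pool:
        # Like rc.mapAll above: partitions the inputs among the workers,
        # applies my_func to each element, and gathers the results in order.
        results = pool.map(my_func, range(16))
    print(results)
```

The trade-off is that multiprocessing only uses the cores of one machine, whereas IPython1 can distribute the same loop across several hosts.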
Please let us know if you have questions.
On 10/12/07, Jaonary Rabarisoa <firstname.lastname@example.org> wrote:
> Hi all,
> I need to call one Python function several times, and it is very time
> consuming. Suppose, to keep things simple, that this function takes only
> one argument and returns one value, so its prototype is as follows:
>
> def my_func(A):
>     return res
> I need to call this function for different values of A. A naive approach
> to do this is the following:
>
> for A in my_array_of_A:
>     res = my_func(A)
> My problem is that one call of my_func takes several hours. So I wonder
> whether it's possible to distribute this "for" loop across several
> machines (or processors) in order to speed up the process.
> I've heard something about the cow module in scipy and the pympi
> package, but I just do not know how to tackle this problem correctly
> with one of these modules. Could one of you give me some hints on how
> to do this?
> Best regards,
> SciPy-user mailing list