[SciPy-user] running scipy code simultaneously on several machines

Vincent Schut schut@sarvision...
Fri Oct 12 09:43:58 CDT 2007



Jaonary Rabarisoa wrote:
> Hi all,
>
> I need to call a very time-consuming Python function several
> times. Suppose, for simplicity, that this function takes only
> one argument and returns one value, so its prototype is as
> follows:
>
> def my_func(A):
>     ....
>     return res
>
> I need to call this function for different values of A. A naive
> approach is the following:
>
> for A in my_array_of_A:
>     res = my_func(A)
>     all_res.append(res)
>
> My problem is that one call of my_func takes several hours, so I
> wonder whether it's possible to distribute this "for" loop across
> several machines (or processors) in order to speed up the
> process.
>
> I've heard something about the cow module in SciPy and the pympi
> package, but I just don't know how to tackle this problem with
> either of these modules. Could one of you give some hints on how
> to do this?
As an alternative, you could check out Parallel Python:
www.parallelpython.org. It works like a charm here, both for SMP
processing on one machine and for cluster processing.
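Parallel Python has its own job-server API (documented on its site); as a
rough standard-library sketch of the same idea, Python's multiprocessing
module can map the calls over a pool of local worker processes. The
my_func below is just a stand-in for the poster's slow function:

```python
from multiprocessing import Pool

def my_func(A):
    # Stand-in for the slow function from the original post;
    # replace the body with the real, hours-long computation.
    return A * A

if __name__ == '__main__':
    my_array_of_A = [1, 2, 3, 4]
    # Pool() starts one worker process per CPU core by default;
    # map() evaluates my_func(A) for each A in parallel and
    # returns the results in the same order as the inputs.
    with Pool() as pool:
        all_res = pool.map(my_func, my_array_of_A)
    print(all_res)
```

Note this only parallelizes across the cores of one machine;
spreading the loop over several machines needs a cluster-aware tool
such as Parallel Python or pympi.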

Cheers,
Vincent.
>
> Best regards,
>
> Jaonary
> ------------------------------------------------------------------------
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user@scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>   


