[SciPy-user] Python on Intel Xeon Dual Core Machine
Tue Feb 5 15:42:26 CST 2008
If you're interested in using MPI in python I got started by going
through some general tutorials like those at the LAM site
(http://www.lam-mpi.org/) and modifying some of the example scripts
provided with pypar (http://datamining.anu.edu.au/~ole/pypar/). pypar is
nice in that it provides a very simple, stripped down interface to MPI
though I think there are more complete, robust alternatives these days,
like mpi4py (which I mean to start using when I get time to get back to
hacking on parallel code). IPython1
(http://ipython.scipy.org/moin/IPython1) is also a nice way to do
parallel programming, but it can be nice to start with something
simple like pypar, which gives you a fairly limited range of options.
There are probably better ways of doing parallel coding in general these
days, e.g. combining threads with distributed memory models - I know
there are some experts on this list far more qualified than I to provide
advice on that.
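To get a feel for the programming model before picking a library, the
rank-based compute/send/receive pattern that MPI (and pypar) uses can be
sketched with the standard library's multiprocessing module. This is just
an illustration of the idea, not pypar's or mpi4py's actual API, and the
function names are my own:

```python
import multiprocessing as mp

def worker(rank, size, conn):
    # Each "rank" sums its own stride of the range 0..99, then sends
    # the partial result back to the master -- the same compute-locally,
    # then send/receive pattern MPI codes use.
    partial = sum(range(rank, 100, size))
    conn.send(partial)
    conn.close()

def mpi_style_sum(size=4):
    # "fork" keeps the example self-contained on Unix (children inherit
    # state instead of re-importing this module).
    ctx = mp.get_context("fork")
    conns, procs = [], []
    for rank in range(size):
        parent, child = ctx.Pipe()
        p = ctx.Process(target=worker, args=(rank, size, child))
        p.start()
        conns.append(parent)
        procs.append(p)
    # Master plays the role of rank 0: gather and reduce the partials.
    total = sum(conn.recv() for conn in conns)
    for p in procs:
        p.join()
    return total
```

With a real MPI library the same decomposition would run across machines
rather than just local processes, but the structure of the code is
essentially the same.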
>And thanks everybody for the many replies.
>I partially solved the problem by adding some extra RAM.
>A rather primitive solution, but now my desktop does not use any swap memory and the code runs faster.
>Unfortunately, the nature of the code does not easily lend itself to being split up into easier tasks.
>However, apart from the parallel python homepage, what is your recommendation for a beginner who wants a smattering of parallel computing (I have in mind C and Python at the moment)?
>Date: Mon, 4 Feb 2008 08:21:34 -0600
>From: "Bruce Southey" <firstname.lastname@example.org>
>Subject: Re: [SciPy-user] Python on Intel Xeon Dual Core Machine
>To: "SciPy Users List" <email@example.com>
>Content-Type: text/plain; charset=ISO-8859-1
>There is no general recommendation and it really does depend on what
>the scripts are doing. It is not trivial to identify what steps can be
>made parallel, and it can be even more complex to implement a parallel version.
>Given that you are calling R (yes I know R can run in parallel), you
>need to rethink and redesign your problem. If the script can be split
>into independent pieces (and I really mean completely independent)
>then just use threads such as the handythread.py code Anne Archibald
>provided on the numpy list or the Python Cookbook. (I would also
>suggest searching the numpy list especially for Anne's replies on
>this.) Otherwise you will have to learn enough about parallel programming.
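For the completely-independent case Bruce describes, the thread-pool idea
behind handythread.py can be sketched in a few lines with the standard
threading module. This is my own minimal version, not Anne's actual code;
note that pure-Python functions won't actually overlap because of the GIL,
so this pays off mainly for I/O or for numpy operations that release it:

```python
import threading

def foreach(f, items, nthreads=2):
    # Apply f to every item using a small pool of threads; results are
    # collected in input order.  A shared, lock-protected counter hands
    # out the next index, so threads stay busy even if some calls to f
    # take much longer than others.
    results = [None] * len(items)
    lock = threading.Lock()
    next_index = [0]

    def run():
        while True:
            with lock:
                i = next_index[0]
                if i >= len(items):
                    return
                next_index[0] = i + 1
            results[i] = f(items[i])

    threads = [threading.Thread(target=run) for _ in range(nthreads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

Usage is just foreach(my_function, my_list, nthreads=2) -- on a dual-core
Xeon there is rarely a reason to use many more threads than cores for
CPU-bound work.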
Center for Imaging of Neurodegenerative Diseases, UCSF
VA Medical Center (114M), 4150 Clement Street, San Francisco, CA 94121
Phone: (415) 221-4810 x3114 (lab)    FAX: (415) 668-2864
Email: karl young at ucsf edu