[Numpy-discussion] Best way to run python parallel

Brian Granger ellisonbg.net@gmail....
Thu Mar 29 18:10:55 CDT 2007


We looked at the BSP model at various points in implementing the
parallel IPython stuff.  While I wouldn't say that IPython uses a BSP
model, there are some similarities.  But in the broader realm of
scientific computing, BSP has never really caught on like MPI has - in
spite of having some nice ideas like being able to predict the
performance of a parallel BSP code.

The main difficulty is that BSP is a much more limited model than MPI
- which is why you can predict the performance of BSP-based codes.
The main limitation is that communication and computation cannot
overlap.  For some codes that is fine, and there might be benefits to
using BSP over MPI.  For general parallel codes and algorithms,
however, you do want to overlap communication and computation.  In
fact, if you don't, you can severely limit the scalability of your
code.
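To make the distinction concrete, here is a hypothetical sketch (not from
the thread, and using plain Python threads rather than a real BSP or MPI
library) of BSP's superstep structure: each process alternates strictly
between a local-computation phase and a communication phase, separated by
a global barrier, so computation and communication never overlap.

```python
import threading

# Simulated BSP supersteps: compute, then communicate, then barrier.
# NPROCS, bsp_process, and the inbox lists are illustrative names only.
NPROCS = 4
barrier = threading.Barrier(NPROCS)
inbox = [[] for _ in range(NPROCS)]   # per-process message queues
results = [0] * NPROCS

def bsp_process(rank, data):
    # Superstep 1: local computation only (no messages in flight)
    local = sum(data)
    barrier.wait()
    # Communication phase: send the local result to the next process
    inbox[(rank + 1) % NPROCS].append(local)
    barrier.wait()                    # barrier ends the superstep
    # Superstep 2: messages sent last superstep are now visible
    results[rank] = sum(inbox[rank])

threads = [threading.Thread(target=bsp_process, args=(r, [r, r + 1]))
           for r in range(NPROCS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # each rank sees the value sent by its predecessor
```

In a real MPI code you would instead post a non-blocking send/receive and
keep computing while the message is in transit; in the BSP structure above
the barriers forbid exactly that overlap, which is what makes BSP's cost
model predictable but limits scalability.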

Brian





On 3/29/07, Sebastian Haase <haase@msg.ucsf.edu> wrote:
> Hi,
> What is the general feeling towards BSP on this list?
> I remember Konrad Hinsen advertising it at the SciPy workshop '03.
> It is supposed to be much simpler to use than MPI, yet still powerful
> and flexible enough for almost all applications.
> It is part of Konrad's ScientificPython ( != SciPy )
>
> Some links are here:
> http://www.bsp-worldwide.org/
> http://en.wikipedia.org/wiki/Bulk_Synchronous_Parallel
>
> Evaluating Scientific Python/BSP on selected parallel computers
> http://ove.nipen.no/diplom/
>
> http://dirac.cnrs-orleans.fr/plone/software/scientificpython/
>
> - Sebastian Haase
>
>
>
> On 3/29/07, Peter Skomoroch <peter.skomoroch@gmail.com> wrote:
> >
> >
> > If you want to use PyMPI or PyPar, I'm writing a series of tutorials on how to
> > get them running on Amazon EC2,
> >
> > http://www.datawrangling.com/on-demand-mpi-cluster-with-python-and-ec2-part-1-of-3.html
> >
> >
> > I'm using PyMPI on a 20 node EC2 cluster and everything seems groovy, but I'm
> > relatively new to MPI, so I have probably overlooked some easier solutions.
> >
> > Any feedback on the writeups from Python gurus would be appreciated.
> >
> > -Pete
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
>

