[SciPy-user] distributed computing

Prabhu Ramachandran prabhu at aero.iitm.ernet.in
Fri Jun 4 12:42:29 CDT 2004

>>>>> "FP" == Fernando Perez <Fernando.Perez at colorado.edu> writes:

    FP> H Jansen wrote:
    >> I want to develop a complex, real-time scientific computing
    >> application on a 4-node (4x2) multiprocessor (4 dual-processor
    >> boards). What is the best distributed computation model that I
    >> should use: message-passing (MPI), client-server computation
    >> agents, threads, ... ? Has anyone some suggestion? Thanks.

    FP> Keep in mind that if you use a threading model, you'll be
    FP> bitten by the Python GIL (Global Interpreter Lock).  Basically
    FP> it means that as long as you are running inside pure python
    FP> code, only one thread runs at a time.  Your C extensions can
    FP> release the GIL, so with a bit of planning the problem is
    FP> manageable.  MPI has python wrappers, so that's an option as
    FP> well.
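
[A minimal illustration of the GIL point above, using the standard
threading module.  The worker names and sizes are made up for the
example; the point is that pure-Python threads interleave bytecode
rather than run in parallel, though their results are still correct.]

```python
import threading

# Two CPU-bound workers written in pure Python.  Because of the GIL,
# only one of them executes Python bytecode at any instant, so on a
# multiprocessor machine they still use roughly one CPU between them.
# C extensions that release the GIL (as many numeric routines do) are
# exempt from this restriction.
results = {}

def count(name, n):
    total = 0
    for i in range(n):
        total += i
    results[name] = total

t1 = threading.Thread(target=count, args=("a", 100000))
t2 = threading.Thread(target=count, args=("b", 100000))
t1.start(); t2.start()
t1.join(); t2.join()

# The GIL costs parallelism, not correctness: both totals are right.
print(results["a"], results["b"])
```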

The answer really depends on what kind of distributed computing he
wants to do.  If the problem is not embarrassingly parallel and a lot
of communication is necessary, MPI sounds like a good option.
Konrad Hinsen presented a talk on BSP (Bulk Synchronous Parallelism)
last year at SciPy'03.  I am not sure if your problem is suited to it.

If non-blocking calls are all you need, you might want to try PyPar.

It's easy to install, works on Numeric arrays, is Pythonic and easy
to use, and does not require a specially compiled Python interpreter.
