[SciPy-user] Python on Intel Xeon Dual Core Machine

Lorenzo Isella lorenzo.isella@gmail....
Mon Feb 4 02:52:30 CST 2008


Hello,
And thanks for your reply.
A small aside: I am getting interested in parallel computing with
Python because I am a bit surprised that post-processing some
relatively large arrays of data (5000 by 5000) takes a lot of time and
memory on my laptop, and that the situation does not improve
dramatically on my desktop, which has more memory and is a 64-bit
machine (running amd64 Debian).
A question: if I create arrays in SciPy without any special
declaration, are they double-precision arrays by default, or something
"more" on 64-bit machines?
If the latter, is there a single declaration I can use (without
chasing down every individual array) to default to standard
double-precision arithmetic?
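
To make the question concrete, here is the kind of check I have in
mind (just a sketch, assuming NumPy imported as np):

    import numpy as np

    # Inspect the dtype that plain array creation gives on this machine.
    a = np.zeros((5000, 5000))
    print(a.dtype)        # 'float64' here would mean ordinary C doubles

    # The explicit per-array declaration I would rather avoid repeating:
    b = np.zeros((5000, 5000), dtype=np.float64)
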
Cheers

Lorenzo


> Date: Sun, 3 Feb 2008 22:55:04 +0200
> From: Stefan van der Walt <stefan@sun.ac.za>
> Subject: Re: [SciPy-user] Python on Intel Xeon Dual Core Machine
> To: scipy-user@scipy.org
> Message-ID: <20080203205504.GD25396@mentat.za.net>
> Content-Type: text/plain; charset=iso-8859-1
>
> Hi Lorenzo
>
> On Sat, Feb 02, 2008 at 04:22:14PM +0100, Lorenzo Isella wrote:
> > I am currently using a Python script on my box to post-process some
> > data (the process typically involves operations on 5000 by 5000
> > arrays).
> > The Python script also relies heavily on some R scripts (imported via
> > Rpy) and a compiled Fortran 90 routine (imported via f2py).
> > I have recently made a new Debian testing installation for the amd64
> > architecture on my machine [an Intel Xeon dual-core PC], so I wonder if
> > there is any way to take advantage of both CPUs when running that
> > script.
> > Is this something that can be achieved "automatically" by installing
> > and calling some libraries? Or do I have to rewrite and rethink my
> > whole script?
>
> Using a parallelised linear algebra library may address most of your
> problems.  I think (and I hope someone will correct me if I'm wrong)
> that ATLAS can be compiled to use multiple threads, and I know MKL
> supports it as well.
>
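For what it's worth, a minimal sketch of that route: check what the
installed NumPy is linked against and hint the thread count (the
environment variable names below, OMP_NUM_THREADS and MKL_NUM_THREADS,
depend on how the BLAS was built, so treat them as assumptions):

    import os

    # MKL and OpenMP-based ATLAS builds usually read the thread count
    # from an environment variable; set it before NumPy loads the BLAS
    # library.  Two threads here, assuming the dual-core Xeon.
    os.environ.setdefault('OMP_NUM_THREADS', '2')
    os.environ.setdefault('MKL_NUM_THREADS', '2')

    import numpy as np

    # Report which BLAS/LAPACK implementation this NumPy was built against.
    np.show_config()

    # Large dense products then go through the (possibly threaded) BLAS.
    a = np.random.rand(2000, 2000)
    b = np.random.rand(2000, 2000)
    c = np.dot(a, b)
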
> Another approach would be to parallelize the algorithm itself, using
> something like 'processing' (http://pypi.python.org/pypi/processing/).
>
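A rough sketch of that approach, splitting the big array into
independent blocks and handing them to a pool of worker processes
(this assumes the 'processing' package is installed; the same Pool API
later became the standard-library multiprocessing module):

    from processing import Pool   # stdlib equivalent: from multiprocessing import Pool
    import numpy as np

    def postprocess(block):
        # Placeholder for the real per-block computation.
        return block.sum()

    if __name__ == '__main__':
        data = np.random.rand(5000, 5000)
        # Split into independent row blocks, one chunk per task, and map
        # them over a pool of worker processes (two for a dual-core box).
        blocks = np.array_split(data, 8)
        pool = Pool(processes=2)
        results = pool.map(postprocess, blocks)
        print(sum(results))
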
> You can take that a step further by distributing the problem over
> several processes (running on one or more machines), using
> ipython1 (http://ipython.scipy.org/moin/IPython1).
>
> Good luck!
>
> Stéfan

