[SciPy-user] Python on Intel Xeon Dual Core Machine

David Cournapeau david@ar.media.kyoto-u.ac...
Tue Feb 5 03:54:27 CST 2008


Matthieu Brucher wrote:
>
>
> 2008/2/4, Lorenzo Isella <lorenzo.isella@gmail.com>:
>
>     Hello,
>     And thanks for your reply.
>     A small aside: I am getting interested in parallel computing with
>     Python, since I am a bit surprised that postprocessing some
>     relatively large arrays of data (5000 by 5000) takes a lot of time
>     and memory on my laptop, and the situation does not improve
>     dramatically on my desktop, which has more memory and is a 64-bit
>     machine (running amd64 Debian).
>     A question: if I create arrays in SciPy without any special
>     declaration, are they double precision arrays by default, or
>     something "more" on 64-bit machines?
>     If the latter is true, then can I use a single declaration (without
>     chasing every single array) in order to default to standard double
>     precision arithmetic?
>     Cheers
>
>     Lorenzo
>
>
> The default is to use doubles on every platform (32-bit or 64-bit). BTW,
> single precision is not faster than double precision for
> non-vectorized loops (like additions), so if memory is not a problem,
> NumPy's behaviour is the best ;). Using long doubles will not improve
> speed.
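For the record, the defaults are easy to check; a minimal sketch (the
5000-by-5000 size just echoes Lorenzo's arrays):

    import numpy as np

    a = np.zeros((5000, 5000))          # no dtype given
    print(a.dtype)                      # float64 (C double), on 32-bit
                                        # and 64-bit platforms alike
    b = np.zeros((5000, 5000), dtype=np.float32)
    print(b.dtype)                      # single precision only if asked for
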
I am a bit surprised by this claim: at the C level, float is certainly
faster than double. It of course depends on many parameters, but for
example ATLAS is (almost) twice as fast with float as with double for
big matrices, on my two main machines: a Pentium 4 and a Core 2 Duo,
which have extremely different behaviours with regard to their FPUs.
AFAIK, the difference is mainly due to memory pressure (at the CPU
level, float and double cost roughly the same, but the CPU is not the
bottleneck on currently available hardware).
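
For what it's worth, the effect is easy to time with a BLAS-backed
product; a rough sketch, assuming NumPy is linked against an optimized
BLAS such as ATLAS (the matrix size and the ~2x figure are
illustrative, and absolute timings will vary by machine):

    import time
    import numpy as np

    n = 2000
    a64 = np.random.rand(n, n)      # float64 (C double)
    a32 = a64.astype(np.float32)    # float32 (C float)

    for a in (a64, a32):
        t0 = time.time()
        np.dot(a, a)                # dispatched to BLAS dgemm / sgemm
        print("%s: %.3f s" % (a.dtype, time.time() - t0))

On a memory-bound machine the float32 product often comes out close to
twice as fast, consistent with the ATLAS numbers above.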

cheers,

David

