[SciPy-user] Python on Intel Xeon Dual Core Machine

J. Ryan Earl jre@enthought....
Mon Feb 4 23:07:46 CST 2008


Lorenzo Isella wrote:
> I am a bit surprised that postprocessing some relatively large
> arrays of data (5000 by 5000) takes a lot of time and memory on my
> laptop, and that the situation does not improve dramatically on my
> desktop, which has more memory and is a 64-bit machine (running
> amd64 Debian).
> A question: if I use arrays in SciPy without any special declaration,
> are they double-precision arrays by default, or something "more" on
> 64-bit machines?
I see a lot of confusion on this topic in general.  When people talk 
about a "64-bit" machine in general CPU terms, they're talking about its 
address space.  You're mixing up the size of address operands with the 
size of data operands.  With the SSE[1-4] instructions, 32-bit processors 
can already work on 128-bit data operands, or packed 64-bit operands.  
PPC can do something similar, arguably better, with its AltiVec 
instructions.
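
To answer the dtype question directly: NumPy's default floating-point
dtype is float64 (a C double) on 32-bit and 64-bit platforms alike, so
your arrays are not silently promoted to anything wider on the amd64
box.  A quick check (a minimal sketch; the 5000x5000 shape is just the
size from your example):

    import numpy as np

    a = np.zeros((5000, 5000))     # no dtype given
    print(a.dtype)                 # float64, on 32- and 64-bit alike
    print(a.itemsize)              # 8 bytes per element
    print(a.nbytes / 2.0**20)      # ~190.7 MiB for one such array

Note that a single 5000x5000 double array is already about 200 MB, and
elementwise expressions like a*b + c allocate temporaries of the same
size, which is usually why this kind of postprocessing feels so
memory-hungry regardless of the machine.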

64-bit is mainly going to be an advantage when you're working with 
processes that need to map more than 3GB of memory.  With respect to 
x86-64 (i.e. AMD64/EM64T), you also get a little extra performance 
because a lot of the x86 kludge is cleaned up; in particular, 64-bit 
mode provides twice as many general-purpose registers as 32-bit mode.  
At best, you're looking at roughly a 10% performance gain over properly 
optimized 32-bit code if you're not memory constrained, mainly because 
the compiler can unroll loops more aggressively into the extra 
registers.
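
If you want to confirm which mode your Python and NumPy are actually
running in, a quick check (a sketch; uses only the standard library
and NumPy):

    import platform
    import numpy as np

    # '64bit' or '32bit' -- the pointer size of the interpreter
    print(platform.architecture()[0])

    # size in bytes of NumPy's pointer-sized integer: 8 on 64-bit
    print(np.dtype(np.intp).itemsize)

Unlike the default float dtype, the default *integer* dtype does follow
the platform (typically int32 on 32-bit and int64 on 64-bit Linux), so
integer arrays created without an explicit dtype can take twice the
memory on the amd64 box.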

-ryan

