[SciPy-user] Python on Intel Xeon Dual Core Machine

Robert Kern robert.kern@gmail....
Tue Feb 5 14:47:28 CST 2008


J. Ryan Earl wrote:
> Lorenzo Isella wrote:
>> I am a bit surprised at the fact that postprocessing some
>> relatively large arrays of data (5000 by 5000) takes a lot of time and
>> memory on my laptop, but the situation does not improve dramatically
>> on my desktop, which has more memory and is a 64-bit machine (with the
>> amd64 Debian).
>> A question: if I use arrays in Scipy without any special declaration,
>> are they double precision arrays or something "more" as a default on
>> 64-bit machines?
> I see a lot of confusion on this topic in general.  When people talk 
> about a "64-bit" machine in general CPU terms, they're talking about its 
> address space.  You're mixing up the size of address operands with the 
> size of data operands.

He's not really confusing the two. Many systems change the size of the data 
operands based on the size of the address operands.

   http://en.wikipedia.org/wiki/64-bit#64-bit_data_models

As a general rule, though, only C integer types change size; the C standard is 
notoriously flexible in this regard. This has some downstream effects: Python's 
int objects are stored as C longs, and numpy's default "int" dtype is whatever 
size a C long happens to be on your platform.
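
For instance, you can check this directly (a minimal sketch; the exact size 
shown depends on your platform's data model, e.g. int64 on LP64 Linux but 
int32 on 32-bit systems and on 64-bit Windows):

   import numpy as np

   a = np.array([1, 2, 3])           # default integer dtype follows C long
   print(a.dtype, a.dtype.itemsize)  # e.g. "int64 8" on amd64 Linux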

While a system could, in principle, change its default floating-point type based 
on whether the CPU/compiler combination is 64-bit, I've never seen one that does.
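
A quick check (again just a sketch) confirms that the default float dtype is 
float64 regardless of the pointer size:

   import numpy as np

   x = np.array([0.5, 1.5])
   print(x.dtype)            # float64 on both 32-bit and 64-bit machines
   print(np.zeros(3).dtype)  # float64 here as well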

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
  that is made terrible by our own mad attempt to interpret it as though it had
  an underlying truth."
   -- Umberto Eco

