[Numpy-discussion] Can we assume both FPU and ALU to have same endianness for numpy ?

David Cournapeau david@ar.media.kyoto-u.ac...
Tue Mar 10 11:05:14 CDT 2009

Francesc Alted wrote:
> On Tuesday 10 March 2009, David Cournapeau wrote:
>> Hi,
>>     While working on portable macros for NAN, INF and co, I was
>> wondering why the current version of my code was working
>> (http://projects.scipy.org/numpy/browser/trunk/numpy/core/include/num
>> py/npy_math.h, first lines). I then realized that IEEE 754 did not
>> impose an endianness, contrary to my belief. The macros would fail if
>> the FPU and the ALU were using a different endianness. Is this still
>> a possibility on the architectures we want to support ?
> Could you be more explicit?  Currently, there is only one part of the 
> processor that does floating point arithmetic.  In old systems, the 
> FPU was located outside of the main processor, but in modern ones, 
> I'd say the FPU is always integrated into the main ALU.

I am asking whether we can assume that the integer and floating point
representations use the same endianness on all architectures we want to
support. I thought IEEE 754 required everything to be big endian, but
then discovered this was wrong.
> At any rate, having an ALU and FPU with different endianess sounds 
> *very* weird to my ears.

According to Wikipedia, it is (or was?) possible:


Now, whether this happens on current architectures, I don't know. I
have tested my code on ppc, x86, x86_64 and sparc, and all of them share
the same endianness for ALU and FPU. But maybe some others don't (ARM?
ARM is perhaps the platform I am least familiar with, but it is
potentially one of the most interesting, with things like ARM-based
netbooks and other low-power devices; I think it will be a while before
IDL or Matlab are ported to ARM :) ).
