[Numpy-discussion] Request for review: dynamic_cpu_branch

David Cournapeau david@ar.media.kyoto-u.ac...
Mon Dec 22 21:35:28 CST 2008


    I updated a small branch of mine which is meant to fix a problem on
Mac OS X with Python 2.6 (see
for the problem), and I would like a core numpy developer to review it
before I merge it.

    The problem can be seen with the following test code:

#include <Python.h>
#include <stdio.h>

int main(void)
{
#ifdef WORDS_BIGENDIAN
    printf("Big endian macro defined\n");
#else
    printf("No big endian macro defined\n");
#endif
    return 0;
}
If I build the above with Python 2.5 on Mac OS X (Intel), I get the "no
big endian" message. But with Python 2.6 (installed from the official
binary), I get "Big endian", which is obviously wrong for my machine.
This is a bug in Python, but we can work around it in numpy (which
depends on this macro).
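
For reference, one way to build and run the test program above, assuming
it is saved as test_endian.c and that python2.6-config is on your PATH
(python-config ships with Python since 2.5):

$ gcc test_endian.c $(python2.6-config --includes) -o test_endian
$ ./test_endian

On an Intel Mac this should print the little endian message; with the
official 2.6 binary it prints the big endian one instead.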

The fix is simple: set our own NPY_BIG_ENDIAN/NPY_LITTLE_ENDIAN macros
instead of relying on the one from the Python headers. More precisely
(a rough sketch of the detection header follows the list):
        - a header cpuarch.h has been added: it uses toolchain-specific
macros to set one of the NPY_TARGET_CPU_* macros. X86, AMD64, PPC, SPARC,
S390, and PA_RISC are detected (I obviously did not test them all).
        - NPY_LITTLE_ENDIAN is set for little endian and NPY_BIG_ENDIAN
for big endian, according to the detected CPU (or directly from endian.h
if it is available).
        - NPY_BYTE_ORDER is set to 4321 for big endian and 1234 for
little endian (following the glibc endian.h convention).
        - endianness is determined in the numpy headers at the time they
are read (i.e. whenever you include them).
        - any mention of WORDS_BIGENDIAN has been removed from the source
code (only _signbit.c used it).
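
Here is the rough sketch promised above; the NPY_* macro names match the
description, but the exact set of toolchain checks is illustrative rather
than a verbatim copy of the branch:

#ifndef NPY_CPUARCH_SKETCH_H
#define NPY_CPUARCH_SKETCH_H

/* Map toolchain-specific predefined macros to one NPY_TARGET_CPU_*
 * macro and a byte order (only a few CPUs shown here). */
#if defined(__i386__) || defined(_M_IX86)
    #define NPY_TARGET_CPU_X86
    #define NPY_BYTE_ORDER 1234
#elif defined(__x86_64__) || defined(_M_X64)
    #define NPY_TARGET_CPU_AMD64
    #define NPY_BYTE_ORDER 1234
#elif defined(__ppc__) || defined(__powerpc__)
    #define NPY_TARGET_CPU_PPC
    #define NPY_BYTE_ORDER 4321
#elif defined(__sparc__) || defined(__sparc)
    #define NPY_TARGET_CPU_SPARC
    #define NPY_BYTE_ORDER 4321
#else
    #error Unknown CPU: cannot determine endianness
#endif

/* Derive the endianness macros from the byte order, following the
 * glibc endian.h convention (4321 = big endian, 1234 = little). */
#if NPY_BYTE_ORDER == 4321
    #define NPY_BIG_ENDIAN
#else
    #define NPY_LITTLE_ENDIAN
#endif

#endif /* NPY_CPUARCH_SKETCH_H */

The point is that everything is resolved by the preprocessor at the time
the header is included, with no configure-time test involved.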
I don't like depending on CPU detection that much, but OTOH, the only
other solution I can see would be to have numpy headers which do not
rely on endianness at all, which does not seem possible without breaking
some API (the macros which test for endianness: PyArray_ISNBO and all
the other ones which depend on it, including PyArray_ISNOTSWAPPED).
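
To illustrate why those macros force the headers to know the byte order,
here is a reconstruction of how a PyArray_ISNBO-style check works (the
SK_* names are mine, not the actual numpy definitions): the byte-order
character stored in a dtype ('<', '>' or '=') is compared against the
native one, which must therefore already be fixed when the header is
parsed.

#include <stdio.h>

/* Byte-order characters as used in dtype descriptors. */
#define SK_LITTLE '<'
#define SK_BIG    '>'
#define SK_NATIVE '='

/* The native character is chosen by the endianness macro set in the
 * detection header sketched above; default to little endian here so
 * this fragment compiles standalone. */
#ifdef NPY_BIG_ENDIAN
    #define SK_NATBYTE SK_BIG
#else
    #define SK_NATBYTE SK_LITTLE
#endif

/* True if the byte-order character denotes native byte order. */
#define SK_ISNBO(arg) ((arg) == SK_NATBYTE || (arg) == SK_NATIVE)

int main(void)
{
    printf("%d %d\n", SK_ISNBO('<'), SK_ISNBO('>')); /* 1 0 on x86 */
    return 0;
}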


