[Numpy-discussion] Request for review: dynamic_cpu_branch

Charles R Harris charlesr.harris@gmail....
Mon Dec 22 23:15:18 CST 2008


On Mon, Dec 22, 2008 at 8:35 PM, David Cournapeau <
david@ar.media.kyoto-u.ac.jp> wrote:

> Hi,
>
>    I updated a small branch of mine which is meant to fix a problem on
> Mac OS X with python 2.6 (see
>
> http://projects.scipy.org/pipermail/numpy-discussion/2008-November/038816.html
> for the problem) and would like one core numpy developer to review it
> before I merge it.
>
>    The problem can be seen with the following test code:
>
> /* Python.h pulls in pyconfig.h, which is where WORDS_BIGENDIAN comes from */
> #include <Python.h>
> #include <stdio.h>
>
> int main(void)
> {
> #ifdef WORDS_BIGENDIAN
>     printf("Big endian macro defined\n");
> #else
>     printf("No big endian macro defined\n");
> #endif
>
>     return 0;
> }
>
> If I build the above with python 2.5 on Mac OS X (Intel), I get the "no
> big endian" message. But with python 2.6 (installed from the official
> binary), I get "Big endian", which is obviously wrong for my machine. This
> is a problem in python, but we can fix it in numpy (which depends on
> this macro).
>
> The fix is simple: set our own NPY_BIG_ENDIAN/NPY_LITTLE_ENDIAN instead
> of relying on the one from the python headers. More precisely:
>        - a header cpuarch.h has been added: it uses toolchain-specific


Is there a good reason to use a separate file? I assume this header will
just end up being included in one of the others. Maybe you could put it in
the same header that sets up all the differently sized types.


>
> macros to set one of the NPY_TARGET_CPU_* macros. X86, AMD64, PPC, SPARC,
> S390, and PA_RISC are detected. (I obviously have not tested them all.)
>        - NPY_LITTLE_ENDIAN is set for little endian, NPY_BIG_ENDIAN is
> set for big endian, according to the detected CPU (or directly using
> endian.h if available).
>        - NPY_BYTE_ORDER is set to 4321 for big endian, 1234 for little
> endian (following the glibc endian.h convention).
>        - endianness is set in the numpy headers at the time they are
> read (i.e. whenever you include them).
>        - any mention of WORDS_BIGENDIAN has been removed from the source
> code (only _signbit.c used it).
>
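The detection described above presumably looks something like the
following sketch (the NPY_* names come from David's description; the
exact toolchain checks in the real cpuarch.h, and the handling of S390
and PA_RISC, may well differ):

/* Map toolchain-specific predefined macros to a NPY_TARGET_CPU_* macro. */
#if defined(__i386__) || defined(_M_IX86)
    #define NPY_TARGET_CPU_X86
#elif defined(__x86_64__) || defined(_M_AMD64)
    #define NPY_TARGET_CPU_AMD64
#elif defined(__ppc__) || defined(__powerpc__)
    #define NPY_TARGET_CPU_PPC
#elif defined(__sparc__)
    #define NPY_TARGET_CPU_SPARC
#endif

/* Byte order then follows from the CPU (1234/4321 per the glibc
   endian.h convention). */
#if defined(NPY_TARGET_CPU_X86) || defined(NPY_TARGET_CPU_AMD64)
    #define NPY_LITTLE_ENDIAN
    #define NPY_BYTE_ORDER 1234
#elif defined(NPY_TARGET_CPU_PPC) || defined(NPY_TARGET_CPU_SPARC)
    #define NPY_BIG_ENDIAN
    #define NPY_BYTE_ORDER 4321
#else
    #error Unknown CPU: cannot set the endianness macros
#endif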

Let's get rid of _signbit.c and move the signbit function into
umath_funcs_c99.  It can also be simplified using NPY_INT32 for the integer
type. I'd go for a pointer cast and dereference myself, but the current
implementation is pretty common and I don't think it matters much.
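For reference, the "pretty common" implementation is the union trick:
overlay the double with two 32-bit words and test the sign bit of the
high word, which is where the endianness macro comes in. A sketch
(stdint.h types are used here so it stands alone; inside numpy the
integer type would be npy_int32, the function name is made up, and
NPY_BIG_ENDIAN is assumed to be defined on big-endian targets as the
branch does):

#include <stdint.h>

static int
sketch_signbit(double x)
{
    union {
        double d;
        uint32_t i[2];
    } u;

    u.d = x;
#ifdef NPY_BIG_ENDIAN
    /* big endian: the word holding the sign bit comes first */
    return (u.i[0] & 0x80000000u) != 0;
#else
    /* little endian: the word holding the sign bit comes last */
    return (u.i[1] & 0x80000000u) != 0;
#endif
}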

I think it is OK to set the byte order by CPU type. PPC might be a bit
iffy, but I don't know of any products using its little-endian mode -- not
that there aren't any. Is there any simple way for someone who needs a
special case to override the automatic settings?
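One conventional way to allow that (whether the branch does so is
exactly the question) is to honor a builder-supplied definition, e.g.
-DNPY_BIG_ENDIAN on the compile line, before falling back to detection:

/* Manual override: if the builder already chose an endianness, respect
   it and skip the automatic CPU detection entirely. */
#if defined(NPY_BIG_ENDIAN)
    #define NPY_BYTE_ORDER 4321
#elif defined(NPY_LITTLE_ENDIAN)
    #define NPY_BYTE_ORDER 1234
#else
    /* ... fall through to the CPU-based detection ... */
#endif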


> I don't much like depending on CPU detection, but OTOH, the only
> other solution I can see would be to have numpy headers which do not
> rely on endianness at all, and that does not seem possible without
> breaking some API (the macros which test for endianness: PyArray_ISNBO
> and all the others that depend on it, including PyArray_ISNOTSWAPPED).
>
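Roughly, the macros in question hang together like this (simplified
from memory, and the exact spellings vary across numpy versions):

/* The native/opposite byte-order characters are fixed when the header
   is preprocessed, so the headers cannot be made endianness-neutral
   without changing this API. */
#ifdef NPY_BIG_ENDIAN
    #define NPY_NATBYTE '>'
    #define NPY_OPPBYTE '<'
#else
    #define NPY_NATBYTE '<'
    #define NPY_OPPBYTE '>'
#endif

/* True if the byte-order character matches the native order. */
#define PyArray_ISNBO(arg) ((arg) != NPY_OPPBYTE)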

Do what you gotta do.  It sounds like the CPU can be determined from a macro
set by the compiler, is that so?

Chuck
