[Numpy-discussion] Request for review: dynamic_cpu_branch

David Cournapeau david@ar.media.kyoto-u.ac...
Mon Dec 22 23:14:00 CST 2008


Charles R Harris wrote:
>
>
> On Mon, Dec 22, 2008 at 8:35 PM, David Cournapeau
> <david@ar.media.kyoto-u.ac.jp <mailto:david@ar.media.kyoto-u.ac.jp>>
> wrote:
>
>     Hi,
>
>        I updated a small branch of mine which is meant to fix a problem on
>     Mac OS X with python 2.6 (see
>     http://projects.scipy.org/pipermail/numpy-discussion/2008-November/038816.html
>     for the problem) and would like one core numpy developer to review it
>     before I merge it.
>
>        The problem can be seen with the following test code:
>
>     #include <Python.h>
>
>     int main()
>     {
>     #ifdef WORDS_BIGENDIAN
>        printf("Big endian macro defined\n");
>     #else
>        printf("No big endian macro defined\n");
>     #endif
>
>        return 0;
>     }
>
>     If I build the above with python 2.5 on Mac OS X (Intel), I get
>     the message "No big endian macro defined". But with python 2.6
>     (installed from the official binary), I get "Big endian macro
>     defined", which is obviously wrong for my machine. This is a
>     problem in python, but we can fix it in numpy (which depends on
>     this macro).
>
>     The fix is simple: define our own NPY_BIG_ENDIAN/NPY_LITTLE_ENDIAN
>     macros instead of relying on the one from the python headers.
>     More precisely:
>            - a header cpuarch.h has been added: it uses toolchain
>              specific macros to detect the CPU, and sets the
>              endianness macros accordingly
>
>
> Is there a good reason to use a separate file? I assume this header
> will just end up being included in one of the others. Maybe you could
> put it in the same header that sets up all the differently sized types.

None other than I don't like big source files or big headers.
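
For what it is worth, the whole file is small anyway. Abridged, and
from memory, it looks something like this (the real header handles
more CPUs, with the macro names gathered from glibc and boost):

#if defined(__i386__) || defined(_M_IX86)
        #define NPY_CPU_X86
#elif defined(__x86_64__) || defined(_M_AMD64)
        #define NPY_CPU_AMD64
#elif defined(__ppc__) || defined(__powerpc__)
        #define NPY_CPU_PPC
#else
        #error Unknown CPU, please report this to numpy developers
#endif

/* endianness follows from the detected CPU */
#if defined(NPY_CPU_X86) || defined(NPY_CPU_AMD64)
        #define NPY_LITTLE_ENDIAN
#elif defined(NPY_CPU_PPC)
        #define NPY_BIG_ENDIAN
#endif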

>
> Let's get rid of _signbit.c and move the signbit function into
> umath_funcs_c99.  It can also be simplified using NPY_INT32 for the
> integer type. I'd go for a pointer cast and dereference myself but the
> current implementation is pretty common and I don't think it matters much.

If _signbit.c can be changed, so be it, but that does not affect the
patch as it is. There are only two places which use WORDS_BIGENDIAN
directly: _signbit.c and mconf.h (in scipy).
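
For reference, the logic there boils down to something like the
following (a sketch, not the exact file, using int32_t from stdint.h
where numpy would use the NPY_INT32 you mention, and the
NPY_BIG_ENDIAN macro from the branch):

#include <stdint.h>

static int signbit_d(double x)
{
    union {
        double d;
        int32_t i[2];
    } u;

    u.d = x;
#ifdef NPY_BIG_ENDIAN
    return u.i[0] < 0; /* high word (with the sign bit) comes first */
#else
    return u.i[1] < 0; /* high word comes second on little endian */
#endif
}

The union avoids the aliasing problem of a raw pointer cast, and the
signed comparison picks up the IEEE 754 sign bit directly (it also
works for -0.0 and NaN).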

>
> I think it is OK to set the order by the CPU type. The PPC might be a
> bit iffy, but I don't know of any products using its bigendian mode

I guess you meant little endian. Yes, setting endianness by processor
is iffy, but there is no way around it: either we detect endianness at
runtime and deal with it correctly, or, since we depend on it in the
headers as we currently do, we need some way to detect it from macros
set by the system. I prefer the CPU solution to the platform specific
one (most systems have some way to detect this kind of thing, I guess,
by including some headers), especially since detecting the CPU can be
useful for other things later (SSE optimizations, for example).
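
To illustrate the platform specific route (the one I would rather
avoid), on glibc systems one can get the byte order from a header:

#include <endian.h> /* glibc specific header */
#include <stdio.h>

int main(void)
{
#if __BYTE_ORDER == __BIG_ENDIAN
    printf("big endian\n");
#else
    printf("little endian\n");
#endif
    return 0;
}

This works fine on linux, but every platform spells it differently,
whereas the compiler-defined CPU macros are available everywhere.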

> Is there any simple way that someone who needs a special case can
> override the automatic settings?

No. We could add a way to override the CPU-based detection and force a
value, but since distutils is so limited in that regard, I think people
will need to go into the sources anyway, so I am not sure it is really
worthwhile.

What bothers me more is that this kind of misconfiguration
(little-endian ppc) would go undetected. We could at least add a
runtime check?
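
Something like the following could run once at import time (a rough
sketch; the name and the error handling are just for illustration):

#include <stdio.h>
#include <stdlib.h>

static void npy_check_endianness(void)
{
    union {
        unsigned int i;
        unsigned char c[sizeof(unsigned int)];
    } u;

    u.i = 1;
#ifdef NPY_BIG_ENDIAN
    if (u.c[0] == 1) { /* first byte is the low byte: little endian */
        fprintf(stderr, "numpy built as big endian on a little endian machine\n");
        abort();
    }
#else
    if (u.c[0] == 0) { /* first byte is the high byte: big endian */
        fprintf(stderr, "numpy built as little endian on a big endian machine\n");
        abort();
    }
#endif
}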

>
>
>     I don't like depending on CPU detection so much, but OTOH, the
>     only other solution I can see would be to have numpy headers
>     which do not rely on endianness at all, and that does not seem
>     possible without breaking some API (the macros which test for
>     endianness: PyArray_ISNBO and all the other ones which depend on
>     it, including PyArray_ISNOTSWAPPED).
>
>
> Do what you gotta do.  It sounds like the CPU can be determined from a
> macro set by the compiler, is that so?

Yep. I gather the values from glibc and boost, which should cover
quite a number of platforms, I think.

David

