[SciPy-dev] Possible PPC float bug with special.ndtr

Tom Loredo loredo@astro.cornell....
Sun Oct 21 23:19:06 CDT 2007


Hi folks-

A bit of code misbehaved on my first test case in a bewildering
manner.  I was coding on a PPC, and it seems the failure may be
due to a PPC-specific bug, as the same code works fine on both
OS X Intel and Linux Intel.

It boils down to this simple example.  Here is the (expected)
behavior on both Intel platforms:

In [1]: from scipy.special import ndtr

In [2]: ndtr(1.)
Out[2]: 0.841344746069

In [3]: arg=(1.2-1.)/.2

In [4]: arg
Out[4]: 0.99999999999999978

In [5]: ndtr(arg)
Out[5]: 0.841344746069

Here is the (unexpected) behavior on PPC (OS X, Python 2.4.4,
numpy 1.0.3.1, scipy 0.5.2.1):

In [1]: from scipy.special import ndtr

In [2]: ndtr(1.)
Out[2]: 0.841344746069

In [3]: arg = (1.2-1.)/.2

In [4]: arg
Out[4]: 0.99999999999999978

In [5]: ndtr(arg)
Out[5]: nan

In [6]: ndtr(arg+1.e-16)
Out[6]: nan

In [7]: ndtr(arg+2.e-16)
Out[7]: 0.841344746069

In [8]: ndtr(arg-1.e-10)
Out[8]: nan

In [9]: ndtr(arg-1.e-9)
Out[9]: 0.841344745827

That is, there is a sliver of arguments near 1.0 where ndtr (or
perhaps erf or erfc, which it relies on) returns nan, but only
on PPC.
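For what it's worth, the width of that sliver could be mapped with a
quick scan like the sketch below (hypothetical helper name; assumes
only numpy and scipy are importable, and uses the `y != y` nan test
so it avoids math.isnan, which Python 2.4 lacks):

```python
import numpy as np
from scipy.special import ndtr

def find_nan_args(center=1.0, half_width=1e-9, n=2001):
    """Scan a small interval around `center` and return the
    arguments for which ndtr yields nan (nan != nan is True)."""
    xs = np.linspace(center - half_width, center + half_width, n)
    ys = ndtr(xs)
    return [float(x) for x, y in zip(xs, ys) if y != y]

# On the Intel platforms this list should be empty; on the
# affected PPC build it should bracket the failing sliver.
print(find_nan_args())
```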

Has anyone else seen this on PPC?  Should I submit this to Trac, or
is it something already dealt with?  I took a peek at ndtr.c, but
it's not obvious what might be going on.  It looks like various
constants get defined in an architecture-dependent manner, and
perhaps an inaccurate definition underlies this problem.
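If someone with PPC hardware wants to narrow it down, a sketch like
the following (hypothetical helper name; assumes only scipy.special)
would show whether erf or erfc themselves go bad at the failing
argument, since ndtr(x) is mathematically 0.5*erfc(-x/sqrt(2)):

```python
from math import sqrt
from scipy.special import ndtr, erf, erfc

def cross_check(x):
    """Compare ndtr against two equivalent expressions built from
    erf and erfc; a lone nan among the three would point at the
    underlying routine rather than ndtr's wrapper code."""
    via_ndtr = ndtr(x)
    via_erfc = 0.5 * erfc(-x / sqrt(2.0))
    via_erf = 0.5 * (1.0 + erf(x / sqrt(2.0)))
    return via_ndtr, via_erfc, via_erf

arg = (1.2 - 1.0) / 0.2   # the value from the session above
print(cross_check(arg))
```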

Thanks,
Tom

