[Numpy-discussion] OS X PPC problem with Numpy 1.3.0b1
Mon Mar 23 19:53:18 CDT 2009
2009/3/24 Charles R Harris <firstname.lastname@example.org>:
> On Mon, Mar 23, 2009 at 12:34 PM, Robert Pyle <email@example.com> wrote:
>> Hi all,
>> This is a continuation of something I started last week, but with a
>> more appropriate subject line.
>> To recap, my machine is a dual G5 running OS X 10.5.6, my python is
>> Python 2.5.2 |EPD Py25 4.1.30101| (r252:60911, Dec 19 2008,
>> and numpy 1.3.0b1 was installed from the source tarball in the
>> straightforward way with
>> sudo python setup.py install
>> On Mar 19, 2009, at 3:46 PM, Charles R Harris wrote:
>> > On Mar 19, 2009, at 1:38 PM, Pauli Virtanen wrote:
>> > Thanks for tracking this! Can you check what your platform gives for:
>> > > import numpy as np
>> > > info = np.finfo(np.longcomplex)
>> > > print "eps:", info.eps, info.eps.dtype
>> > > print "tiny:", info.tiny, info.tiny.dtype
>> > > print "log10:", np.log10(info.tiny), np.log10(info.tiny/info.eps)
>> > eps: 1.3817869701e-76 float128
>> > tiny: -1.08420217274e-19 float128
>> > log10: nan nan
>> > The log of a negative number is nan, so part of the problem is the
>> > value of tiny. The sizes of the values also look suspect to me. On my
>> > machine:
>> > In : finfo(longcomplex).eps
>> > Out: 1.084202172485504434e-19
>> > In : finfo(float128).tiny
>> > Out: array(3.3621031431120935063e-4932, dtype=float128)
>> > So at a minimum eps and tiny are reversed.
>> > I started to look at the code for this but my eyes rolled up in my
>> > head and I passed out. It could use some improvements...
>> > Chuck
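[Editorial aside: on a platform where machar works, eps and tiny obey a few invariants that the broken PPC float128 values above violate. A quick sanity check, shown here for float64 (which is reliable everywhere) rather than float128:]

```python
import numpy as np

# sanity-check the finfo invariants that the PPC float128 values violate
info = np.finfo(np.float64)

assert info.eps > 0                       # eps is a small *positive* gap above 1.0
assert info.tiny > 0                      # tiny is the smallest positive normal number
assert info.tiny < info.eps               # tiny lies far below eps
assert np.isfinite(np.log10(info.tiny))   # log10 of a positive number is finite
assert 1.0 + info.eps != 1.0              # eps is distinguishable at 1.0
```

The reported float128 values fail the first, second, and fourth checks: tiny is negative, its log10 is nan, and eps and tiny appear swapped.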
>> I have chased this a bit (or perhaps 128 bits) further.
>> The problem seems to be that float128 is screwed up in general. I
>> tracked the test error back to lines 95-107 in
>> Here is a short program built from these lines that demonstrates what
>> I believe to be at the root of the test failure.
>> #! /usr/bin/env python
>> import numpy as np
>> import binascii as b
>>
>> def t(type="float"):
>>     max_iterN = 10000
>>     print "\ntesting %s" % type
>>     a = np.array([1.0], type)
>>     one = a
>>     zero = one - one
>>     for _ in xrange(max_iterN):
>>         a = a + a
>>         temp = a + one
>>         temp1 = temp - a
>>         print _+1, b.b2a_hex(temp), temp1
>>         # stop once the added 1.0 has fallen off the significand
>>         if any(temp1 - one != zero):
>>             break
>>
>> if __name__ == '__main__':
>>     t(np.float32)
>>     t(np.float64)
>>     t(np.float128)
>> This tries to find the number of bits in the significand by
>> calculating ((2.0**n)+1.0) for increasing n, and stopping when the sum
>> is indistinguishable from (2.0**n), that is, when the added 1.0 has
>> fallen off the bottom of the significand.
>> My print statement shows the power of 2.0, the hex representation of
>> ((2.0**n)+1.0), and the difference ((2.0**n)+1.0) - (2.0**n), which
>> one expects to be 1.0 up to the point where the added 1.0 is lost.
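[Editorial aside: the same probe can be written as a small helper; `significand_bits` is a hypothetical name, and the loop is just a compact restatement of the doubling test described above (Python 3 syntax):]

```python
import numpy as np

def significand_bits(one):
    # Double a until (a + 1) - a no longer recovers the 1, i.e. until
    # the added 1.0 falls off the bottom of the significand.
    a = one
    bits = 0
    while (a + one) - a == one:
        a = a + a
        bits += 1
    return bits
```

For well-behaved IEEE 754 types this returns 24 for float32 and 53 for float64; a true quad-precision float128 would give 113.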
>> Here are the last few lines printed for float32:
>> 19 49000010 [ 1.]
>> 20 49800008 [ 1.]
>> 21 4a000004 [ 1.]
>> 22 4a800002 [ 1.]
>> 23 4b000001 [ 1.]
>> 24 4b800000 [ 0.]
>> You can see the added 1.0 marching to the right and off the edge at 24
>> bits.
>> Similarly, for float64:
>> 48 42f0000000000010 [ 1.]
>> 49 4300000000000008 [ 1.]
>> 50 4310000000000004 [ 1.]
>> 51 4320000000000002 [ 1.]
>> 52 4330000000000001 [ 1.]
>> 53 4340000000000000 [ 0.]
>> There are 53 bits, just as IEEE 754 would lead us to hope.
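[Editorial aside: the float64 pattern above is easy to reproduce without numpy, using the standard struct module to dump the big-endian IEEE 754 bytes; `double_hex` is an illustrative helper, written in Python 3:]

```python
import struct

def double_hex(x):
    # Big-endian IEEE 754 representation of a Python float, as hex.
    return struct.pack('>d', x).hex()

# The lowest significand bit holds the added 1.0 at 2**52 ...
assert double_hex(2.0**52 + 1.0) == '4330000000000001'
# ... and at 2**53 the 1.0 falls off entirely.
assert double_hex(2.0**53 + 1.0) == '4340000000000000'
```

These are exactly lines 52 and 53 of the float64 output above.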
>> However, for float128:
>> 48 42f00000000000100000000000000000 [1.0]
>> 49 43000000000000080000000000000000 [1.0]
>> 50 43100000000000040000000000000000 [1.0]
>> 51 43200000000000020000000000000000 [1.0]
>> 52 43300000000000010000000000000000 [1.0]
>> 53 43400000000000003ff0000000000000 [1.0]
>> 54 43500000000000003ff0000000000000 [1.0]
>> Something weird happens as we pass 53 bits. I think lines 53 and 54
>> *should* be
> PPC stores long doubles as two doubles. I don't recall exactly how the two
> are used, but the result is that the numbers aren't in the form you would
> expect. Long doubles on the PPC have always been iffy, so it is no surprise
> that machar fails. The failure on SPARC quad precision bothers me more.
> I think the easy thing to do for the 1.3 release is to fix the precision
> test to use a hardwired range of values; I don't think testing the extreme
> small values is necessary to check the power series expansion. But I have
> been leaving that fixup to Pauli.
> Longer term, I think the values in finfo could come from npy_cpu.h and be
> hardwired in.
I don't think that is a good idea: long double support depends on three
things (CPU, toolchain, OS), so hardwiring the values would be a nightmare,
since the number of combinations could easily exceed 100.
> Anyhow, PPC is an exception in the way
> it treats long doubles and I'm not even sure it hasn't changed in some of
> the more recent models.
I have not been able to find a lot of information yet, but maybe part
of the problem is gcc on Mac OS X; we may need to fix some flags. I
have an old PPC Mac mini at home; I will look at it in more detail.