[Numpy-discussion] Array printing and another question
Fri Apr 11 08:35:26 CDT 2008
Thanks for the reply. The formatting code, such as it is, is below.
It uses Martin Jansche's
double.py (http://symptotic.com/mj/double/double.py) and then does some
simple bit twiddling.
I'm still hoping someone can help me find a way to use this format for
arrays of float64s.
import double as _double

def float_to_string(f):
    # Render a float64 as sign(unbiased exponent)0x<13 mantissa hex digits>.
    ftype = _double.fpclassify(f)
    assert ftype == 'NORMAL' or ftype == 'ZERO'
    bits = _double.doubleToRawLongBits(f)
    exponent = (bits >> 52) & 0x7ff
    mantissa = bits & 0x000fffffffffffffL
    sign = bits >> 63
    exponent -= 1023
    pm = '+' if sign == 0 else '-'
    s = '%s(%+05d)0x%013x' % (pm, exponent, mantissa)
    assert len(s) == 23
    return s

def string_to_bits(s):
    # Parse the string back into the raw 64-bit pattern.
    pm = s[0]
    assert pm == '+' or pm == '-'
    sign = 1 if pm == '-' else 0
    assert s[1] == '(' and s[7] == ')'
    e = s[2:7]
    exponent = int(e, 10) + 1023
    assert 0 <= exponent < 0x7ff
    m = s[8:]
    assert m[:2] == '0x'
    mantissa = int(m, 16)
    bits = (sign << 63) + (exponent << 52) + mantissa
    return bits
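For what it's worth, the raw-bits step doesn't strictly need double.py: the standard-library struct module can reinterpret a double as a 64-bit integer. A sketch (the function names here are just illustrative, not from double.py):

```python
import struct

def double_to_bits(f):
    # Reinterpret an IEEE-754 double as an unsigned 64-bit integer,
    # equivalent to double.py's doubleToRawLongBits.
    return struct.unpack('<Q', struct.pack('<d', f))[0]

def format_bits(bits):
    # Split into sign, exponent, and mantissa fields and render them
    # in the same sign(exponent)0xmantissa format as above.
    sign = bits >> 63
    exponent = ((bits >> 52) & 0x7ff) - 1023
    mantissa = bits & 0x000fffffffffffff
    pm = '+' if sign == 0 else '-'
    return '%s(%+05d)0x%013x' % (pm, exponent, mantissa)
```

For arrays, one possibility would be to map such a formatter over the flattened array yourself; whether it can be hooked into NumPy's own array printing is exactly the open question here.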
Hans Meine wrote:
> On Tuesday, 08 April 2008 at 17:22:33, Ken Basye wrote:
>> I've had this happen
>> often enough that I found the first thing I did when an output
>> difference arose was to print the FP in hex to see if the
>> difference was "real" or just a formatting artifact.
> Nice idea - is that code available somewhere / could you post it?
> As a side note, there is a second big source of bug-like experiences with the
> x86 architecture: floating point values are represented with a higher
> precision inside the FPU (e.g. 80 bits instead of 64), but that probably only
> matters for compiled programs where the compiler may (or may not) optimize
> intermediate, truncating FPU->memory operations away (which leads to
> differing results and is one of the most often reported "bugs" of GCC).
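A quick illustration of the formatting-artifact issue mentioned above: two doubles can look identical under a low-precision decimal format while their underlying bit patterns differ. (The helper below uses the stdlib struct module; it's just for illustration.)

```python
import struct

def raw_bits(f):
    # Reinterpret a double as its raw 64-bit pattern.
    return struct.unpack('<Q', struct.pack('<d', f))[0]

a = 0.1 + 0.2
b = 0.3
# Under a 6-significant-digit format, both values print as "0.3"...
looks_equal = ('%g' % a) == ('%g' % b)
# ...yet the raw bit patterns differ (by one unit in the last place).
really_equal = raw_bits(a) == raw_bits(b)
```

So an output diff based on `%g`-style printing would miss this difference, while the bit-level format catches it.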