[Numpy-discussion] Unexpected float96 precision loss
Wed Sep 1 16:15:22 CDT 2010
Wed, 01 Sep 2010 16:26:59 -0400, Michael Gilbert wrote:
> I've been using numpy's float96 class lately, and I've run into some
> strange precision errors.
> >>> x = numpy.array( [0.01] , numpy.float96 )
> I would expect the float96 calculation to also produce 0.0 exactly as
> found in the float32 and float64 examples. Why isn't this the case?
(i) It is not possible to write long double literals in Python:
"float96(0.0001)" in fact means "float96(float64(0.0001))".
(ii) It is not possible to represent the numbers 10^-r, r >= 1, exactly
in base-2 floating point.
So if you write "float96(0.0001)", the result is not the float96 number
closest to 0.0001, but the 96-bit representation of the 64-bit number
closest to 0.0001. Indeed,
>>> float96(0.0001), float96(1.0)/1000
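The two points above can be demonstrated with a short script. As a sketch, it uses `np.longdouble` (the portable spelling; `float96`/`float128` are platform-dependent aliases, and on platforms where long double equals double the final comparison will not show a difference):

```python
import numpy as np
from decimal import Decimal

# Point (i): the literal 0.0001 is parsed as a Python float (an IEEE 754
# double), so np.longdouble(0.0001) merely widens the already-rounded
# double -- it is identical to going through float64 explicitly.
x = np.longdouble(0.0001)
assert x == np.longdouble(np.float64(0.0001))

# Point (ii): 0.0001 has no exact base-2 representation; converting the
# double to Decimal shows the value it actually stores (slightly above 1e-4).
print(Decimal(0.0001))

# Performing the division in long double arithmetic avoids the double
# rounding; on platforms where longdouble is wider than double, x != y.
y = np.longdouble(1.0) / 10000
print(x == y)
```

This is why `float96(0.0001)` and `float96(1.0)/10000` print different values: the first inherits the 64-bit rounding error, the second rounds only once, at long double precision.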