[SciPy-user] precision question re float96 and float128

David M. Cooke cookedm@physics.mcmaster...
Tue Oct 2 16:33:34 CDT 2007


Lev Givon <lev@columbia.edu> writes:

> I have numpy 1.0.3.1 installed on a Pentium 4 system running Linux 2.6
> and an Intel Core 2 Duo system running Mac OS X 10.4.10. On the former,
> the float96 datatype is defined; on the latter, float128. While
> examining the machine limits with the finfo function on the
> aforementioned hosts, I noticed that the limits for float96 and
> float128 were identical. For example, finfo(float96).precision
> and finfo(float128).precision both returned 18. Is this expected?
> Shouldn't the precision of the latter be greater?

The numbers in float96 and float128 refer to the number of bits of
memory the type occupies, not to its precision. On Intel processors
you're not actually getting 96 or 128 bits of precision: both are
padded versions of the 80-bit extended-precision representation used
internally by the floating-point unit. Of those 80 bits, 64 are the
significand, which is what gives you the 18 decimal digits reported by
finfo. Instead of occupying exactly 10 bytes, the type is stored in 12
bytes (on 32-bit systems) or 16 bytes (on 64-bit systems) so that it
stays aligned on a word or doubleword boundary. For portability
purposes, you're better off using longdouble, which maps to whichever
of these types the platform provides.
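
For example, here's a quick check you can run on either machine (a
sketch using the attributes of numpy's finfo; note that finfo counts
the mantissa bits without the leading bit, so the 64-bit significand
shows up as nmant == 63):

import numpy as np

# longdouble maps to float96 or float128 depending on the platform,
# but on x86 both wrap the same 80-bit extended format.
info = np.finfo(np.longdouble)

print(np.dtype(np.longdouble).itemsize)  # bytes of storage: 12 or 16, padding included
print(info.nmant)                        # mantissa bits minus the leading bit: 63 on x86
print(info.precision)                    # decimal digits: 18 on both machines
print(info.eps)                          # ~1.08e-19 == 2**-63 for the 80-bit format

The itemsize differs between the two systems, but nmant, precision,
and eps all come out identical, which is exactly what you observed.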

-- 
|>|\/|<
/--------------------------------------------------------------------------\
|David M. Cooke                      http://arbutus.physics.mcmaster.ca/dmc/
|cookedm@physics.mcmaster.ca
