[SciPy-dev] Question about 64-bit integers being cast to double precision

Stephen Walton stephen.walton at csun.edu
Mon Oct 10 11:18:27 CDT 2005


Travis Oliphant wrote:

>In scipy (as in Numeric), there is the concept of "Casting safely" to a 
>type.  This concept is used when choosing a ufunc, for example. 
>
>My understanding is that a 64-bit integer cannot be cast safely to a 
>double-precision floating point number, because precision is lost in the 
>conversion...The result is that on 64-bit systems, the long double type gets used a 
>lot more.   Is this acceptable? expected?   What do those of you on 
>64-bit systems think?
>  
>
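
For those of you who are on 64-bit systems and want to check, something
along these lines should show which floating type the sqrt ufunc picks
for integer input (a sketch only, written against a numpy-style
interface rather than whatever scipy_core exposes, and since the
promotion rule is exactly the open question, the output will vary):

    import numpy as np

    # Which float type does sqrt choose for each integer width?
    # The answer depends on the package version and the platform.
    for dt in (np.int32, np.int64):
        a = np.array([2], dtype=dt)
        print(dt.__name__, '->', np.sqrt(a).dtype)
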
I am not on a 64-bit system, but I can offer the perspective of someone 
who has thought a lot about floating-point precision in the context of 
both my research and teaching numerical analysis classes for physics 
majors.  To take your example from an experimentalist's viewpoint: 
sqrt(2), where 2 is an integer, has only one significant figure, so 
casting the result to a long double seems like extreme overkill.  The 
numerical analysis community has probably had the greatest influence on 
the design of Fortran, and there sqrt(2) (2 integer) is simply not 
defined.  The user must write sqrt(2.0) to get a REAL result, or 
sqrt(2.0d0) to get a DOUBLE PRECISION result.  These
usually map to IEEE 32- and 64-bit REALs today, respectively, on 32-bit 
hardware, and to IEEE 64- and 128-bit (is there such a thing?) on 64-bit
hardware.  I imagine that if there were an integer square root function 
in Fortran, it would simply round to the nearest integer.  In addition, 
the idea of "casting safely" would, it seems to me, also require sqrt(2) 
to return a double on a 32-bit machine.
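
A rough Python analogue of the Fortran convention, with the user naming
the precision instead of relying on automatic promotion, might look like
this (again a sketch assuming numpy-style scalar types; the width of
long double varies by platform):

    import numpy as np

    # The caller states the precision, as with sqrt(2.0) vs. sqrt(2.0d0).
    single   = np.sqrt(np.float32(2.0))     # roughly REAL / IEEE 32-bit
    double   = np.sqrt(np.float64(2.0))     # roughly DOUBLE PRECISION / IEEE 64-bit
    extended = np.sqrt(np.longdouble(2.0))  # whatever "long double" means locally

    print(single.dtype, double.dtype, extended.dtype)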

The question, I think, is part of a larger one:  to what extent should 
the language leave precision under the user's control, and to what 
extent should it make decisions automatically?  A lot of the 
behind-the-scenes work in the Fortran routines from Netlib that are now 
part of Scipy involves using the machine precision to decide on step 
sizes and other algorithmic choices.  Those choices become wrong if the 
underlying language changes precision without telling the user, à la 
C's old habit of automatically casting all floats to doubles.
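
As a concrete illustration (a generic sketch, not code from any
particular Netlib routine): a typical forward-difference step is taken
proportional to the square root of the machine epsilon of the working
type, so silently changing that type silently changes the step:

    import numpy as np

    # A common rule of thumb: forward-difference step ~ sqrt(machine epsilon).
    for dt in (np.float32, np.float64, np.longdouble):
        eps = np.finfo(dt).eps
        print(dt.__name__, 'eps =', eps, ' step ~', np.sqrt(eps))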

With all that, my vote on Travis's specific question:  if conversion of 
an N-bit integer in scipy_core is required, it gets converted to an 
N-bit float.  The only case in which precision will be lost is when the 
integer is large enough to require more than (N - e) bits for its 
representation, where e is the number of bits in the exponent of the 
floating-point format.  Those who really need to control
precision should, in my view, create arrays of the appropriate type to 
begin with.
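
To make that concrete for 64-bit integers and IEEE doubles, where
N - e = 64 - 11 = 53 significant bits: the first integer that fails to
round-trip through a float64 is 2**53 + 1.  A quick check, assuming
numpy-style types:

    import numpy as np

    exact = np.int64(2**53)       # representable exactly in a float64
    lossy = np.int64(2**53 + 1)   # needs 54 significant bits

    print(np.int64(np.float64(exact)) == exact)   # True
    print(np.int64(np.float64(lossy)) == lossy)   # False: rounds back to 2**53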

I suppose these sorts of questions are why there are now special-purpose 
libraries for fixed-precision numbers.



