[SciPy-dev] Question about 64-bit integers being cast to double precision

Fernando Perez Fernando.Perez at colorado.edu
Wed Oct 26 12:38:26 CDT 2005


Charles R Harris wrote:

[...]

> Now, python does the following:
>
> >>> from math import *
> >>> sqrt(2)
> 1.4142135623730951
> 
> and if we are going to overload sqrt we should keep this precision. Do
> we really want to make a distinction in this case between math.sqrt and
> Numeric.sqrt? I myself don't think so. On the other hand, it is
> reasonable that scipy not promote float types in this situation.
> Integral types remain a problem. What about uint8 vs uint64, for
> instance?

Again, I find it simplest to think about this problem in terms of 
exact/approximate numbers.  All integer types (of any bit-width) are exact; 
all floating-point types are approximate.  The question is then how to 
handle functions, which can be classified (in terms of their domain/range 
relation) as:

1. f : exact -> exact
2. f : exact -> approximate

etc.

My argument is that for #2, there should be upcasting to the widest possible 
approximate type, in an attempt to preserve as much of the original 
information as we can.  For example, sqrt(2) should upcast to double, because 
truncation to integer makes very little practical sense.
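
To make this concrete, here is a quick sketch of the behavior I'm arguing 
for, written against a hypothetical numpy-style array module (the np.* 
names and the exact output formatting are illustrative assumptions, not a 
claim about what scipy currently does):

>>> import numpy as np
>>> np.abs(np.array([-3], dtype=np.int64)).dtype   # exact -> exact
dtype('int64')
>>> np.sqrt(np.array([2], dtype=np.int64)).dtype   # exact -> approximate
dtype('float64')

With the upcast, sqrt of an integer input carries the same double 
precision as math.sqrt(2) above.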

The case of accumulators is special, because they are of type 1 above, but the 
result may not (and often doesn't) fit in the input type.  Travis already 
agreed that in this case, an upcast was a reasonable compromise.
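
Sketching it the same way (the default accumulator width here is an 
assumption, and would likely be platform-dependent):

>>> a = np.array([100, 100, 100], dtype=np.int8)
>>> a.sum().dtype    # 300 does not fit in int8, so accumulate wider
dtype('int64')

The elementwise type is exact, so the accumulator stays exact; it just 
has to be wide enough to hold the result.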

However, for functions of the kind

3. f : approx -> approx

there should in general be no upcasting (except for accumulators, as we've 
discussed).  Computing a*b on two float arrays should certainly not 
produce a result in a wider type, which may not even fit in memory.
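
Again as a sketch (same assumed module):

>>> x = np.ones(3, dtype=np.float32)
>>> (x * x).dtype    # approx * approx: stay in the input precision
dtype('float32')

Silently returning a float64 result here would double the memory needed 
for the output of every single-precision multiply.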

Just my opinion.

Cheers,

f



