[SciPy-dev] Question about 64-bit integers being cast to double precision
Charles R Harris
charlesr.harris at gmail.com
Wed Oct 26 13:48:14 CDT 2005
On 10/26/05, Fernando Perez <Fernando.Perez at colorado.edu> wrote:
> Charles R Harris wrote:
> > Now, python does the following:
> > >>> from math import *
> > >>> sqrt(2)
> > 1.4142135623730951
> > and if we are going to overload sqrt we should keep this precision. Do
> > we really want to make a distinction in this case between math.sqrt and
> > Numeric.sqrt ? I myself don't think so. On the other hand, it is
> > reasonable that scipy not promote float types in this situation.
> > Integral types remain a problem. What about uint8 vs uint64 for
> > instance?
> Again, I find it simplest to think about this problem in terms of
> exact/approximate numbers. All integer types (of any bit-width) are exact,
> all float numbers are approximate. The question is then how to handle
> functions, which can be (in terms of their domain/range relation):
> 1. f : exact -> exact
> 2. f : exact -> approximate
> My argument is that for #2, there should be upcasting to the widest
> approximate type, in an attempt to preserve as much of the original
> information as we can. For example, sqrt(2) should upcast to double,
> truncation to integer makes very little practical sense.
Yes, I agree with this. The only problem I see is if someone wants to save
space when taking the sqrt of an integral array. There are at least three options:
1. cast the result to a float
2. cast the argument to a float
3. use a special sqrtf function
The first two options use more temporary space, take more time, and look
uglier (IMHO). On the other hand, the needed commands are already
implemented. The last option is clear and concise, but needs a new ufunc.
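The three options can be sketched like so (a hedged illustration in present-day NumPy; the `dtype=` keyword stands in for the dedicated sqrtf ufunc discussed above, which is an assumed analogue, not the proposal itself):

```python
import numpy as np

a = np.arange(4, dtype=np.int64)

# Option 1: compute in double, then cast the result down to save space
r1 = np.sqrt(a).astype(np.float32)

# Option 2: cast the argument first, so the ufunc runs in single precision
r2 = np.sqrt(a.astype(np.float32))

# Option 3: a dedicated single-precision sqrt; the closest modern
# spelling is requesting the loop dtype directly
r3 = np.sqrt(a, dtype=np.float32)

print(r1.dtype, r2.dtype, r3.dtype)  # float32 float32 float32
```

Options 1 and 2 each allocate an extra temporary (the float64 result, or the float32 copy of the argument), which is the overhead the paragraph above objects to.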