[SciPy-dev] Question about 64-bit integers being cast to double precision
Charles R Harris
charles.harris at sdl.usu.edu
Wed Oct 26 11:53:57 CDT 2005
On Wed, 2005-10-26 at 09:00 -0700, Stephen Walton wrote:
> Arnd Baecker wrote:
> >I also think that double should be kept as default.
> >If I understand things correctly, both normal python and
> >all the libraries for scipy can only deal with that at the moment.
> I respectfully disagree that double should be the default target for
> upcasts. This is a holdover from C and was a bad decision when made for
> that language. And, as Pearu points out, it has dire consequences for
> storage. If I get a 16 Megapixel image from HST with two-byte integers,
> I definitely would not want that image upcast to 64 or, heaven forfend,
> 128 bits the first time I did an operation on it.
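The storage concern in the quote is easy to demonstrate with today's NumPy (the successor of Numeric/scipy_core); the array here is an illustrative stand-in for the HST image, not real data:

```python
import numpy as np

# A hypothetical 16-megapixel, two-byte-integer image, as in the HST example.
image = np.zeros((4096, 4096), dtype=np.uint16)
print(image.nbytes)       # 33554432 bytes, i.e. 32 MiB as uint16

# Upcasting the same pixels to 64-bit floats quadruples the storage:
upcast = image.astype(np.float64)
print(upcast.nbytes)      # 134217728 bytes, i.e. 128 MiB

# Ordinary integer arithmetic does not upcast; the dtype is preserved:
print((image + 1).dtype)  # uint16
```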
I think there are two goals here: 1) it just works; 2) it is efficient.
These goals are not always compatible. In order to just work, certain
defaults need to be assumed. Python works like that; it is one of the
reasons it is so convenient. On the other hand, efficiency, space
efficiency in particular, requires greater control on the part of the
programmer, who has to take the trouble to pick the types he wants to use,
trading off precision, space, and speed. So I think that we
should choose reasonable defaults that carry on the Python spirit, while
leaving open options for the programmer who wants more control. How to
do this without making a mess is the question.
Now, Python does the following:
>>> from math import *
and if we are going to overload sqrt we should keep this precision. Do
we really want to make a distinction in this case between math.sqrt and
Numeric.sqrt ? I myself don't think so. On the other hand, it is
reasonable that scipy not promote float types in this situation.
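For float inputs, today's NumPy behaves roughly as argued here: the overloaded sqrt agrees with math.sqrt for doubles, and a single-precision input is not promoted. This is a sketch of current behaviour, not of the 2005 scipy_core code under discussion:

```python
import math
import numpy as np

# For double-precision input, the array sqrt matches math.sqrt exactly:
x = 2.0
print(float(np.sqrt(x)) == math.sqrt(x))  # True

# A float32 input is not promoted; the result stays single precision:
y = np.float32(2.0)
print(np.sqrt(y).dtype)                   # float32
```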
Integral types remain a problem. What about uint8 vs uint64 for
instance? Maybe we should either require a cast of integral types to a
float type for arrays or define distinct functions like sqrtf and
sqrtl to handle this. I note that a complaint has been made that this
is unwieldy and a throwback, but I don't think so. The integer case is,
after all, ambiguous. The automatic selection of type only really makes
sense for floats, or if we explicitly state that the maximum necessary
precision, and no more, should be maintained. But what happens then for
int64 when we have a machine whose default float is double double?
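For what it is worth, the resolution NumPy eventually adopted picks a float type just wide enough for the integer input, so uint8 and uint64 give results of different widths. A sketch of today's behaviour, not a proposal from this thread:

```python
import numpy as np

small = np.array([4, 9], dtype=np.uint8)
large = np.array([4, 9], dtype=np.uint64)

# sqrt has no integer loops, so each input is cast to a float type
# wide enough to hold its values; the two results differ in precision:
print(np.sqrt(small).dtype)  # a narrow float (float16 on current NumPy)
print(np.sqrt(large).dtype)  # float64, even though uint64 values cannot
                             # all be represented exactly in a double
```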