[SciPy-dev] Question about 64-bit integers being cast to double precision
stephen.walton at csun.edu
Wed Oct 26 11:00:34 CDT 2005
Arnd Baecker wrote:
>I also think that double should be kept as default.
>If I understand things correctly, both normal python and
>all the libraries for scipy can only deal with that at the moment.
I respectfully disagree that double should be the default target for
upcasts. This is a holdover from C and was a bad decision when it was made
for that language. And, as Pearu points out, it has dire consequences for
storage. If I get a 16-megapixel image from HST stored as two-byte
integers, I definitely would not want that image upcast to 64 or, heaven
forfend, 128 bits the first time I did an operation on it.
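To illustrate the storage point above, here is a small sketch (not from the original post) using NumPy's promotion rules; the array shape is a made-up stand-in for a 16-megapixel image, not actual HST data:

```python
import numpy as np

# A "16-megapixel" image stored as two-byte integers: ~32 MB.
image = np.zeros((4000, 4000), dtype=np.int16)
print(image.nbytes)   # 32000000 bytes

# Combining it with a float64 operand upcasts the whole result
# to double precision, quadrupling the memory footprint to ~128 MB.
bias = np.ones(1, dtype=np.float64)
result = image + bias
print(result.dtype)   # float64
print(result.nbytes)  # 128000000 bytes
```

A single arithmetic step on the two-byte image thus allocates four times the storage of the original data, which is the cost being objected to.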