[SciPy-dev] Question about 64-bit integers being cast to double precision
cimrman3 at ntc.zcu.cz
Thu Oct 27 01:20:27 CDT 2005
Charles R Harris wrote:
> On Wed, 2005-10-26 at 09:00 -0700, Stephen Walton wrote:
>>Arnd Baecker wrote:
>>>I also think that double should be kept as default.
>>>If I understand things correctly, both normal python and
>>>all the libraries for scipy can only deal with that at the moment.
>>I respectfully disagree that double should be the default target for
>>upcasts. This is a holdover from C and was a bad decision when made for
>>that language. And, as Pearu points out, it has dire consequences for
>>storage. If I get a 16 Megapixel image from HST with two-byte integers,
>>I definitely would not want that image upcast to 64 or, heaven forfend,
>>128 bits the first time I did an operation on it.
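For concreteness, the storage cost being described can be sketched in today's NumPy (the 4096x4096 shape standing in for a hypothetical 16-megapixel two-byte image):

```python
import numpy as np

# Hypothetical 16-megapixel, two-byte integer image:
img = np.zeros((4096, 4096), dtype=np.int16)
print(img.nbytes // 2**20)    # 32 MiB as int16

# A single floating-point operation upcasts the result to double,
# quadrupling the storage:
scaled = img + 0.5
print(scaled.dtype)           # float64
print(scaled.nbytes // 2**20) # 128 MiB
```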
> I think there are two goals here: 1) it just works 2) it is efficient.
> These goals are not always compatible. In order to just work, certain
> defaults need to be assumed. Python works like that, it is one of the
> reasons it is so convenient. On the other hand, efficiency, space
> efficiency in particular, requires greater control on the part of the
> programmer, who has to take the trouble to pick the types he wants to
> use, making a trade-off between precision, space, and speed. So I think that we
> should choose reasonable defaults that carry on the Python spirit, while
> leaving open options for the programmer who wants more control. How to
> do this without making a mess is the question.
Maybe the arrays could have some 'manual type control' flag (which could
be switched on, e.g., when the type is stated explicitly in an array
constructor) - then 1) everything would just work and 2) a user could
always set 'manual on', causing all ops on that array to return an array
of the same (or a given (via rtype?)) type. I know, this still does not
solve how to do the 'it just works' part.
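The proposed flag never existed as such, but the two halves of the idea - stating the type explicitly in the constructor, and requesting a result type per operation - can be sketched with modern NumPy (the rtype-style request shown via the ufunc dtype keyword):

```python
import numpy as np

# 'Manual on': the type is stated explicitly in the constructor,
# and operations between same-dtype arrays preserve it.
a = np.arange(10, dtype=np.int16)
b = np.ones(10, dtype=np.int16)
c = a + b
print(c.dtype)  # int16

# An rtype-like request: ask the ufunc for a specific result type.
d = np.add(a, b, dtype=np.float64)
print(d.dtype)  # float64
```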
with just my 2 cents,