[Numpy-discussion] Do we want scalar casting to behave as it does at the moment?
Tue Jan 8 15:24:52 CST 2013
2013/1/8 Sebastian Berg <email@example.com>:
> On Tue, 2013-01-08 at 19:59 +0000, Nathaniel Smith wrote:
>> On 8 Jan 2013 17:24, "Andrew Collette" <firstname.lastname@example.org> wrote:
>> > Hi,
>> > > I think you are voting strongly for the current casting rules, because
>> > > they make it less obvious to the user that scalars are different from
>> > > arrays.
>> > Maybe this is the source of my confusion... why should scalars be
>> > different from arrays? They should follow the same rules, as closely
>> > as possible. If a scalar value would fit in an int16, why not add it
>> > using the rules for an int16 array?
>> The problem is that the rules for arrays - and for every other part
>> of numpy in general - are that we *don't* pick types based on values.
>> Numpy always uses input types to determine output types, not input
>> values:
>> # This value fits in an int8
>> In [1]: a = np.array([1])
>> # And yet...
>> In [2]: a.dtype
>> Out[2]: dtype('int64')
>> In [3]: small = np.array([1], dtype=np.int8)
>> # Computing 1 + 1 doesn't need a large integer... but we use one
>> In [4]: (small + a).dtype
>> Out[4]: dtype('int64')
>> Python scalars have unambiguous types: a Python 'int' is a C
>> 'long', and a Python 'float' is a C 'double'. And these are the types
>> that np.array() converts them to. So it's pretty unambiguous that
>> "using the same rules for arrays and scalars" would mean, ignore the
>> value of the scalar, and in expressions like
>> np.array([1, 2, 3], dtype=np.int8) + 1
>> we should always upcast to int32/int64. The problem is that this makes
>> working with narrow types very awkward for no real benefit, so
>> everyone pretty much seems to want *some* kind of special case. These
>> are both absolutely special cases:
>> numarray through numpy 1.5: in a binary operation, if one operand has
>> ndim==0 and the other has ndim>0, ignore the width of the ndim==0
>> operand.
>> numpy 1.6, your proposal: in a binary operation, if one operand has ndim==0
>> and the other has ndim>0, downcast the ndim==0 item to the smallest
>> width that is consistent with its value and the other operand's type.
> Well, that leaves the maybe not quite implausible proposal of saying
> that numpy scalars behave like arrays with ndim>0, but python scalars
> behave like they do in 1.6, to allow for easier working with narrow
> types.
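To make the contrast between those two special cases concrete, here is a
minimal sketch (the array contents and the scalar 1000 are just
illustrative; the printed dtypes are what 1.6's value-based rule gives):

    import numpy as np

    arr = np.array([1, 2, 3], dtype=np.int8)

    # numarray-through-1.5 rule: the width of the ndim==0 operand is
    # ignored, so the array's dtype always wins:
    #   (arr + 1).dtype    -> int8
    #   (arr + 1000).dtype -> int8 (the value wraps around!)

    # numpy 1.6 rule: downcast the scalar to the smallest width
    # consistent with its value and the other operand's type:
    print((arr + 1).dtype)     # int8  -- 1 fits in an int8
    print((arr + 1000).dtype)  # int16 -- 1000 needs at least an int16
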
I know I already said it, but I really think it'd be a bad idea to have
Python scalars and Numpy scalars behave differently, because I think
most people would expect them to behave the same (at least once they
know what dtype a Python float / int corresponds to). Handling them
differently could lead to very tricky bugs.
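For instance, here is a sketch of the kind of divergence that proposal
would create (hypothetical results: this is what the proposal would
imply, not what any released numpy actually does):

    import numpy as np

    arr = np.array([1, 2, 3], dtype=np.int8)

    # A Python scalar would keep 1.6-style value-based casting...
    r1 = arr + 1            # -> int8 under the proposal

    # ...but a numpy scalar would be treated like an ndim>0 array of
    # its own dtype, so the same value would upcast:
    r2 = arr + np.int64(1)  # -> int64 under the proposal

    # The trap: a value pulled out of another array (e.g. the np.int64
    # that some_int64_array.max() returns) would upcast, while the same
    # value written as a Python literal would not.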