[Numpy-discussion] Do we want scalar casting to behave as it does at the moment?
Mon Nov 12 22:15:48 CST 2012
On Monday, November 12, 2012, Olivier Delalleau wrote:
>> > Hi,
>> > I wanted to check that everyone knows about and is happy with the
>> > scalar casting changes from 1.6.0.
>> > Specifically, the rules for (array, scalar) casting have changed such
>> > that the resulting dtype depends on the _value_ of the scalar.
>> > Mark W has documented these changes here:
>> > http://docs.scipy.org/doc/numpy/reference/ufuncs.html#casting-rules
>> > Specifically, as of 1.6.0:
>> > In : arr = np.array([1.], dtype=np.float32)
>> > In : (arr + (2**16-1)).dtype
>> > Out: dtype('float32')
>> > In : (arr + (2**16)).dtype
>> > Out: dtype('float64')
>> > In : arr = np.array([1.], dtype=np.int8)
>> > In : (arr + 127).dtype
>> > Out: dtype('int8')
>> > In : (arr + 128).dtype
>> > Out: dtype('int16')
>> > There's discussion about the changes here:
>> > http://mail.scipy.org/pipermail/numpy-discussion/2011-March/055156.html
>> > It seems to me that this change is hard to explain, and does what you
>> > want only some of the time, making it a false friend.
>> The old behaviour was that in these cases, the scalar was always cast
>> to the type of the array, right? So
>> np.array([1], dtype=np.int8) + 256
>> returned 1? Is that the behaviour you prefer?
>> I agree that the 1.6 behaviour is surprising and somewhat
>> inconsistent. There are many places where you can get an overflow in
>> numpy, and in all the other cases we just let the overflow happen. And
>> in fact you can still get an overflow with arr + scalar operations, so
>> this doesn't really fix anything.
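A minimal illustration of that point: under the 1.6 value-based rules, a scalar that fits the array's dtype triggers no upcast, so the addition can still wrap silently.

```python
import numpy as np

arr = np.array([127], dtype=np.int8)
# 127 fits in int8, so the result dtype stays int8 under the 1.6
# value-based rules -- and the addition silently wraps around.
out = arr + 127
print(out, out.dtype)  # [-2] int8
```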
>> I find the specific handling of unsigned -> signed and float32 ->
>> float64 upcasting confusing as well. (Sure, numpy doesn't consider
>> casting 2**16 to float32 "safe", but the value doesn't *overflow*, it
>> just gives you 2.0**16... if I'm using float32 then I presumably don't care that
>> much about exact representability, so it's surprising that numpy is
>> working to enforce it, and definitely a separate decision from what to
>> do about overflow.)
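A quick sanity check on the representability point (worth noting: float32 actually represents every integer up to 2**24 exactly, so 2**16 itself loses nothing; the upcast is driven by numpy's casting rules, not by precision loss at that value):

```python
import numpy as np

# float32 has a 24-bit significand, so every integer up to 2**24
# is exactly representable; 2**16 itself loses no precision.
assert np.float32(2**16) == 2**16
assert np.float32(2**24) == 2**24
# 2**24 + 1 is the first positive integer float32 cannot hold exactly.
assert np.float32(2**24 + 1) != 2**24 + 1
```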
>> None of those threads seem to really get into the question of what the
>> best behaviour here *is*, though.
>> Possibly the most defensible choice is to treat ufunc(arr, scalar)
>> operations as performing an implicit cast of the scalar to arr's
>> dtype, and using the standard implicit casting rules -- which I think
>> means, raising an error if !can_cast(scalar, arr.dtype, casting="safe").
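A rough sketch of that proposal (a hypothetical helper, not anything in numpy: `np.min_scalar_type` stands in for value-based inspection of the scalar, and note that it picks unsigned types for non-negative ints, so this check is somewhat more conservative than a hand-written range test):

```python
import numpy as np

def strict_add(arr, scalar):
    """Hypothetical sketch of the proposed rule: refuse the operation
    unless the scalar can be safely cast to the array's dtype."""
    # min_scalar_type inspects the scalar's value; it chooses unsigned
    # types for non-negative ints, which makes this check conservative.
    if not np.can_cast(np.min_scalar_type(scalar), arr.dtype, casting="safe"):
        raise TypeError(f"cannot safely cast {scalar!r} to {arr.dtype}")
    return arr + np.asarray(scalar, dtype=arr.dtype)

arr = np.array([1], dtype=np.int16)
print(strict_add(arr, 5).dtype)   # int16 -- 5 casts safely, no upcast
# strict_add(arr, 70000) would raise TypeError: 70000 needs 32 bits
```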
> I like this suggestion. It may break some existing code, but I think it'd
> be for the best. The current behavior can be very confusing.
> -=- Olivier
"break some existing code"
I really should set up an email filter for this phrase and have it send
back an email automatically: "Are you nuts?!"
We just resolved an issue where the "safe" casting rule unexpectedly broke
existing code with regards to in-place operations. The solution was to
warn about the change in the upcoming release and to throw errors in a
later release. Playing around with fundamental things like this needs to
be done methodically and carefully.