[Numpy-discussion] Do we want scalar casting to behave as it does at the moment?
Wed Jan 2 05:24:10 CST 2013
This discussion seems to have petered out without reaching consensus
one way or another. It seems like an important issue, so I've opened
an issue to track it. Hopefully this way we'll at least not forget
about it; I also tried to summarize the main points there and would
welcome comments.
On Mon, Nov 12, 2012 at 7:54 PM, Matthew Brett <email@example.com> wrote:
> I wanted to check that everyone knows about and is happy with the
> scalar casting changes from 1.6.0.
> Specifically, the rules for (array, scalar) casting have changed such
> that the resulting dtype depends on the _value_ of the scalar.
> Mark W has documented these changes here:
> Specifically, as of 1.6.0:
> In : arr = np.array([1.], dtype=np.float32)
> In : (arr + (2**16-1)).dtype
> Out: dtype('float32')
> In : (arr + (2**16)).dtype
> Out: dtype('float64')
> In : arr = np.array([1.], dtype=np.int8)
> In : (arr + 127).dtype
> Out: dtype('int8')
> In : (arr + 128).dtype
> Out: dtype('int16')
> There's discussion about the changes here:
> It seems to me that this change is hard to explain, and does what you
> want only some of the time, making it a false friend.
> Is it the right behavior for numpy 2.0?
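The quoted examples can be reproduced with a short sketch. Note that the outcome is version-dependent: under the 1.6-era value-based rules the large scalar forces an upcast to float64, while under NEP 50 (adopted for NumPy 2.0) Python-int scalars no longer upcast the array, so both sums stay float32. The variable names below are illustrative, not from the original post.

```python
import numpy as np

# Value-based scalar casting, as introduced in NumPy 1.6: the result
# dtype of (float32 array + Python int) depends on the *value* of the
# scalar, not just its type.
arr = np.array([1.], dtype=np.float32)

small = (arr + (2**16 - 1)).dtype  # scalar representable in float32
large = (arr + 2**16).dtype        # scalar just past that threshold

print(small)  # float32 on all NumPy versions
print(large)  # float64 under 1.x value-based rules; float32 under NEP 50
```

The same value sensitivity shows up for integers: with an int8 array, adding 127 keeps int8, while adding 128 (out of int8 range) promotes to int16 under the 1.x rules.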