[Numpy-discussion] Do we want scalar casting to behave as it does at the moment?
Thu Jan 3 18:26:46 CST 2013
On 3 Jan 2013 23:39, "Andrew Collette" <email@example.com> wrote:
> > Consensus in that bug report seems to be that for array/scalar
> >     np.array(..., dtype=np.int8) + 1000  # 1000 can't be represented as an int8!
> > we should raise an error, rather than either silently upcasting the
> > result (as in 1.6 and 1.7) or silently downcasting the scalar (as in
> > 1.5 and earlier).
> I have run into this a few times as a NumPy user, and I just wanted to
> comment that (in my opinion), having this case generate an error is
> the worst of both worlds. The reason people can't decide between
> rollover and promotion is because neither is objectively better. One
> avoids memory inflation, and the other avoids losing precision. You
> just need to pick one and document it. Kicking the can down the road
> to the user, and making him/her explicitly test for this condition, is
> not a very good solution.
> What does this mean in practical terms for NumPy users? I personally
> don't relish the choice of always using numpy.add, or always wrapping
> my additions in checks for ValueError.
To be clear: we're only talking here about the case where you have a mix of
a narrow dtype in an array and a scalar value that cannot be represented in
that narrow dtype. If both sides are arrays then we continue to upcast as
normal. So my impression is that this means very little in practical terms,
because this is a rare and historically poorly supported situation.
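To make the distinction concrete, here is a minimal sketch (the variable names
and the int16 example dtype are illustrative, not from the thread; note that
what the array/scalar case does depends on your NumPy version, which is
exactly what this thread is debating):

```python
import numpy as np

# Array/array mixing: dtypes promote as usual, and this is not under
# discussion. int8 + int16 promotes to int16 in all NumPy versions.
a = np.array([1, 2], dtype=np.int8)
b = np.array([1000, 2000], dtype=np.int16)
print((a + b).dtype)  # int16

# Array/scalar mixing where the scalar is out of range for the array's
# dtype: this is the contested case. Depending on the NumPy version it
# may upcast the result (1.6/1.7), wrap the scalar (1.5 and earlier),
# or raise an error (the proposal here; modern NumPy raises OverflowError).
try:
    print((a + 1000).dtype)
except OverflowError:
    print("1000 cannot be represented as an int8")
```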
But if this is something you're running into in practice then you may have
a better idea than us about the practical effects. Do you have any examples
where this has come up that you can share?