[Numpy-discussion] Do we want scalar casting to behave as it does at the moment?
Chris Barker - NOAA Federal
Tue Jan 8 15:17:58 CST 2013
On Tue, Jan 8, 2013 at 12:43 PM, Alan G Isaac <email@example.com> wrote:
>> New users don't use narrow-width dtypes... it's important to remember
> 1. I think the first statement is wrong.
> Control over dtypes is a good reason for
> a new user to consider NumPy.
> Because NumPy supports broadcasting,
> it is natural for array-array operations and
> scalar-array operations to be consistent.
> I believe anything else will be too confusing.
Theoretically true -- but in practice, the problem arises because it
is easy to write literals with the standard Python scalars, so one is
very likely to want to do:
arr = np.zeros((m,n), dtype=np.uint8)
arr += 3
and not want an upcast.
I don't think we want to require that to be spelled:
arr += np.array(3, dtype=np.uint8)
so that defines desired behaviour for array<->scalar.
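A quick sanity check (run against current NumPy, not a proposal) confirms
that a small Python int scalar does not upcast a narrow-dtype array:

```python
import numpy as np

# In-place addition with a Python int: the array's narrow
# dtype is preserved, no upcast to the default integer type.
arr = np.zeros((2, 3), dtype=np.uint8)
arr += 3
print(arr.dtype)  # uint8

# The same holds for the non-in-place form with a small scalar.
out = arr + 3
print(out.dtype)  # uint8
```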
but what should this do?
arr1 = np.zeros((m,n), dtype=np.uint8)
arr2 = np.zeros((m,n), dtype=np.uint16)
arr1 + arr2
arr2 + arr1
upcast in both cases?
use the type of the left operand?
raise an exception?
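For what it's worth, NumPy's actual array<->array behaviour is the first
option -- it upcasts, and symmetrically, so operand order doesn't matter:

```python
import numpy as np

arr1 = np.zeros((2, 3), dtype=np.uint8)
arr2 = np.zeros((2, 3), dtype=np.uint16)

# Array<->array promotion is symmetric: both orders give the
# wider of the two integer types.
print((arr1 + arr2).dtype)  # uint16
print((arr2 + arr1).dtype)  # uint16
```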
matching the array<->scalar approach would mean always keeping the
smallest type, which is unlikely to be what is wanted.
Having it be dependent on order would be really ripe for confusion.
raising an exception might have been the best idea from the beginning
(though I wouldn't want that in the array<->scalar case).
So perhaps having a scalar/array distinction, while quite impure, is
the best compromise.
NOTE: no matter how you slice it, at some point reducing operations
produce something different (that can no longer be reduced), so I do
think it would be nice for rank-zero arrays and scalars to be the same
thing (in this regard and others).
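For illustration, the current scalar vs. rank-zero-array distinction the
note points at is easy to see directly:

```python
import numpy as np

s = np.uint8(3)                   # a NumPy scalar
z = np.array(3, dtype=np.uint8)   # a rank-zero array

# Both are zero-dimensional, but NumPy treats them as distinct types.
print(type(s))                    # <class 'numpy.uint8'>
print(type(z))                    # <class 'numpy.ndarray'>
print(z.ndim)                     # 0
print(np.isscalar(s), np.isscalar(z))  # True False
```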
Christopher Barker, Ph.D.
Emergency Response Division
NOAA/NOS/OR&R
7600 Sand Point Way NE
Seattle, WA 98115
(206) 526-6959 voice
(206) 526-6329 fax
(206) 526-6317 main reception