[Numpy-discussion] Do we want scalar casting to behave as it does at the moment?

Dag Sverre Seljebotn d.s.seljebotn@astro.uio...
Mon Jan 7 15:17:45 CST 2013


On 2013-01-07 21:50, Andrew Collette wrote:
> Hi Matthew,
>
>> Just to be clear, you mean you might have something like this?
>>
>> def my_func('array_name', some_offset):
>>     arr = load_somehow('array_name') # dtype hitherto unknown
>>     return arr + some_offset
>>
>> ?  And the problem is that it fails late?   Is it really better that
>> something bad happens for the addition than that it raises an error?
>>
>> You'll also often get an error when trying to add structured dtypes,
>> but maybe you can't return these from a 'load'?
>
> In this specific case I would like to just use "+" and say "We add
> your offset using the NumPy rules," which is a problem if there are no
> NumPy rules for addition in the specific case where some_offset
> happens to be a scalar and not an array, and also slightly larger than
> arr.dtype can hold. I personally prefer upcasting to some reasonable
> type big enough to hold some_offset, as I described earlier, although
> that's not crucial.
>
> But I think we're getting a little caught up in the details of this
> example.  My basic point is: yes, people should be careful to check
> dtypes, etc. where it's important to their application; but people who
> want to rely on some reasonable NumPy-supplied default behavior should
> be excused from doing so.
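A minimal sketch of the case being discussed: an int8 array plus a Python scalar that does not fit in int8. Note that the behaviour is version-dependent; the value-based upcasting debated in this thread was NumPy's rule at the time, while NEP 50 (NumPy >= 2.0) later changed the same operation to raise an error instead:

```python
import numpy as np

arr = np.array([10, 20], dtype=np.int8)

try:
    # 300 does not fit in int8.
    # Pre-NEP-50 NumPy: value-based casting silently upcasts the result
    # to a type big enough for the scalar (int16 here).
    # NumPy >= 2.0 (NEP 50): this raises OverflowError instead.
    result = arr + 300
    print("upcast to", result.dtype)
except OverflowError as exc:
    print("raised:", exc)
```

Either outcome illustrates the point of the thread: there is no single obvious "NumPy rule" for a scalar that exceeds the array's dtype.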

But the default float dtype is double, and the default integer dtype is
at least int32.

So if you rely on NumPy-supplied default behaviour you are fine!

If you specify a smaller dtype for your arrays, you have some reason to
do that. If you had enough memory to not worry about automatic
conversion from int8 to int16, you would have specified it as int16 in
the first place when you created the array.
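Dag's point can be checked directly: arrays created without an explicit dtype get NumPy's defaults (C double for floats, at least a 32-bit integer for ints), so a too-small dtype only enters the picture when the user asked for one:

```python
import numpy as np

# Default dtypes when none is specified:
print(np.array([1.5]).dtype)   # float64 (C double)
print(np.array([1]).dtype)     # platform default int, at least 32 bits

# A small dtype only appears when explicitly requested:
small = np.array([1, 2], dtype=np.int8)
print(small.dtype)             # int8
```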

Dag Sverre

