[SciPy-dev] [SciPy-user] operations on int8 arrays
Travis Oliphant
oliphant at ee.byu.edu
Wed Oct 19 15:21:56 CDT 2005
Jon Peirce wrote:
>>>> Scipy arrays with dtype=uint8 or int8 seem to be
>>>> mathematically-challenged on my machine (AMD64 WinXP running python
>>>> 2.4.2, scipy core 0.4.1). Simple int (and various others) appear fine.
>>>>
>>>> >>> import scipy
>>>> >>> xx = scipy.array([100,100,100],scipy.int8)
>>>> >>> print xx.sum()
>>>> 44
>>>>
>>>> >>> xx = scipy.array([100,100,100],scipy.int)
>>>> >>> print xx.sum()
>>>> 300
>> This is not a bug. In the first line, you are telling the computer to
>> add up 8-bit integers. The result does not fit in an 8-bit integer ---
>> thus you are computing modulo 256.
>>
>> I suspect what you wanted for the first case is
>>
>> xx.sum(rtype=int) -- this will "reduce" using the long integer type on
>> your platform.
>>
>> -Travis
>
> Right, yes. I find it a bit unintuitive though that the resulting
> array isn't automatically converted to a suitable type where
> necessary. Even less intuitive is this:
> >>>import scipy
> >>>xx=scipy.array([100,100,100],scipy.int8)
> >>>print xx.mean()
> 14.666666666666666
Hmm. This is true. But it is consistent with the behavior of
true_divide, which mean uses.
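The surprising mean can be reproduced step by step. A sketch using
present-day NumPy in place of scipy core (the int8 wraparound behavior
is the same):

```python
import numpy as np

xx = np.array([100, 100, 100], np.int8)

# Summing with an int8 accumulator wraps modulo 256:
# 100 + 100 = 200 -> -56 in int8, then -56 + 100 = 44.
s = xx.sum(dtype=np.int8)          # 44

# mean then true-divides that wrapped sum by the element count:
print(np.true_divide(s, xx.size))  # 44 / 3 = 14.666666666666666
```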
It would be possible to make the default reduce type for integers the
long integer type: 32-bit on 32-bit platforms and 64-bit on 64-bit
platforms. Do people think that would be a good idea? These kinds of
questions do come up.
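[As a present-day note: this is the route NumPy, the successor to scipy
core, eventually took -- integer arrays narrower than the default
platform integer are summed with the platform integer by default. A
sketch:]

```python
import numpy as np

xx = np.array([100, 100, 100], np.int8)

# The default accumulator for sums of small integer types is the
# default platform integer, so the result no longer wraps:
print(xx.sum())        # 300
print(xx.sum().dtype)  # typically int64 on a 64-bit platform
```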
Or, this could simply be the default when calling the .sum method (which
is add.reduce under the covers), while the reduce method itself keeps
the array's integer type as its default.
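The two defaults under discussion can be illustrated side by side
(again a present-day NumPy sketch; the dtype keyword plays the role of
rtype above):

```python
import numpy as np

xx = np.array([100, 100, 100], np.int8)

# Reducing with the array's own type as accumulator wraps modulo 256 ...
print(np.add.reduce(xx, dtype=np.int8))  # 44

# ... while an upcast accumulator gives the expected sum.
print(xx.sum(dtype=np.int64))            # 300
```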
Obviously, it's going to give "unexpected" results to somebody.
Automatic upcasting can have its downsides. But, perhaps in this case
(integer reductions), it is better to do the upcasting. What do people
think?
-Travis