[SciPy-dev] modulo operation and new scipy core
oliphant at ee.byu.edu
Wed Oct 12 03:46:11 CDT 2005
Arnd Baecker wrote:
>one thing which I find irritating is the behaviour of
>the modulo operation for arrays:
>In : from scipy import *
>In : -0.4 % 1.0
>In : x=arange(-0.6,1.0,0.1)
>In : x%1.0
>array([ -6.00000000e-01,  -5.00000000e-01,  -4.00000000e-01,
>        -3.00000000e-01,  -2.00000000e-01,  -1.00000000e-01,
>         1.11022302e-16,   1.00000000e-01,   2.00000000e-01,
>         3.00000000e-01,   4.00000000e-01,   5.00000000e-01,
>         6.00000000e-01,   7.00000000e-01,   8.00000000e-01,
>         9.00000000e-01])
>Even worse (IMHO): take a scalar (I know it is still an array,
>but it does not look like one ;-) from the array
Actually it is a real scalar (it's just using the array math right now).
>In : x
>In : x % 1.0
>It seems that for arrays % behaves like `fmod` and not like `mod`.
Yes, that has been the behavior of Numeric. There is the mod function for
arrays. Should we switch that? It will cause a couple of
incompatibilities if people relied on the old (arguably) non-standard
behavior.
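For comparison, here is a quick sketch of the two behaviors side by side. It uses present-day numpy names (np.fmod and np.mod), which may be spelled differently in the scipy core build under discussion:

```python
import numpy as np

# fmod follows the C library: the result takes the sign of the
# first operand, so a negative input gives a negative remainder.
print(np.fmod(-0.4, 1.0))   # ≈ -0.4

# mod follows Python's % rule: the result takes the sign of the
# second operand, so the remainder lands in [0, 1.0).
print(np.mod(-0.4, 1.0))    # ≈ 0.6
```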
>I find this confusing as it is in contrast to the
>python 2.4 documentation:
>"5.6. Binary arithmetic operations"
> """The % (modulo) operator yields the remainder from the division
> of the first argument by the second. [...]
> The arguments may be floating point numbers, e.g.,
> 3.14%0.7 equals 0.34 (since 3.14 equals 4*0.7 + 0.34.)
> The modulo operator always yields a result with the same sign as
> its second operand (or zero); the absolute value of the result
> is strictly smaller than the absolute value of the second
> operand."""
>Would it be possible for the new scipy core that % behaves
>the same (standard python) way for scalars and for arrays?
I would do this. What do others think?
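For reference, plain Python floats already follow the documented rule, while math.fmod gives the C-style sign. A small sketch of the contrast (results hedged to a couple of decimal places because of floating-point rounding):

```python
import math

# Python's % yields a result with the sign of the second operand:
print(3.14 % 0.7)             # ≈ 0.34
print(-3.14 % 0.7)            # ≈ 0.36  (still non-negative)
print(3.14 % -0.7)            # ≈ -0.36 (negative divisor, negative result)

# math.fmod keeps the sign of the first operand instead:
print(math.fmod(-3.14, 0.7))  # ≈ -0.34
```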