[Numpy-discussion] Re: Why floor and ceil change the type of the array?
Travis Oliphant
oliphant.travis at ieee.org
Wed Feb 22 23:23:06 CST 2006
Sasha wrote:
>On 2/23/06, Robert Kern <robert.kern at gmail.com> wrote:
>
>>>I cannot really think of any reason for the current numpy behaviour
>>>other than the consistency with transcendental functions.
>>>
>>It's simply the easiest thing to do with the ufunc machinery.
>>
>>AFAICT, the story goes like this: sin() has two implementations, one for
>>single-precision floats and one for doubles. The ufunc machinery sees the int16
>>and picks single-precision as the smallest type of the two that can fit an int16
>>without losing precision. Naturally, you probably want the function to operate
>>in higher precision, but that's not really information that the ufunc machinery
>>knows about.
>>
>
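To see the rule Robert describes, here is a minimal session ('h' is int16; output reprs as printed by a numpy of this vintage, on a typical build):
>>> from numpy import sin, array
>>> # int16 fits losslessly in float32, so the single-precision loop is chosen
>>> sin(array(1, 'h')).dtype
dtype('<f4')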
>According to your theory long (i8) integers should cast to long doubles, but
>
>dtype('<f8')
>
Robert is basically right, except that there is a special case for long
integers, because long doubles are not cross-platform. The relevant code
is PyArray_CanCastSafely; it is basically the coercion-rule table.
You will notice the special checks for long double there. They were added
after it was noticed that on 64-bit platforms long doubles were cropping
up an awful lot, and it was decided that because long doubles are not very
ubiquitous (many platforms don't even distinguish between long double and
double), we should special-case the 64-bit integer rule.
You can read about it in the archives if you want.
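A short sketch of the special case from the Python side (a sketch, assuming a build that exposes numpy.can_cast, the Python-level face of PyArray_CanCastSafely, as current releases do; results from a typical 64-bit build):
>>> from numpy import sin, array, can_cast
>>> sin(array(1, 'i8')).dtype    # 64-bit ints go to double, not long double
dtype('<f8')
>>> can_cast('i8', 'f8'), can_cast('i8', 'f4')
(True, False)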
>Given that python's floating point object is a double, I think it
>would be natural to cast integer arguments to double for all sizes.
>
Perhaps, but that is not what is done. I don't think it's that big a
deal, because to get "different-size" integers you have to ask for them,
and then you should know that the conversion to floating point is not
necessarily to double.
I think the only acceptable direction to pursue is to raise an error,
rather than upcast automatically, when a ufunc does not have a definition
for the given types. But this is old behavior inherited from Numeric, and
such a change now would rightly be considered gratuitous breakage.
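For reference, the automatic upcast that prompted the subject line looks like this on a typical build:
>>> from numpy import floor, array
>>> floor(array([1, 2], 'h')).dtype    # int16 input comes back as float32
dtype('<f4')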
>I would also think that in choosing the precision for a function it is
>also important that the output fits into the data type.
>
How do you propose to determine whether the output fits into the data
type? Are you proposing different output rules for different functions?
Sheer madness... The rules now are (relatively) simple and easy to
program to.
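Note that whether a result "fits" depends on the values, not just on the types; a small illustration (outputs from a typical build):
>>> from numpy import exp, array
>>> exp(array(100, 'h'))              # computed in float32: overflows
inf
>>> exp(array(100, 'h').astype('d'))  # the same value in double: finite
2.6881171418161356e+43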
>I find the following unfortunate:
>
>>>> exp(400)
>5.2214696897641443e+173
>
>>>> exp(array(400,'h'))
>inf
>
Hardly a good example. Are you also concerned about the following?
>>> exp(1000)
inf
>>> exp(array(1000,'g'))
1.97007111401704699387e+434