[Numpy-discussion] Re: Why floor and ceil change the type of the array?

Sasha ndarray at mac.com
Wed Feb 22 22:47:04 CST 2006

On 2/23/06, Robert Kern <robert.kern at gmail.com> wrote:
> > I cannot really think of any reason for the current numpy behaviour
> > other than the consistency with transcendental functions.
> It's simply the easiest thing to do with the ufunc machinery.
That's what I had in mind: with the current rule, the same code can be
used for ceil as for sin.  However, easiest to implement is not
necessarily right.
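The behaviour under discussion (the thread subject: floor and ceil changing the array's type) can be seen directly; this is a sketch against a current NumPy, where the same loop-selection rule still applies:

```python
import numpy as np

# floor and ceil only have floating-point loops, so an integer array
# comes back as the smallest float type that safely holds its values.
a = np.array([1, 2, 3], dtype=np.int16)
print(np.floor(a).dtype)  # float32: smallest float loop that fits int16
print(np.ceil(np.array([1, 2, 3], dtype=np.int64)).dtype)  # float64
```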

> > Speaking of
> > which, can someone explain this:
> >
> >>>>sin(array(1,'h')).dtype
> >
> > dtype('<f4')
> >
> >>>>sin(array(1,'i')).dtype
> >
> > dtype('<f8')
> AFAICT, the story goes like this: sin() has two implementations, one for
> single-precision floats and one for doubles. The ufunc machinery sees the int16
> and picks single-precision as the smallest type of the two that can fit an int16
> without losing precision. Naturally, you probably want the function to operate
> in higher precision, but that's not really information that the ufunc machinery
> knows about.
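The loop-selection behaviour Robert describes can be observed directly (a sketch against a current NumPy, which keeps the same rule):

```python
import numpy as np

# sin has loops for float16/32/64/128; the ufunc machinery picks the
# smallest floating loop the integer input can safely cast to.
print(np.sin(np.array(1, np.int16)).dtype)  # float32 fits all int16 values
print(np.sin(np.array(1, np.int32)).dtype)  # float64 fits all int32 values
print(np.sin(np.array(1, np.int64)).dtype)  # float64, not long double
```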

According to your theory long (i8) integers should cast to long doubles, but
>>> sin(array(0,'i8')).dtype
dtype('<f8')

Given that Python's floating point object is a double, I think it
would be natural to cast integer arguments to double for all sizes.  I
would also think that in choosing the precision for a function it is
important that the output fits into the data type.  I find the
following unfortunate:

>>> exp(400)
5.2214696897641443e+173
>>> exp(array(400,'h'))
inf
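The unfortunate case above comes from the output overflowing the chosen type, not from the input: a sketch against a current NumPy, which shows the same effect:

```python
import numpy as np

# exp(400) overflows float32 (max ~3.4e38) but fits easily in float64,
# so the float32 loop chosen for an int16 input silently produces inf.
with np.errstate(over='ignore'):
    narrow = np.exp(np.array(400, np.int16))  # computed in float32
wide = np.exp(np.array(400, np.int64))        # computed in float64
print(narrow)  # inf
print(wide)    # a finite double around 5.22e+173
```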
