[Numpy-discussion] Re: Why floor and ceil change the type of the array?

Sasha ndarray at mac.com
Thu Feb 23 00:42:05 CST 2006


On 2/23/06, Travis Oliphant <oliphant.travis at ieee.org> wrote:
> How do you propose to determine if the output fits into the data-type?
> Are you proposing to have different output rules for different
> functions.  Sheer madness...  The rules now are (relatively) simple and
> easy to program to.
>
I did not propose that.  I just mentioned the output to argue that the
rule of using the minimal floating-point type that can represent the
input is arbitrary and no better than casting all integers to doubles.
"Sheer madness...", however, is too strong a characterization.  Note
that Python (so far) changes the type of the result depending on its
value in some cases:

>>> type(2**30)
<type 'int'>
>>> type(2**32)
<type 'long'>

This is probably unacceptable to numpy for performance reasons, but it
is not madness.

Try explaining the following to someone who is used to Python arithmetic:

>>> 2*array(2**62)+array(2*2**62)
0L
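(The transcript above is from 2006-era NumPy, where the 0-d integer
array defaulted to a 64-bit type.  A sketch of the same surprise on
current NumPy, with the int64 dtype made explicit so the wrap-around
is reproducible:)

```python
import numpy as np

# Python ints are arbitrary precision: no overflow, exact result.
py = 2 * 2**62 + 2 * 2**62
assert py == 2**64

# NumPy int64 arithmetic is modulo 2**64: the same expression wraps.
a = np.array([2**62], dtype=np.int64)
wrapped = 2 * a + 2 * a   # 2**63 + 2**63 == 2**64, which wraps to 0
print(int(wrapped[0]))    # 0
```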

> Hardly a good example.  Are you also concerned about the following?
>
>  >>> exp(1000)
> inf
>
>  >>> exp(array(1000,'g'))
> 1.97007111401704699387e+434
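(For readers unfamiliar with the typecodes: 'g' is long double, so the
result above depends on the floating type, not just the value.  A
hedged sketch on current NumPy; note that whether exp(1000) is finite
in longdouble is platform-dependent, since on x86 longdouble is 80-bit
extended precision but on some platforms it is just float64:)

```python
import numpy as np

x = 1000

# In double precision exp(1000) overflows to inf
# (float64's max exponent is about 709).
with np.errstate(over='ignore'):
    d = np.exp(np.float64(x))
assert np.isinf(d)

# 'g' is longdouble: on x86 its 80-bit format holds exp(1000),
# but where longdouble == float64 this overflows to inf as well.
with np.errstate(over='ignore'):
    g = np.exp(np.longdouble(x))
print(g)  # ~1.97e+434 on x86; inf where longdouble is plain double
```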

No, but I think that is because I am conditioned by C.  To me, exp() is
a double-valued function that happens to work for ints with the help of
an implicit cast.  You may object that this is so because C does not
allow function overloading, but C++ does overload exp, so that
exp((float)1) is float and exp((long double)1) is long double, while
exp((short)1), exp((char)1) and exp((long long)1) are all double.

Both numpy and C++ made an arbitrary design choice.  I find the C++
choice simpler and more natural, but I can live with the numpy choice
once I've learned what it is.
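(For what it's worth, the part of the NumPy rule I am certain of can be
spelled out like this on current NumPy; the floating width of the input
is preserved, while wide integers go through float64, much as in C++.
How the narrower integer dtypes map to floats is exactly the "minimal
representable float" rule under discussion, so I leave those out:)

```python
import numpy as np

# exp preserves the floating-point width of a floating input...
assert np.exp(np.array([1.0], dtype=np.float32)).dtype == np.float32
assert np.exp(np.array([1.0], dtype=np.longdouble)).dtype == np.longdouble

# ...while a 64-bit integer input is cast to double first,
# like C++'s exp((long long)1).
assert np.exp(np.array([1], dtype=np.int64)).dtype == np.float64
```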




More information about the Numpy-discussion mailing list