[Numpy-discussion] Re: Numpy-discussion digest, Vol 1 #10 - 10 msgs
gpk at bell-labs.com
Wed Feb 9 10:23:47 CST 2000
> From: "Andrew P. Mullhaupt" <amullhau at zen-pharaohs.com>
> > The upcasting rule thus ensures that
> > 1) No precision is lost accidentally.
> More or less.
> More precisely, it depends on what you call an accident. What happens when
> you add the IEEE single precision floating point value 1.0 to the 32-bit
> integer 2^30? A _lot_ of people don't expect to get the IEEE single
> precision floating point value 2.0^30, but that is what happens in some
> languages. Is that an "upcast"? Would the 32 bit integer 2^30 make more
> sense? Now what about the case where the 32 bit integer is signed and adding
> one to it will "wrap around" if the value remains an integer? Because these
> two examples might make double precision or a wider integer (if available)
> seem the correct answer, suppose it's only one element of a gigantic array?
> Let's now talk about complex values....
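For concreteness, the single-precision case Andrew describes can be sketched in
today's NumPy (shown purely as illustration; IEEE single precision has a 24-bit
significand, so 2^30 + 1 is not representable):

```python
import numpy as np

# The spacing between adjacent float32 values near 2**30 is 128,
# so adding 1.0 is silently rounded away.
x = np.float32(2**30)
y = x + np.float32(1.0)
print(y == x)   # True: the addition had no effect
```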
It's most important that the rules be simple, and (preferably) close
to common languages. I'd suggest C.
In my book, anyone who carelessly mixes floats and ints deserves
whatever punishment the language metes out.
I've done numeric work in languages where
casting was by request _only_ (e.g., Limbo, for Inferno),
and I found, to my surprise,
that automatic type casting is only a mild
convenience. Writing code with manual typecasting is surprisingly
easy. Since automatic typecasting only buys a small improvement
in ease of use, I'd want to be extremely sure that it doesn't cause trouble.
It's very easy to write some complicated set of rules that wastes more
time (in the form of unexpected, untraceable bugs) than it saves.
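A sketch of what the request-only style looks like in practice (variable names
are hypothetical; NumPy is used only for illustration):

```python
import numpy as np

counts = np.arange(10, dtype=np.int32)

# Request-only casting: the reader sees exactly where the
# representation changes, instead of relying on promotion rules.
weights = counts.astype(np.float64) / 7.0
print(weights.dtype)   # float64
```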
By the way, automatic downcasting has a hidden problem if python
is ever set to trap underflow errors. I had a program that would
randomly crash every 10th (or so) time I ran it with a large dataset
(1000x1000 linear algebra). After days of hair-pulling, I found that
the matrix was being converted from double to float at one step,
and about 1 in 10,000,000 of the entries was too small to represent
as a single precision number. That very rare event would underflow,
be trapped, and crash the program with a floating point exception.
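The failure mode can be sketched with NumPy's error-state handling (a
reconstruction for illustration, not the original program):

```python
import numpy as np

# 1e-60 is far below float32's smallest subnormal (~1.4e-45).
tiny = np.array([1e-30], dtype=np.float32)

# By default the product silently underflows to zero.
print(float((tiny * tiny)[0]))   # 0.0

# With underflow set to trap, the very same operation raises:
# the kind of rare, data-dependent crash described above.
try:
    with np.errstate(under='raise'):
        tiny * tiny
except FloatingPointError as e:
    print("trapped:", e)
```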