[Numpy-discussion] Floating Point Difference between numpy and numarray
Tue Sep 9 01:53:30 CDT 2008
Forgot to answer last week; I was under a fair bit of time pressure, but
thanks for your input. I sorted it all out in the end, just in time. The
main issue was the change from numarray to numpy: where a typecode of 'f'
was used, numarray performed the calculation in double precision, whereas
numpy performs it in single precision. Hence the differences popped up when
migrating the code, and they were fairly large given the size and number of
mean calculations we perform.
I now have a distinct dislike of float values (it'll probably wear off over
time): how can the sum of 100,000 numbers be anything other than the sum of
those numbers? I know the reasoning, as highlighted by the couple of other
e-mails we have had, but I feel the default should lean towards accuracy
rather than speed. 2.0+2.0=4.0, and 2.0+2.0+...=200,000.0, not whatever
array.sum() happens to return.
Just an opinion, though.
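The migration issue above can be sketched as follows (the array `a` is
illustrative; passing dtype=np.float64 to sum/mean is the per-call
workaround, assuming a reasonably recent numpy):

```python
import numpy as np

# 100,000 copies of 0.1, stored in single precision.
# (0.1 is not exactly representable in binary floating point,
# so the stored float32 value is approximately 0.100000001490116.)
a = np.full(100_000, 0.1, dtype=np.float32)

# Default behaviour: the accumulator matches the array's dtype,
# so a float32 array is summed in single precision -- the numpy
# behaviour that differed from numarray's 'f' typecode.
s32 = a.sum()

# Forcing a float64 accumulator restores double-precision
# arithmetic without changing the array's storage type.
s64 = a.sum(dtype=np.float64)
m64 = a.mean(dtype=np.float64)

print(s32.dtype, s64.dtype)  # float32 float64
```

The same dtype argument works for mean, std, etc., which is why the only
blanket fix is either float64 storage throughout or a dtype keyword on
each reduction.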
2008/9/4 David Cournapeau <email@example.com>
> Hanni Ali wrote:
> > Hi,
> > Is there a way I can set numpy to use dtype='float64' throughout all
> > my code or force it to use the biggest datatype without adding the
> > dtype='float64' to every call to mean, stdev etc..
> Since it is the default type for the functions you mention, you can just
> remove any call to dtype; but removing all the calls to dtype='float64'
> is not much less work than replacing dtype='float32' :) More seriously,
> depending on your program, it may not be 100% doable.