[Numpy-discussion] [Pytables-users] On Numexpr and uint64 type

Francesc Altet faltet@carabos....
Tue Mar 11 13:14:48 CDT 2008


On Tuesday 11 March 2008, Charles R Harris wrote:
> On Tue, Mar 11, 2008 at 4:00 AM, Francesc Altet <faltet@carabos.com> 
wrote:
> > On Tuesday 11 March 2008, Francesc Altet wrote:
> > > The thing that makes uint64 so special is that it is the largest
> > > integer (on current processors) that has a native representation
> > > (i.e. the processor can operate directly on it, so it can be
> > > processed very fast), and besides, there is no other (common
> > > native) type that can fully contain all its precision (float64
> > > has a mantissa of 53 bits, which is not enough to represent 64
> > > bits).  So the problem is basically what to do when operations
> > > with uint64 overflow (or underflow, as when, for example,
> > > dealing with negative values).
> >
> > Mmm, I'm thinking now that there exists a relatively common
> > floating-point type that has a mantissa of 64 bits (at minimum),
> > namely the extended-precision floating point [1] (in its 80-bit
> > incarnation, it is an IEEE standard).  On modern platforms, this is
> > available as a 'long double', and I was wondering whether it would
> > be useful for Numexpr purposes; it seems like it is.
>
> Extended precision is iffy. It doesn't work on all platforms and even
> when it does the implementation can be strange. I think the normal
> double is the only thing you can count on right now.

I see.  Oh well, this is kind of a mess, and after pondering it for a 
long while, we think that, in the end, a good approach would be to 
simply follow the NumPy convention.  It has its pros and cons, but it 
is a well-established convention, and most Numexpr/PyTables users 
should already be used to it.
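[For reference, the NumPy convention alluded to here can be sketched as
follows: mixing uint64 with a signed integer promotes to float64 (since
no native integer type covers both ranges), and pure uint64 arithmetic
wraps modulo 2**64:]

```python
import numpy as np

# No integer dtype can hold both the uint64 and int64 ranges, so NumPy
# promotes the result of mixing them to float64 (accepting the loss of
# bits beyond the 53-bit mantissa).
u = np.array([2**63], dtype=np.uint64)
i = np.array([-1], dtype=np.int64)
assert (u + i).dtype == np.float64

# Arithmetic that stays within uint64 simply wraps around on overflow.
big = np.array([2**64 - 1], dtype=np.uint64)
wrapped = big + np.uint64(1)
assert wrapped[0] == 0  # 2**64 - 1 + 1 wraps to 0
```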

Thanks for the advice,

-- 
>0,0<   Francesc Altet     http://www.carabos.com/
V   V   Cárabos Coop. V.   Enjoy Data
 "-"
