[Numpy-discussion] supporting quad precision
Mon Jun 10 01:49:27 CDT 2013
On Sun, 2013-06-09 at 12:23 +0100, David Cournapeau wrote:
> On Sun, Jun 9, 2013 at 8:35 AM, Henry Gomersall <email@example.com> wrote:
> > On Sat, 2013-06-08 at 14:35 +0200, Anne Archibald wrote:
> >> Looking at the rational module, I think you're right: it really
> >> shouldn't be too hard to get quads working as a user type using
> >> __float128 type, which will provide hardware arithmetic in the
> >> unlikely case that the user has hardware quads. Alternatively,
> >> probably more work, one could use a package like qd to provide
> >> portable quad precision (and quad-double).
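A quick way to see which of these cases applies on a given machine is to inspect numpy's longdouble directly. This is just an illustrative probe, not anything from the rational module or qd:

```python
import numpy as np

# Probe what np.longdouble actually is on this platform. The names
# np.float96/np.float128 are platform-dependent aliases of this type.
info = np.finfo(np.longdouble)
print(np.longdouble)   # the platform's alias for long double
print(info.nmant)      # 112 for true IEEE quad, 63 for x87 80-bit extended
print(info.eps)        # machine epsilon of the type
```

On a machine with hardware quads, `nmant` would report 112; on common x86 hardware it reports the 63 explicit mantissa bits of the 80-bit extended format.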
> > In this vague area, and further to a question I asked a while ago on
> > StackOverflow (http://stackoverflow.com/q/9062562/709852), is there
> > a deep reason why on some platforms longdouble is float128 and on
> > others it's float96?
> Long double is not standardized (like single/double are), so it is CPU
> dependent. On Intel CPU, long double is generally translated into the
> extended precision 80 bits. On 32 bits, it is aligned to 12 bytes (the
> next multiple of 4 bytes), on 64 bits 16 bytes (the next multiple 8
> bytes). MS compilers are a notable exception, where long double ==
> double.
> So it depends on the CPU, the OS and the compiler. Using long double
> for anything else than compatibility (e.g. binary files) is often a
> mistake IMO, and highly unportable.
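The padding described above is visible from numpy: the "96" or "128" in the type name reflects padded storage, not mantissa width. A small sketch (not tied to any particular library):

```python
import numpy as np

# Storage size vs. actual precision: itemsize includes alignment padding,
# while nmant reports the real mantissa width of the underlying C type.
ld = np.finfo(np.longdouble)
dbl = np.finfo(np.float64)
print(np.dtype(np.longdouble).itemsize)  # 12 or 16 bytes on x86, 8 with MSVC
print(ld.nmant, dbl.nmant)               # e.g. 63 vs 52 with x87 extended
```

So float96 and float128 can be exactly the same 80-bit x87 format, differing only in how many padding bytes the ABI adds.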
Interesting. So long double consistently maps to the platform-specific
C long double type.
With my work on https://github.com/hgomersall/pyFFTW, which supports
long double as one of its data types, I've found numpy's longdouble to
be absolutely the right way to handle this. Certainly I've managed reasonable success
across the three main OSs with this approach.
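The portable pattern this relies on is simply to spell the type `np.longdouble` everywhere and never hard-code the platform aliases. A minimal sketch (not pyFFTW's actual API):

```python
import numpy as np

# Portable use of long double: refer to np.longdouble; the float96/float128
# aliases only exist on the platforms whose ABI produces that storage size.
a = np.arange(8, dtype=np.longdouble)
print(a.dtype.itemsize)   # varies by platform: 8, 12, or 16
print(a.sum())            # arithmetic works regardless of the alias name
```

Code written this way compiles down to whatever the platform's long double is, which is why it works across the three main OSs.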