[Numpy-discussion] [Numpy] quadruple precision

Paweł Biernat pwl_b@wp...
Fri Mar 2 04:47:22 CST 2012

Charles R Harris <charlesr.harris <at> gmail.com> writes:

> The quad precision library has been there for a while, and quad
> precision is also supported by the Intel compiler. I don't know
> about MSVC. Intel has been working on adding quad precision to their
> hardware for several years and there is an IEEE spec for it, so some
> day it will be here, but it isn't here yet. It's a bit sad, I could
> use quad precision in FORTRAN on a VAX 25 years ago. Mind, I only
> needed it once ;) I suppose lack of pressing need accounts for the
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion <at> scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion

Waiting for hardware support can last forever, and __float128 is
already here. Despite being implemented in software, it is still
reasonably fast for those who need it. The slow-down relative to
double depends on the use case and the optimization level, roughly
from x2 (with SSE) to x10 (without optimization), while you gain
roughly twice as many significant digits compared to double; see for
example http://locklessinc.com/articles/classifying_floats/. This is
still faster than mpfr, for example. And gcc-4.6 already supports
__float128 on a number of platforms: i386, x86_64, ia64 and HP-UX.
fftw now supports binary128 as well:
http://www.fftw.org/release-notes.html (although fftw may not be the
most representative numerical software, this suggests that __float128
is unlikely to be ignored by others, even without hardware support).
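To make the "twice as many significant digits" point concrete, here is a small sketch (not from the original post) that compares double and NumPy's extended-precision type on a value where double runs out of digits. Note that np.longdouble is whatever the platform's C long double is, not necessarily binary128, so the extra precision shown varies by platform.

```python
import numpy as np

# 1 + 2**-60 is representable in 80-bit extended (and binary128) but
# rounds to exactly 1.0 in 64-bit double, whose mantissa has 52 bits.
eps_tiny = np.longdouble(2.0) ** -60

as_double = np.float64(1.0) + np.float64(eps_tiny)
as_long = np.longdouble(1.0) + eps_tiny

# double loses the increment entirely...
print(as_double - 1.0)
# ...while the wider type keeps it (on platforms where long double > double).
print(as_long - np.longdouble(1.0))

# np.finfo reports the mantissa width actually available on this machine.
print(np.finfo(np.float64).nmant, np.finfo(np.longdouble).nmant)
```

On x86-64 Linux this typically prints a zero difference for double and a nonzero one for longdouble, with mantissa widths 52 and 63.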

Portability is already broken for numpy.float128 anyway: as I
understand it, the type maps to the platform's long double, so it
behaves differently on different architectures. Adding a new type
(call it, say, quad128) that properly supports binary128 therefore
shouldn't be a drawback. Later on, when hardware support for
binary128 shows up, quad128 will already be there.
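The platform dependence described above can be checked directly. A minimal sketch: np.finfo exposes the mantissa width and byte size of np.longdouble, which is what numpy.float128 aliases on platforms that have it; the concrete numbers below are what one typically sees, not guarantees.

```python
import numpy as np

info = np.finfo(np.longdouble)

# On x86-64 Linux this is usually 80-bit x87 extended precision padded
# to 16 bytes (nmant == 63), not IEEE binary128 (which has nmant == 112).
# On MSVC builds long double is plain double (nmant == 52).
print("itemsize:", np.dtype(np.longdouble).itemsize)
print("mantissa bits:", info.nmant)
print("decimal digits:", info.precision)
```

Running this on two different platforms and comparing the output is enough to see why a dedicated, properly specified quad128 type would be more portable than the current float128.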

