[Numpy-discussion] [Numpy] quadruple precision
David Cournapeau
cournape@gmail....
Wed Feb 29 13:39:56 CST 2012
On Wed, Feb 29, 2012 at 10:22 AM, Paweł Biernat <pwl_b@wp.pl> wrote:
> I am completely new to Numpy and I know only the basics of Python, to
> this point I was using Fortran 03/08 to write numerical code. However,
> I am starting a new large project of mine and I am looking forward to
> using Python to call some low level Fortran code responsible for most
> of the intensive number crunching. In this context I stumbled into
> f2py and it looks just like what I need, but before I start writing an
> app in mixture of Python and Fortran I have a question about numerical
> precision of variables used in numpy and f2py.
>
> Is there any way to interact with Fortran's real(16) (supported by gcc
> and Intel's ifort) data type from numpy? By real(16) I mean the
> binary128 type as in IEEE 754. (In C this data type is experimentally
> supported as __float128 (gcc) and _Quad (Intel's icc).) I have
> investigated the float128 data type, but it seems to work as binary64
> or binary80 depending on the architecture. If there is currently no
> way to interact with binary128, how hard would it be to patch the
> sources of numpy to add such data type? I am interested only in
> basic stuff, comparable in functionality to libmath.
>
> As said before, I have little knowledge of Python, Numpy and f2py; I
> am, however, interested in investing some time in learning them and
> implementing the mentioned features, but only if there is any hope of
> succeeding.
Numpy does not have proper support for quadruple-precision floating
point numbers, because very few implementations do (no common CPU
handles it in hardware, for example).
The float128 dtype is a bit confusingly named: the 128 refers to the
padded size in memory, not the "real" precision. It usually (but not
always) refers to the long double of the underlying C implementation,
which in turn depends on the OS, CPU and compiler.
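For example, one way to check what precision long double actually
provides on a given build (the exact numbers printed will vary by
platform; the values in the comments are typical for x86 Linux):

```python
import numpy as np

# np.longdouble maps to the C long double; on x86 Linux this is
# usually the 80-bit x87 extended format padded to 16 bytes, which
# is what the "128" in float128 actually measures.
info = np.finfo(np.longdouble)

print(np.dtype(np.longdouble).itemsize)  # storage size in bytes, padding included
print(info.precision)  # significant decimal digits (typically 18 for x87 extended)

# A true IEEE 754 binary128 type would give roughly 33 significant
# decimal digits, so this check distinguishes the two.
```
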
cheers,
David