[Numpy-discussion] float128 in fact float80

Nathaniel Smith njs@pobox....
Sun Oct 16 19:13:48 CDT 2011


On Sun, Oct 16, 2011 at 4:29 PM, Charles R Harris
<charlesr.harris@gmail.com> wrote:
> On Sun, Oct 16, 2011 at 4:16 PM, Nathaniel Smith <njs@pobox.com> wrote:
>> I understand the argument that you don't want to call it "float80"
>> because not all machines support a float80 type. But I don't
>> understand why we would solve that problem by making up two *more*
>> names (float96, float128) that describe types that *no* machines
>> actually support... this is incredibly confusing.
>
> Well, float128 and float96 aren't interchangeable across architectures
> because of the different alignments, C long double isn't portable either,
> and float80 doesn't seem to be available anywhere. What concerns me is the
> difference between extended and quad precision, both of which can occupy 128
> bits. I've complained about that for several years now, but as to extended
> precision, just don't use it. It will never be portable.
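
For anyone trying to see what they actually have, the distinction is easy
to make visible. This is just a sketch -- the exact numbers depend on the
platform and compiler, but on x86 it comes out like this:

  import numpy as np

  # mantissa bits of the C long double: 63 on x86 (80-bit extended
  # precision); a true IEEE quad would report 112
  print(np.finfo(np.longdouble).nmant)

  # storage per element in bytes, *including* alignment padding:
  # 16 on x86-64, 12 on 32-bit x86 -- hence "float128" and "float96"
  print(np.dtype(np.longdouble).itemsize)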

I think part of the confusion here is about what 'N' means when a type
is named 'float<N>': does it refer to the size of the data itself, or to
the storage the data occupies after alignment padding? I have a strong
intuition that it should be the former, and I assume Matthew does too.
If we have a data structure like
  struct { uint8_t flags; void * data; }
then 'flags' will actually get 32 or 64 bits of space, thanks to
alignment padding... but we would never, ever refer to it as a uint32 or
a uint64! I know these extended-precision types are even weirder, because
the compiler inserts that padding unconditionally (sizeof(long double)
itself is 12 or 16 on x86), but the intuition still stands, and obviously
some proportion of the userbase will share it.
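
The same thing is easy to demonstrate with a structured dtype -- a rough
numpy analogue of the struct above (numbers below are for a 64-bit
machine):

  import numpy as np

  # request C-style alignment, like the compiler would do for the struct
  dt = np.dtype([('flags', np.uint8), ('data', np.intp)], align=True)

  print(dt.itemsize)    # 16: 'flags' is padded out to pointer alignment
  print(dt['flags'])    # uint8 -- the padding doesn't rename the field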

If our API makes smart people like Matthew spend a week going around
in circles, then our API is dangerously broken!

The solution is just to call it 'longdouble', which clearly
communicates 'this does some quirky thing that depends on your C
compiler and architecture'.
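
And that spelling already works -- np.longdouble (or the string
'longdouble') resolves to whatever the platform's C long double is:

  import numpy as np

  # prints the platform-specific name, e.g. float128 on x86-64 Linux,
  # float96 on 32-bit x86; on platforms where long double is just
  # double you get a plain 64-bit type
  print(np.dtype('longdouble'))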

-- Nathaniel

