[Numpy-discussion] np.longlong casts to int

Francesc Alted francesc@continuum...
Thu Feb 23 06:23:29 CST 2012


On Feb 23, 2012, at 6:06 AM, Francesc Alted wrote:
> On Feb 23, 2012, at 5:43 AM, Nathaniel Smith wrote:
> 
>> On Thu, Feb 23, 2012 at 11:40 AM, Francesc Alted <francesc@continuum.io> wrote:
>>> Exactly.  I'd update this to read:
>>> 
>>> float96    96 bits.  Only available on 32-bit (i386) platforms.
>>> float128  128 bits.  Only available on 64-bit (AMD64) platforms.
>> 
>> Except float96 is actually 80 bits. (Usually?) Plus some padding…
> 
> Good point.  The thing is that they actually use 96 bits for storage purposes (this is due to alignment requirements).
> 
> Another quirk related to this is that MSVC automatically maps long double to 64-bit doubles:
> 
> http://msdn.microsoft.com/en-us/library/9cx8xs15.aspx
> 
> Not sure why they did that (portability issues?).
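
As a quick sanity check (just a sketch, assuming a NumPy built with MSVC on Windows), one can ask np.finfo what long double actually provides on a given platform; if the mapping above holds, the mantissa should come out as 52 bits, i.e. plain double precision:

import numpy as np

# On an MSVC-built NumPy, long double is expected to be a plain 64-bit double,
# so np.longdouble should report a 52-bit mantissa and float64's eps.
info = np.finfo(np.longdouble)
print(np.dtype(np.longdouble).itemsize * 8)  # storage width in bits
print(info.nmant)                            # mantissa bits (52 if it is really a double)
print(info.eps)                              # ~2.220446049250313e-16 in that case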

Hmm, yet another quirk (this time in NumPy itself).  On 32-bit platforms:

In [16]: np.longdouble
Out[16]: numpy.float96

In [17]: np.finfo(np.longdouble).eps
Out[17]: 1.084202172485504434e-19

while on 64-bit ones:

In [8]: np.longdouble
Out[8]: numpy.float128

In [9]: np.finfo(np.longdouble).eps
Out[9]: 1.084202172485504434e-19

i.e. NumPy says that the eps (machine epsilon) is the same on both platforms, even though one uses 80-bit precision and the other 128-bit precision.  For the 80-bit format, the eps should be:

In [5]: 1 / 2**63.
Out[5]: 1.0842021724855044e-19

[http://en.wikipedia.org/wiki/Extended_precision]

which is correctly stated by NumPy, while for 128-bit (quad precision), eps should be:

In [6]: 1 / 2**113.
Out[6]: 9.62964972193618e-35

[http://en.wikipedia.org/wiki/Quadruple-precision_floating-point_format]
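
For reference, a small sketch (assuming nothing beyond np.finfo) that compares the eps NumPy reports for longdouble against the theoretical values for 80-bit extended and 128-bit quad precision:

import numpy as np

# Theoretical machine epsilons: extended precision has a 64-bit significand
# (63 bits stored), quad precision a 113-bit significand (112 bits stored).
eps_extended = 2.0 ** -63   # ~1.08e-19
eps_quad     = 2.0 ** -113  # ~9.63e-35

info = np.finfo(np.longdouble)
print("longdouble type:", np.longdouble, "mantissa bits:", info.nmant)
print("eps reported by NumPy:        ", info.eps)
print("expected for 80-bit extended: ", eps_extended)
print("expected for 128-bit quad:    ", eps_quad)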

If nobody objects, I'll file a bug about this.

-- Francesc Alted




