[Numpy-discussion] Random int64 and float64 numbers

Anne Archibald peridot.faceted@gmail....
Thu Nov 5 22:07:41 CST 2009


2009/11/5 David Goldsmith <d.l.goldsmith@gmail.com>:
> On Thu, Nov 5, 2009 at 3:26 PM, David Warde-Farley <dwf@cs.toronto.edu>
> wrote:
>>
>> On 5-Nov-09, at 4:54 PM, David Goldsmith wrote:
>>
>> > Interesting thread, which leaves me wondering two things: is it
>> > documented somewhere (e.g., at the IEEE site) precisely how many
>> > *decimal* mantissae are representable using the 64-bit IEEE standard
>> > for float representation (if that makes sense);
>>
>> IEEE-754 says nothing about decimal representations aside from how to
>> round when converting to and from strings. You have to provide/accept
>> *at least* 9 decimal digits in the significand for single-precision
>> and 17 for double-precision (section 5.6). AFAIK implementations will
>> vary in how they handle cases where a binary significand would yield
>> more digits than that.
>
> I was actually more interested in the opposite situation, where the decimal
> representation (which is what a user would most likely provide) doesn't have
> a finite binary expansion: what happens then, something analogous to the
> decimal "rule of fives"?

If you interpret "0.1" as 1/10, then this is a very general
floating-point issue: how do you round off numbers you can't
represent exactly? The usual answer (leaving aside internal
extended-precision shenanigans) is to round to the nearest
representable value, with the rule that when you're exactly halfway
between two floating-point numbers you round to the one whose last
significand bit is even (round-half-to-even), rather than always up or
always down (the numerical analysis wizards tell us this is more
numerically stable).
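For example, a quick Python sketch (assuming the usual IEEE-754
binary64 doubles, which is what Python floats are on essentially every
platform) shows both what "0.1" actually becomes and how ties are
broken:

    from decimal import Decimal

    # "0.1" has no finite binary expansion, so the parser stores the
    # nearest representable double instead.
    print(Decimal(0.1))   # 0.1000000000000000055511151231257827021181583404541015625
    print((0.1).hex())    # 0x1.999999999999ap-4
    print(0.1 + 0.2 == 0.3)   # False -- both operands were already rounded

    # 17 significant decimal digits are enough to round-trip any double.
    s = format(1/3, ".17g")
    print(float(s) == 1/3)    # True

    # Ties go to the value whose last significand bit is even
    # (round-half-to-even): 2**53 + 1 lies exactly halfway between the
    # representable doubles 2**53 and 2**53 + 2 and rounds to the "even"
    # one; 2**53 + 3 rounds up to 2**53 + 4 for the same reason.
    print(float(2**53 + 1) == 2**53)       # True
    print(float(2**53 + 3) == 2**53 + 4)   # True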

Anne

> DG

