[Numpy-discussion] Random int64 and float64 numbers

David Goldsmith d.l.goldsmith@gmail....
Sun Nov 8 04:07:55 CST 2009


Thanks, Sturla, you've confirmed what I thought was the case (and explained
more thoroughly the answer others gave more succinctly, but also more
opaquely). :-)

DG

On Sat, Nov 7, 2009 at 11:40 AM, Sturla Molden <sturla@molden.no> wrote:

> David Cournapeau wrote:
> > On Fri, Nov 6, 2009 at 6:54 AM, David Goldsmith <d.l.goldsmith@gmail.com> wrote:
> >
> >> Interesting thread, which leaves me wondering two things: is it
> >> documented somewhere (e.g., at the IEEE site) precisely how many
> >> *decimal* mantissae are representable using the 64-bit IEEE standard
> >> for float representation (if that makes sense); and are such decimal
> >> mantissae uniformly distributed?
> >
> > They are definitely not uniformly distributed: that's why two numbers
> > are close around 1 when they have only a few EPS difference, but
> > around 1e100, you have to add quite a few EPS to even get a different
> > number at all.
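This spacing effect is easy to check directly in Python (math.ulp requires Python 3.9+):

```python
import math

# Distance to the next representable double, near 1.0 and near 1e100
print(math.ulp(1.0))    # 2**-52, the machine epsilon for doubles
print(math.ulp(1e100))  # roughly 1.94e84

# Adding an epsilon-sized amount to 1e100 does not change it at all
assert 1e100 + math.ulp(1.0) == 1e100
```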
> >
> > That may be my audio processing background, but I like to think about
> > float as numbers which have the same relative precision at any "level"
> > - a kind of dB scale. If you want numbers with a fixed number of
> > decimals, you need a fixed point representation.
> >
>
> David Goldsmith was asking about the mantissa. For a double, the
> significand is effectively a 53-bit integer: the stored fraction is 52
> bits (bits 0-51), the exponent is 11 bits (bits 52-62), and the sign is
> one bit (bit 63). For normalized numbers an implicit leading 1 extends
> the 52-bit fraction to a 53-bit significand, so its integer magnitude
> runs from 2**52 to 2**53 - 1; counting the sign, that is 2**53 distinct
> values. These values are uniformly spaced, like any range of integers.
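The bit layout can be inspected directly. Here is a small sketch using Python's struct module to pull the three fields out of a double:

```python
import struct

def double_fields(x):
    """Split a double into its (sign, exponent, fraction) bit fields."""
    (bits,) = struct.unpack('<Q', struct.pack('<d', x))
    sign = bits >> 63                  # bit 63
    exponent = (bits >> 52) & 0x7FF    # bits 52-62, biased by 1023
    fraction = bits & ((1 << 52) - 1)  # bits 0-51
    return sign, exponent, fraction

# 1.0 is stored with a biased exponent of 1023 and a zero fraction
print(double_fields(1.0))   # (0, 1023, 0)
print(double_fields(-2.0))  # (1, 1024, 0)
```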
>
> But the value of a normalized floating point number is
>
>   value = (-1)**signbit * 2**(exponent - bias) * (1 + fraction)
>
> with bias = 1023 for a double and fraction in [0, 1). Thus, floating
> point numbers are not uniformly distributed, but the mantissa is.
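As a sanity check, the formula round-trips: extracting the fields of a normalized double and plugging them back in reproduces the number exactly. A sketch (the helper names here are illustrative, not from any library):

```python
import struct

def double_fields(x):
    # Reinterpret the 64 bits of a double as sign, biased exponent, fraction
    (bits,) = struct.unpack('<Q', struct.pack('<d', x))
    return bits >> 63, (bits >> 52) & 0x7FF, bits & ((1 << 52) - 1)

def rebuild(sign, exponent, fraction):
    # value = (-1)**signbit * 2**(exponent - bias) * (1 + fraction)
    # with bias = 1023 and the stored fraction scaled by 2**-52
    return (-1.0) ** sign * 2.0 ** (exponent - 1023) * (1.0 + fraction / 2.0 ** 52)

x = 3.141592653589793
assert rebuild(*double_fields(x)) == x
```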
>
> For the numerically uninitiated this might come as a surprise. But in
> numerical mathematics, the resolution is in the number of "significant
> digits", not in "the number of decimals". As doubles, 101 and .00201
> carry the same relative precision.
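In other words, the gap to the next representable double, measured relative to the number itself, is the same (to within a factor of two) at every magnitude:

```python
import math

eps = 2.0 ** -52  # machine epsilon for doubles
for x in (101.0, 0.00201):
    rel = math.ulp(x) / x
    # relative spacing always lies between eps/2 and eps
    assert eps / 2 < rel <= eps
```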
>
> A decimal, on the other hand, can be thought of as a floating point
> number using base-10 instead of base-2 for the exponent:
>
>   value = (-1)**signbit * 10**(exponent - bias) * (1 + fraction)
>
> Decimals and floats are not fundamentally different. There are numbers
> exactly representable as a decimal that cannot be exactly represented
> as a float. But numerical computations do not become more precise with
> a decimal than with a float.
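Python's decimal module illustrates both halves of this point: a decimal can represent 0.1 exactly where a binary float cannot, yet decimal arithmetic still rounds as soon as a result needs more digits than the context carries:

```python
from decimal import Decimal

# 0.1 + 0.2 rounds in binary floating point...
assert 0.1 + 0.2 != 0.3

# ...but is exact in decimal, because 0.1 is representable in base 10
assert Decimal('0.1') + Decimal('0.2') == Decimal('0.3')

# Decimal is not magically more precise: 1/3 still rounds
assert Decimal(1) / Decimal(3) * 3 != Decimal(1)
```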
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>