[Numpy-discussion] problem with float64's str()
Thu Apr 10 20:06:39 CDT 2008
On Thu, Apr 10, 2008 at 7:57 PM, Charles R Harris wrote:
> On Thu, Apr 10, 2008 at 6:38 PM, Robert Kern <email@example.com> wrote:
> > On Thu, Apr 10, 2008 at 7:31 PM, Charles R Harris
> > <firstname.lastname@example.org> wrote:
> > > > That said, str(float_numpy_scalar) really should have the same rules
> > > > as str(some_python_float).
> > >
> > > For all different precisions?
> > No. I should have said str(float64_numpy_scalar). I am content to
> > leave the other types alone.
> > > And what should the rules be?
> > All Python does is use a lower decimal precision for __str__ than for __repr__.
> > > I note that
> > > numpy doesn't distinguish between repr and str, maybe we could specify
> > > different behavior for the two.
> > Yes, precisely.
> Well, I know where to do that and have a ticket for it. What I would also
> like to do is use float.h for setting the repr precision, but I am not sure
> I can count on its presence as it only became part of the spec in 1999. Then
> again, that's almost ten years ago. Anyway, python on my machine generates
> 12 significant digits. Is that common to everyone?
Here is the relevant portion of Objects/floatobject.c:
/* Precisions used by repr() and str(), respectively.
The repr() precision (17 significant decimal digits) is the minimal number
that is guaranteed to have enough precision so that if the number is read
back in the exact same binary value is recreated. This is true for IEEE
floating point by design, and also happens to work for all other modern
hardware.

The str() precision is chosen so that in most cases, the rounding noise
created by various operations is suppressed, while giving plenty of
precision for practical use. */

#define PREC_REPR 17
#define PREC_STR 12
svn blame tells me that those have been there unchanged since 1999.
You may want to steal the function format_float() that is defined in
that file, too.
"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco