[Numpy-discussion] Numpy float precision vs Python list float issue

David Cournapeau david@ar.media.kyoto-u.ac...
Mon Apr 20 08:47:09 CDT 2009

Hi Ruben,

Ruben Salvador wrote:
> Hi everybody!
> First of all I should say I am a newbie with Python/Scipy. Have been
> searching a little bit (google and lists) and haven't found a helpful
> answer...so I'm posting.
> I'm using Scipy/Numpy to do image wavelet transforms via the lifting
> scheme. I grabbed some code implementing the transforms with Python
> lists (float type). This code works perfectly, but slow for my needs
> (I'll be doing some genetic algorithms to evolve coefficients of the
> filters and the forward and inverse transform will be done many
> times). It's just implemented by looping in the lists and making
> computations this way. Reconstructed image after doing a forward and
> inverse transform is perfect, this is, original and reconstructed
> images difference is 0.
> With Scipy/Numpy float arrays slicing this code is much faster as you
> know. But the reconstructed image is not perfect. The image difference
> maximum and minimum values returns:
> maximum difference => 3.5527136788e-15
> minimum difference => -3.5527136788e-15
> Is this behavior expected?

Yes, it is expected; it is inherent to how floating point works. By
default, floating point arrays use double precision, which carries
roughly 16 significant decimal digits. In that setting, for example,
1 + 1e-17 == 1.
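A quick way to see this rounding in action (the specific values below are illustrative, not taken from your transform):

```python
import numpy as np

# Double precision carries about 16 significant decimal digits,
# so adding 1e-17 to 1.0 is lost entirely to rounding.
print(1.0 + 1e-17 == 1.0)        # True

# Machine epsilon: the relative spacing between adjacent doubles near 1.0.
print(np.finfo(np.float64).eps)  # ~2.22e-16

# Round-trip arithmetic need not return exactly to the starting value,
# which is where differences like your 3.55e-15 come from:
print(0.1 * 3 == 0.3)            # False
```

Note that your observed maximum difference, 3.5527136788e-15, is exactly 16 times the double-precision epsilon, i.e. a handful of ulps of accumulated round-off.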

> Because it seems sooo weird to me.

It shouldn't :) The usual answer is that you should read this:


Floating point is a useful abstraction, but has some strange properties,
which do not matter much in most cases, but can catch you off guard.

> If expected, anyway to override it?

The only way to mitigate it is to use higher precision (depending on
your OS/CPU combination, the long double type can help), or to use a
type with arbitrary precision. In the latter case, however, computations
will be much, much slower (since they are done in software), and only a
fairly limited subset of numpy features is available (enough to
implement basic wavelets with Haar bases, though).

Using extended precision:

import numpy as np
a = np.array([1, 2, 3], dtype=np.longdouble) # this gives you more precision
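How much extra precision longdouble buys is platform-dependent: on x86 Linux it is typically the 80-bit extended type (~19 significant digits), while on some platforms it is just an alias for float64. You can check with np.finfo:

```python
import numpy as np

# Compare the machine epsilon of double vs. long double on this platform.
print(np.finfo(np.float64).eps)     # ~2.2e-16
print(np.finfo(np.longdouble).eps)  # e.g. ~1.08e-19 on x86; may equal float64's

# The storage size is also platform-dependent (16, 12, or 8 bytes).
a = np.array([1, 2, 3], dtype=np.longdouble)
print(a.dtype.itemsize)
```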

You can also use the decimal module for (almost) arbitrary precision:

from decimal import Decimal
import numpy as np
# Build the elements as Decimal instances; dtype=object stores them as-is
a = np.array([Decimal(1), Decimal(2), Decimal(3)], dtype=object)

But again, please keep in mind that many operations normally available
cannot be performed on arrays of arbitrary objects such as Decimal
instances. Generally, numerical computations are done with limited
precision, and that is rarely a problem if enough care is taken. If you
need theoretically exact results, you need a symbolic computation
package, which numpy alone is not.
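To make the limitation concrete, here is a small sketch: elementwise arithmetic on an object array is delegated to Decimal's own operators and works, but a ufunc such as np.sin, which Decimal does not implement, fails:

```python
from decimal import Decimal
import numpy as np

a = np.array([Decimal("1.5"), Decimal("2.5")], dtype=object)

# Arithmetic works: numpy calls Decimal.__add__ element by element.
print(a + a)  # [Decimal('3.0') Decimal('5.0')]

# But ufuncs that need a method the object lacks raise TypeError:
try:
    np.sin(a)
except TypeError as exc:
    print("unsupported:", exc)
```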



More information about the Numpy-discussion mailing list