[SciPy-user] fft numerical precision

Robert Kern robert.kern at gmail.com
Wed Feb 15 16:45:27 CST 2006

Ryan Krauss wrote:
> Sorry to bring this back up again (this thread has been dormant since
> 12/21/05), but I think this is the key road block between me and
> really understanding this problem (and I think it went over my head
> when it was originally posted).  If double precision numbers use 52
> bits for the mantissa, what does it mean to say that 2**-52 is the
> approximate fractional precision of double precision floats?  How does
> this affect optimization routines like fzero and fmin so that their
> accuracy seems limited to 2**-52 when the smallest possible floating
> point number should be more like 2**-1074.  Does this somehow relate
> to how floating point subtraction is done (as relates to my post from
> earlier today)?

I don't have time right now to go into a lecture on condition numbers and other
such stuff, but let me give you an example to chew on and a good reference:
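First, to pin down the distinction the question is circling: 2**-52 is the
*relative* spacing of doubles (the gap between 1.0 and the next representable
value), while 2**-1074 is the smallest positive (subnormal) double, which is a
statement about magnitude, not resolution. A tiny pure-Python illustration (not
part of the original session):

```python
# 2**-52 is machine epsilon: the spacing between 1.0 and the next double.
# Adding anything at or below half that spacing to 1.0 rounds back to 1.0.
print(1.0 + 2.0**-53 == 1.0)   # rounds back to 1.0
print(1.0 + 2.0**-52 == 1.0)   # one ulp at 1.0: the sum is distinguishable

# 2**-1074 is the smallest positive subnormal double. It is representable,
# but it says nothing about how finely a result near 1.0 can be resolved.
print(2.0**-1074 > 0.0)
```

So an optimizer like fzero or fmin cannot locate a root near x = 1 more finely
than about one part in 2**52, even though far smaller numbers exist.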

In [23]: a = rand(1024)

In [24]: ifft(fft(a))
array([ 0.27261246 -4.33680869e-17j,  0.08520686 -1.04083409e-17j,
        0.66654776 +6.60887690e-17j, ...,  0.05895311 +8.37004077e-17j,
        0.40454606 -2.34796237e-17j,  0.38051768 +8.63024929e-17j])

In [25]: a
array([ 0.27261246,  0.08520686,  0.66654776, ...,  0.05895311,
        0.40454606,  0.38051768])

In [26]: a *= 1e16

In [27]: ifft(fft(a))
array([  2.72612457e+15+0.21875j   ,   8.52068614e+14-0.359375j  ,
         6.66547756e+15-0.11579423j, ...,   5.89531079e+14+1.16015625j,
         4.04546057e+15+0.7886458j ,   3.80517684e+15+0.71875j   ])
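The point of the session above is that the absolute round-trip error grows with
the magnitude of the data, but the relative error stays near 2**-52 either way.
A sketch of the same experiment using NumPy's FFT directly (seeded RNG and
`np.fft` are my choices, not the original session's):

```python
import numpy as np

# Machine epsilon for doubles is 2**-52.
eps = np.finfo(np.float64).eps  # == 2.0**-52

rng = np.random.default_rng(0)
a = rng.random(1024)

# Round-tripping through the FFT perturbs each element slightly.
err_small = np.max(np.abs(np.fft.ifft(np.fft.fft(a)).real - a))

# Scale the data by 1e16: the absolute error grows proportionally...
b = a * 1e16
err_big = np.max(np.abs(np.fft.ifft(np.fft.fft(b)).real - b))

# ...but the error *relative to the data's magnitude* stays within a
# modest multiple of eps in both cases.
print(err_small / np.max(np.abs(a)))
print(err_big / np.max(np.abs(b)))
```

That constant relative error is exactly what "2**-52 is the approximate
fractional precision" means.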

The first volume of _Numerical Computation_ by Christoph W. Ueberhuber
(_Computer-Numerik 1_ in the original German) covers these topics quite well.

Robert Kern
robert.kern at gmail.com

"In the fields of hell where the grass grows high
 Are the graves of dreams allowed to die."
  -- Richard Harter
