[SciPy-Dev] Accuracy of single-precision FFT
Charles R Harris
Thu Jun 24 12:14:09 CDT 2010
On Thu, Jun 24, 2010 at 11:02 AM, Charles R Harris wrote:
> On Thu, Jun 24, 2010 at 10:22 AM, Pauli Virtanen <firstname.lastname@example.org> wrote:
>> Thu, 24 Jun 2010 22:31:52 +0800, Ralf Gommers wrote:
>> [clip: significant errors in float32 fft in Scipy]
>> >>    n    error
>> >>    1    0.0
>> >>   17    4.76837e-07
>> >>   37    2.98023e-06
>> >>   97    0.000104427
>> >>  313    0.000443935
>> >>  701    0.00112867
>> >> 1447    0.00620008
>> >> 2011    0.0138307
>> >> 3469    0.16958
>> >> So even decimal=4 would fail for 97 already. For larger primes the FFT
>> >> should be slower but not less accurate, right?
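[Editorial note: errors of the kind tabulated above can be reproduced with a short script. This is a sketch using the modern `scipy.fft` interface rather than the `scipy.fftpack` of the 0.8 era; the sizes follow the table, and the error measure (maximum relative error against a double-precision reference) is an assumption about how the figures were obtained.]

```python
import numpy as np
from scipy import fft

rng = np.random.default_rng(0)
for n in [1, 17, 37, 97, 313, 701]:
    x64 = rng.standard_normal(n)   # double-precision reference input
    x32 = x64.astype(np.float32)   # the same data in single precision
    ref = fft.fft(x64)             # transform computed in double precision
    got = fft.fft(x32)             # transform computed in single precision
    # maximum relative error of the single-precision transform
    err = np.max(np.abs(got - ref)) / np.max(np.abs(ref))
    print(n, err)
```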
>> > Any opinion on this? Is it easily fixable? This is the last thing
>> > holding up 0.8.0, I think. Can we mark it knownfail for this release,
>> > or does anyone think it's important enough to delay the release for?
>> IIRC, single precision (float32) FFT is a new feature in Scipy 0.8, and
>> was not present in earlier releases. I think Numpy and previous versions
>> of Scipy were doing the FFT all the time in double precision (check
>> There are now two possibilities:
>> 1) the single precision FFT in Scipy works incorrectly,
>> 2) the single precision FFT in Scipy works correctly, but the precision
>> unavoidably sucks for large arrays.
>> I guess (2) is more likely here.
> I think so too, but I don't think the source of the error is the array
> multiplications; rather, it is the generation of the array entries (the
> twiddle factors), which is likely done in single precision using a
> recursion. I haven't checked that, though.
Which is to say, I suspect it is fixable. There are various methods of
generating the needed cosines and sines with improved error bounds. The
easiest would be simply computing them in double precision and casting the
results to single, but there are other tricks that can improve the accuracy
of the double-precision results as well.
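[Editorial note: a minimal sketch of the two strategies contrasted above; the function names are illustrative, not SciPy's. The first generates the twiddle factors exp(-2*pi*i*k/n) by repeated single-precision multiplication, so rounding errors accumulate with k; the second computes each factor in double precision and casts down once.]

```python
import numpy as np

def twiddles_recursive(n, dtype):
    # Generate exp(-2j*pi*k/n) by repeated multiplication with a single
    # rounded step; each product adds rounding error, so error grows with k.
    w = np.empty(n, dtype=dtype)
    step = np.exp(np.array(-2j * np.pi / n, dtype=dtype))
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * step
    return w

def twiddles_direct(n):
    # Compute every factor in double precision, then cast down once:
    # the only single-precision error is the final rounding.
    k = np.arange(n)
    return np.exp(-2j * np.pi * k / n).astype(np.complex64)

n = 3469  # the worst case from the table above
exact = np.exp(-2j * np.pi * np.arange(n) / n)
rec_err = np.max(np.abs(twiddles_recursive(n, np.complex64) - exact))
dir_err = np.max(np.abs(twiddles_direct(n) - exact))
print("single-precision recurrence, max error:", rec_err)
print("double precision then cast, max error:", dir_err)
```

The cast-down version stays within a rounding unit of single precision regardless of n, while the recurrence's error grows with the transform length, which is consistent with the table above getting worse for large primes.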