[SciPy-dev] FFTW performances in scipy and numpy
Thu Aug 2 08:41:00 CDT 2007
On 01/08/07, John Travers <firstname.lastname@example.org> wrote:
> On 01/08/07, David Cournapeau <email@example.com> wrote:
> > John Travers wrote:
> > > Another strategy worth trying is using FFTW_MEASURE once and then
> > > using FFTW_ESTIMATE for additional arrays. FFTW accumulates wisdom and
> > > so the initial call with MEASURE means that further estimated plans
> > > also benefit. In my simple tests it comes very close to measuring for
> > > each individual array.
> > >
> > Is this true for different array sizes?
> Yes, it is. In fact, if you use the fftw_flops function you find that
> the number of operations required is identical whether you plan with
> MEASURE, plan with ESTIMATE (with wisdom at the same size), or plan
> with ESTIMATE with wisdom at a different size. Of course, this is
> only on my machine (AMD Athlon 64 3200+). The only extra overhead is
> the planning for each FFT (and I haven't tried a comparison with
> unaligned data). This overhead appears to be about 10% for small
> (2**15-point) arrays.
I realized as I was cycling home last night that the fftw_flops
function doesn't quite do what I thought. From timing measurements
(using your cycles.h) I've found that the accumulated wisdom doesn't
seem to help significantly for different sizes. In fact, I've found
that planning with wisdom (but still using FFTW_ESTIMATE) increases
the total time, as the planning time increases. This is only
significant for small arrays, though, since the wisdom does help for
large arrays of the same size.
Anyway, back to work for me.