[Numpy-discussion] Bug or surprising undocumented behaviour in irfft
Charles R Harris
Wed Aug 29 20:46:32 CDT 2007
On 8/29/07, Anne Archibald <email@example.com> wrote:
> On 29/08/2007, Charles R Harris <firstname.lastname@example.org> wrote:
> > What is going on is that the coefficient at the Nyquist frequency appears
> > once in the unextended array, but twice when the array is extended with
> > zeros, because of the Hermitean symmetry. That should probably be fixed in
> > the upsampling code.
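A minimal numpy sketch of the effect being described (the sizes, seed, and the m/n rescaling are illustrative choices, not from the thread): zero-padding an rfft spectrum makes the Nyquist coefficient count twice, so halving it before padding restores interpolation through the original samples.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 16
x = rng.standard_normal(n)

X = np.fft.rfft(x)
# naive upsampling: irfft pads the spectrum with zeros, and the old
# Nyquist bin (now an interior bin) gets mirrored, i.e. counted twice
naive = np.fft.irfft(X, m) * (m / n)
np.allclose(naive[::2], x)            # False: off by the Nyquist term

X[-1] *= 0.5                          # halve the Nyquist coefficient first
y = np.fft.irfft(X, m) * (m / n)      # m/n compensates irfft's 1/m factor
np.allclose(y[::2], x)                # True: passes through the samples
```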
> Is this also appropriate for the other FFTs? (inverse real, complex,
> hermitian, what have you) I have written a quick hack (attached) that
> should do just that rescaling, but I don't know that it's a good idea,
> as implemented. Really, for a complex IFFT it's extremely peculiar to
> add the padding where we do (between frequency -1 and frequency zero);
> it would make more sense to pad at the high frequencies (which are in
> the middle of the array). Forward FFTs, though, can reasonably be
> padded at the end, and it doesn't make much sense to rescale the last
> data point.
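The oddity of padding "between frequency -1 and frequency zero" can be made concrete with a hypothetical pure tone (my own example, not from the thread): `ifft(X, m)` pads zeros at the end of the array, so a negative-frequency bin of the short spectrum is reinterpreted as a high positive frequency of the long one.

```python
import numpy as np

n, m = 8, 16
t = np.arange(n)
x = np.exp(-2j * np.pi * t / n)       # a pure tone at frequency -1

X = np.fft.fft(x)                     # all energy lands in bin n-1
y = np.fft.ifft(X, m) * (m / n)       # ifft pads zeros at the END of X

# bin n-1 of the length-m spectrum means frequency +7, not -1, so the
# interpolated points oscillate wildly instead of continuing the tone
l = np.arange(m)
y_smooth = np.exp(-2j * np.pi * l / m)
np.allclose(y, y_smooth)              # False: wrong interpolation
np.allclose(y[::2], x)                # True: it still hits the old samples
```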
It all depends on the data and what you intend. Much of my experience is
with Michelson interferometers, and in that case the interferogram is
essentially an autocorrelation, so it is desirable to keep its center at
sample zero and let the left side wrap around, so ideally you fill in the
middle as you suggest. You can also pad at the end if you don't put the
center at zero, but then you need to phase shift the spectrum in a way that
corresponds to rotating the center to index zero and padding in the middle.
I expect you would want to do the same thing for complex transforms if they
are of real data, and do the Nyquist-divided-by-two thingy. If the high
frequencies in a complex transform are actually high frequencies and not
aliases of negative frequencies, then you will want to just append zeros.
That case also occurs: I have designed decimating complex filters that
produce output like that; they were like single sideband in the radio world.
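One way to "fill in the middle" for a complex transform, sketched in numpy (the half-and-half split of the ambiguous bin-n/2 coefficient is a standard trick, assumed here rather than taken from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 16
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)

X = np.fft.fft(x)
# open the spectrum in the middle, between the highest positive and the
# most negative frequencies, and split the bin-n/2 coefficient evenly
# between frequencies +n/2 and -n/2
Xp = np.concatenate([X[:n // 2],
                     [X[n // 2] / 2],
                     np.zeros(m - n - 1, complex),
                     [X[n // 2] / 2],
                     X[n // 2 + 1:]])

y = np.fft.ifft(Xp) * (m / n)         # rescale for ifft's 1/m factor
np.allclose(y[::2], x)                # True: passes through the samples
```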
> > The inverse irfft also scales by dividing by the new transform size instead
> > of the original size, so the result needs to be scaled up for the
> > interpolation to work. It is easy to go wrong with fft's when the
> > sampling/frequency scales aren't carried with the data. I always do that
> > myself so that the results are independent of the transform size
> > and expressed in some standard units.
> The scaling of the FFT is a pain everywhere. I always just try it a
> few times until I get the coefficients right. I sort of like FFTW's
> convention of never normalizing anything - it means the transforms
> have nice simple formulas, though unfortunately it also means that
> ifft(fft(A))!=A. In any case the normalization of numpy's FFTs is not
> something that can reasonably be changed, even in the special case of
> the zero-padding inverse (and forward) FFTs.
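The two normalization conventions mentioned here can be checked in a couple of lines (illustrative values only): numpy puts the whole 1/N factor on the inverse, whereas an FFTW-style round trip with no normalization at all returns N times the input.

```python
import numpy as np

a = np.arange(4.0)
A = np.fft.fft(a)                 # numpy's forward transform is unnormalized
b = np.fft.ifft(A)                # the inverse carries the whole 1/N factor
np.allclose(b, a)                 # True: numpy's round trip is exact

# FFTW-style convention: neither direction normalized, so the round
# trip picks up a factor of N
unnormalized_inverse = a.size * np.fft.ifft(A)
np.allclose(unnormalized_inverse, a.size * a)   # True
```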
I usually multiply the forward transform by the sample interval, in secs or
cm, and the unscaled inverse transform by the frequency sample interval, in
Hz or cm^-1. That treats both the forward and inverse fft like
approximations to the integral transforms and makes the units those of
spectral density. If you think trapezoidal rule, then you will also see
factors of .5 at the ends, but that is a sort of apodization that is
consistent with how Fourier series converge at discontinuities. In the
normal case where no interpolation is done the product of the sample
intervals is 1/N, so it reduces to the usual convention. Note that in your
example the sampling interval decreases when you do the interpolation, so if
you did another forward transform it would be scaled down to account for the
extra points in the data.
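The unit convention described above can be sketched as follows (the sample interval is an assumed value): multiplying the forward transform by dt and the unscaled inverse by df gives an exact round trip precisely because dt * df == 1/N.

```python
import numpy as np

n = 32
dt = 0.25                            # sample interval in seconds (assumed)
df = 1.0 / (n * dt)                  # frequency spacing in Hz; dt * df == 1/n

x = np.random.default_rng(1).standard_normal(n)
X = np.fft.fft(x) * dt               # forward transform in spectral-density units
x_back = (n * np.fft.ifft(X)) * df   # unscaled inverse (n * ifft) times df
np.allclose(x_back, x)               # True, since dt * df == 1/n
```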