[SciPy-User] Hilbert transform
Mon Aug 29 13:06:02 CDT 2011
On Mon, Aug 29, 2011 at 12:38, Alacast <firstname.lastname@example.org> wrote:
> I'm doing some analyses on sets of real-valued time series in which I want
> to know the envelope/instantaneous amplitude of each series in the set.
> Consequently, I've been taking the Hilbert transform (using
> scipy.signal.hilbert), then taking the absolute value of the result.
> The problem is that sometimes this process is far too slow. These time
> series can have on the order of 10^5 to 10^6 data points, and the sets can
> have up to 128 time series. Some datasets have been taking an hour or
> more to compute on a perfectly modern computing node (1 TB of RAM, plenty
> of 2.27 GHz cores, etc.). Is this expected behavior?
> I learned that Scipy's Hilbert transform implementation uses the FFT, and
> that Scipy's FFT implementation can run in O(n^2) time when the number of
> time points is prime. This happened in a few of my datasets, but I've now
> included a check and correction for that (if the length is prime, drop the
> last data point, so the new length is even and consequently not prime).
> Still, I observe a good amount of variability in run times, and they are
> rather long. Thoughts?
Having N be prime is just the extreme case. Basically, the FFT
recursively computes the DFT. It can only recurse on integer factors
of N, so any prime factor M must be computed the slow way, taking
O(M^2) steps. You probably have lengths with large prime factors
sitting around. A typical approach is to zero-pad your signal to the
next power of 2 or another size with only small prime factors.
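A minimal sketch of the padding approach: scipy.signal.hilbert takes an
optional N argument that zero-pads the internal FFT, and scipy.fft.next_fast_len
(available in modern SciPy; older versions have scipy.fftpack.next_fast_len)
picks the nearest size with only small prime factors. The length and data here
are just illustrative. One caveat: zero-padding perturbs the analytic signal
near the edges of the record, so treat the endpoints of the envelope with care.

```python
import numpy as np
from scipy.signal import hilbert
from scipy.fft import next_fast_len

# A length with a large prime factor -- an awkward size for a naive FFT.
x = np.random.default_rng(0).standard_normal(100003)

# Next "fast" length (small prime factors only); hilbert zero-pads to N
# internally, then we truncate the result back to the original length.
n = next_fast_len(len(x))
analytic = hilbert(x, N=n)[:len(x)]
envelope = np.abs(analytic)  # instantaneous amplitude
```

Note that recent SciPy releases use Bluestein's algorithm for prime FFT sizes,
so the worst case is no longer O(n^2), but padding to a fast length is still
usually quicker than transforming an awkward size directly.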
"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco