[SciPy-User] deconvolution of 1-D signals
Charles R Harris
Mon Aug 1 09:19:34 CDT 2011
On Mon, Aug 1, 2011 at 1:03 AM, David Baddeley wrote:
> Hi Ralf,
> 5-15 times smaller would probably be fine, although you might want to watch
> the edges in the reconstruction - if they're at different dc levels you'll
> get edge artifacts (within ~ 1 kernel width of the edges). I'd tend to avoid
> spline filtering (or any form of noise reduction) before deconvolution, as
> this will also transform the data in a way not explained by the model you're
> using to deconvolve with.
> Wiener filtering is a 2-liner ->
> H = fft(kernel)
> deconvolved = ifftshift(ifft(fft(signal)*np.conj(H)/(H*np.conj(H) + lamb)))
> where lamb is your regularisation parameter (lambda itself is a reserved
> word in Python), and white noise is assumed.
> There are various methods for choosing lambda optimally, but most people
> tend to use trial and error.
> Iterative methods are typically ~1-2 orders of magnitude slower than a
> Wiener filter, but with fast FFT libraries and modern computers still quite
> reasonable for modest data sizes (a 3D image stack of ~512x512x50 pixels
> will tend to be done in a bit under a minute; I can't really comment on 1D
> data, but unless your signal is very long I'd expect it to be significantly
> quicker). FFTs scale as O(n log n), so they will generally dramatically
> outperform approaches based on simple convolution or filtering (O(n**2))
> for large n. This might make an iterative approach using FFTs faster than
> something like scipy.signal.deconvolve if your kernel is large.
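The quoted two-liner can be fleshed out into a self-contained sketch. The toy spike data, the Gaussian kernel, and the name `lamb` for the regularisation parameter are illustrative choices, not from the post:

```python
import numpy as np
from numpy.fft import fft, ifft, ifftshift

rng = np.random.default_rng(0)

# Toy data: three spikes blurred by a Gaussian kernel, plus white noise.
n = 256
true = np.zeros(n)
true[[40, 100, 180]] = [1.0, 2.0, 1.5]

x = np.arange(n) - n // 2
kernel = np.exp(-x**2 / (2 * 3.0**2))
kernel /= kernel.sum()                  # centred at n//2, unit area

# Circular blur with the zero-centred kernel, then additive white noise.
signal = np.real(ifft(fft(true) * fft(ifftshift(kernel)))) \
         + 0.01 * rng.standard_normal(n)

# Wiener deconvolution as in the post; lamb is the regularisation
# parameter, chosen by trial and error.
lamb = 1e-3
H = fft(kernel)
deconvolved = np.real(ifftshift(ifft(fft(signal) * np.conj(H)
                                     / (H * np.conj(H) + lamb))))
```

The trailing `ifftshift` undoes the half-length circular shift introduced by taking the FFT of a kernel centred in the middle of its array; increasing `lamb` trades resolution for noise suppression.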
The main problem with Wiener filtering is that it assumes that both the
signal and noise are Gaussian. For instance, if you are looking for spikes
in noise, the assumption implies that the spike amplitudes follow a Gaussian
distribution. The Wiener filter is then the Bayesian estimate that follows
from those assumptions, but those might not be the best assumptions for the
data.
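When the Gaussian assumptions are doubtful, the iterative methods mentioned earlier are a common alternative. One concrete choice is Richardson-Lucy, which assumes Poisson noise and keeps the estimate non-negative; the post does not name a specific method, so this is a sketch of one option (the function and toy data below are illustrative, not scipy API):

```python
import numpy as np
from numpy.fft import fft, ifft, ifftshift

def richardson_lucy(observed, kernel, iterations=50):
    """Richardson-Lucy deconvolution via FFTs (a sketch, not scipy API).

    Assumes a kernel centred in its array with unit area, and
    non-negative data; suited to Poisson rather than Gaussian noise.
    """
    K = fft(ifftshift(kernel))          # kernel re-centred at sample 0
    estimate = np.full_like(observed, observed.mean())
    for _ in range(iterations):
        blurred = np.real(ifft(fft(estimate) * K))
        ratio = observed / np.maximum(blurred, 1e-12)
        # Multiplying by conj(K) correlates with the kernel, i.e.
        # convolves with the mirrored kernel, as the RL update requires.
        estimate *= np.real(ifft(fft(ratio) * np.conj(K)))
    return estimate

# Toy demo: spikes blurred (circularly) by a Gaussian kernel.
n = 256
true = np.zeros(n)
true[[40, 100, 180]] = [1.0, 2.0, 1.5]
x = np.arange(n) - n // 2
kernel = np.exp(-x**2 / (2 * 3.0**2))
kernel /= kernel.sum()
observed = np.real(ifft(fft(true) * fft(ifftshift(kernel))))
restored = richardson_lucy(observed, kernel, iterations=100)
```

Each iteration costs a few FFTs, which is where the O(n log n) scaling discussed above comes from.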