[SciPy-User] deconvolution of 1-D signals
Charles R Harris
Mon Aug 1 18:48:06 CDT 2011
On Mon, Aug 1, 2011 at 3:07 PM, Anne Archibald wrote:
> On 1 August 2011 10:14, Charles R Harris <email@example.com> wrote:
> > On Sun, Jul 31, 2011 at 11:20 PM, Anne Archibald
> > <firstname.lastname@example.org> wrote:
> >> I realize this discussion has gone rather far afield from efficient 1D
> >> deconvolution, but we do a funny thing in radio interferometry, and
> >> I'm curious whether this is normal for other kinds of deconvolution as
> >> well.
> >> In radio interferometry we obtain our images convolved with the
> >> so-called "dirty beam", a convolution kernel that has a nice narrow
> >> peak but usually a chaos of monstrous sidelobes often only marginally
> >> smaller than the main lobe. We use a different regularization
> >> condition to do our deconvolution: we treat the underlying image as a
> >> modest collection of point sources. (One can see why this appeals to
> >> astronomers.) Through an iterative process (the "CLEAN" algorithm and
> >> its many descendants) we obtain an estimate of this underlying image.
> >> But we very rarely actually work with this image directly. We normally
> >> convolve it with a sort of idealized version of our kernel without all
> >> the sidelobes. This then gives an image one might have obtained from a
> >> normal telescope the size of the interferometer array. (Apart from all
> >> the CLEAN artifacts.)
> >> What I'm wondering is, is this final step of convolving with an
> >> idealized version of the kernel standard practice elsewhere?
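[Editor's note: the iterative procedure Anne describes is the classic Högbom CLEAN. A minimal sketch in numpy, assuming a centered dirty-beam array and a loop gain of 0.1; real interferometry packages are far more elaborate:]

```python
import numpy as np

def hogbom_clean(dirty, psf, gain=0.1, niter=200, threshold=0.0):
    """Minimal Hogbom CLEAN sketch: repeatedly subtract a scaled, shifted
    copy of the dirty beam at the brightest residual pixel, accumulating
    the subtracted flux as point-source components in a model image."""
    residual = dirty.copy()
    model = np.zeros_like(dirty)
    pk = np.array(psf.shape) // 2  # peak of the (centered) dirty beam
    for _ in range(niter):
        y, x = np.unravel_index(np.argmax(np.abs(residual)), residual.shape)
        peak = residual[y, x]
        if np.abs(peak) < threshold:
            break
        model[y, x] += gain * peak  # record a point-source component
        # subtract the dirty beam centered on (y, x), clipped to the image
        y0, x0 = y - pk[0], x - pk[1]
        ys = slice(max(y0, 0), min(y0 + psf.shape[0], residual.shape[0]))
        xs = slice(max(x0, 0), min(x0 + psf.shape[1], residual.shape[1]))
        pys = slice(ys.start - y0, ys.stop - y0)
        pxs = slice(xs.start - x0, xs.stop - x0)
        residual[ys, xs] -= gain * peak * psf[pys, pxs]
    return model, residual
```

The returned `model` is the "modest collection of point sources"; `residual` is what remains of the dirty image after the components are subtracted.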
> > That's interesting. It sounds like fitting a parametric model, in this
> > case a collection of point sources, followed by a smoothing that in
> > some sense represents the error. Are there frequency aliasing problems
> > associated with the deconvolution?
> It's very like fitting a parametric model, yes, except that we don't
> care much about the model parameters. In fact we often end up with
> models that have clusters of "point sources" with positive and
> negative emissions trying to match up with what is in reality a single
> point source. This can be due to inadequacies of the dirty beam model
> (though usually we have a decent estimate) or simply noise. In any
> case smoothing with an idealized main lobe makes us much less
> sensitive to this kind of junk. Plus if you're going to do this
> anyway, it can make life much easier to constrain your point sources
> to a grid.
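[Editor's note: the "smoothing with an idealized main lobe" step is conventionally done by convolving the component model with a Gaussian "clean beam" and adding back the residuals. A rough sketch, assuming a hypothetical `restore` helper and a beam width given in pixels:]

```python
import numpy as np
from scipy.signal import fftconvolve

def restore(model, residual, beam_fwhm_pix):
    """Convolve a point-source model with an idealized Gaussian 'clean
    beam' (peak-normalized) and add back the residuals (sketch only)."""
    sigma = beam_fwhm_pix / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    half = int(4 * sigma) + 1
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    beam = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return fftconvolve(model, beam, mode="same") + residual
```

Adding the residuals back keeps whatever flux CLEAN never modeled, so the restored image degrades gracefully when the component model is imperfect.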
> (As an aside, this trick - of fitting a parametric model but then
> extracting "observational" parameters for comparison to reduce
> model-sensitivity - came up with some X-ray spectral data I was
> looking at: you need to use a model to pull out the instrumental
> effects, but if you report (say) the model luminosity in a band your
> instrument can detect, then it doesn't much matter whether your model
> thinks the photons are thermal or power-law. In principle you can even
> do this trick with published model parameters, but you run into the
> problem that people don't give full covariance matrices for the fitted
> parameters so you get spurious uncertainties.)
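[Editor's note: the spurious-uncertainty problem is easy to see with a toy numpy calculation, not taken from the thread. For a derived quantity g = p1 + p2 with strongly anti-correlated fitted parameters, propagating only the diagonal of the covariance matrix badly overstates the error:]

```python
import numpy as np

# Gradient of the derived quantity g = p1 + p2 w.r.t. the parameters.
J = np.array([1.0, 1.0])
# Full covariance of a fit with strongly anti-correlated parameters.
cov_full = np.array([[1.0, -0.9],
                     [-0.9, 1.0]])
# Published error bars typically give you only the diagonal.
cov_diag = np.diag(np.diag(cov_full))

var_full = J @ cov_full @ J   # correct propagated variance: 0.2
var_diag = J @ cov_diag @ J   # spurious variance from diagonal only: 2.0
```

The anti-correlation cancels in the sum, so the true variance is ten times smaller than what the diagonal alone suggests; this is exactly the information lost when papers omit the covariance matrix.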
> As far as frequency aliasing, there's not so much coming from the
> deconvolution, since our beam is so irregular. The actual observation
> samples image spatial frequencies rather badly; it's the price we pay
> for not having a filled aperture. So we're often simply missing
> information on spatial frequencies, most often the lowest ones
> (because there's a limit on how close you can put tracking dishes
> together without shadowing). But I don't think this is a deconvolution
> issue; in fact in situations where people are really pushing the
> limits of interferometry, like the millimeter-wave interferometric
> observations of the black hole at the center of our galaxy, you often
> give up on producing an image at all and fit (say) an emission model
> including the event horizon directly to the observed spatial frequencies.
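[Editor's note: a toy illustration of fitting a source model directly to sampled spatial frequencies rather than imaging, with invented numbers. A circular Gaussian source of flux S has visibility amplitude V(q) = S * exp(-q^2 / (2 q0^2)) as a function of baseline length q in wavelengths, so sparse, irregular uv samples can still constrain the model:]

```python
import numpy as np
from scipy.optimize import curve_fit

def vis_model(q, S, q0):
    """Visibility amplitude of a circular Gaussian source: flux S,
    visibility half-width q0 (in wavelengths)."""
    return S * np.exp(-q**2 / (2.0 * q0**2))

rng = np.random.default_rng(0)
q = rng.uniform(1e8, 4e9, size=50)        # sparse, irregular baseline lengths
true_S, true_q0 = 2.5, 1.6e9              # invented "truth" for the demo
vis = vis_model(q, true_S, true_q0)
vis += 0.01 * rng.normal(size=q.size)     # a little thermal noise

popt, pcov = curve_fit(vis_model, q, vis, p0=[1.0, 1e9])
```

No image is ever formed: the model is compared to the measurements in the Fourier domain, which sidesteps the deconvolution entirely at the cost of committing to a parametric source model.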
Thanks Anne, it's a good trick to know about.