[SciPy-user] ndimage.convolve and NaN...
Mon Jul 2 13:33:15 CDT 2007
On 02/07/07, fred <firstname.lastname@example.org> wrote:
> I want to apply a convolution to a 2D data array, which has a few NaNs.
> It works quite well, but the "holes" (ie NaNs) are widened by the convolution.
This is actually unavoidable. Think of it this way: NaN means "I have
no idea what value goes here". If you replaced them by 10^20, the
convolution would spread this value over the nearby pixels, so if you
don't know what value goes in the NaN, you don't know what value goes
in the nearby pixels either.
That said, some implementations of convolution widen the NaNs more
than is mathematically necessary: if you pad the convolution kernel
with zeros, that shouldn't change the result, but it does (because
0*NaN is NaN). The extreme example of this is doing the convolution
with a single FFT, where your whole array turns into NaNs.
> By the way, as the edges are processed correctly, I wonder if it could
> be possible to process the holes like the edges, ie without widening them.
There are technical solutions to this problem, but do think about
definitions: mathematically, what do you want to happen? For
definiteness, let's suppose your kernel is a Gaussian blur, and you've
got a sizable hole full of NaNs. You can treat them as zeros, which
will avoid their values being added to the neighbours, but their
neighbours will become darker: the blur will distribute their value
over nearby pixels, but the hole will not distribute any value into
them. If you want only the pixels that are unaffected by this
darkening, those are pretty much exactly the ones that are not turned
into NaNs by the current procedure.
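One common technical solution (my own sketch, not something the original
post prescribes) is normalized convolution: treat the NaNs as zeros, but
also convolve a validity mask with the same kernel and divide by it, so
the pixels near the hole are not darkened:

```python
import numpy as np
from scipy import ndimage

def nan_convolve(data, kernel):
    """Normalized convolution sketch: NaNs are treated as missing,
    and each output pixel is renormalized by the total kernel weight
    that actually landed on valid data."""
    valid = ~np.isnan(data)
    filled = np.where(valid, data, 0.0)
    num = ndimage.convolve(filled, kernel)
    den = ndimage.convolve(valid.astype(float), kernel)
    # Where no valid data was under the kernel, the result is undefined.
    return num / np.where(den == 0, np.nan, den)

# On constant data with a hole, the renormalization recovers the
# constant everywhere, instead of darkening the hole's neighbours.
a = np.full((7, 7), 2.0)
a[3, 3] = np.nan
k = np.ones((3, 3)) / 9.0
out = nan_convolve(a, k)
```

Note the definitional caveat from above still applies: inside the hole
this just interpolates from the neighbours, which may or may not be what
you want; it is a choice, not the mathematically "right" answer.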