[Numpy-discussion] Neighbourhood functions

Charles G Waldman cgw at alum.mit.edu
Sun Apr 8 22:36:42 CDT 2001


 > Robert.Denham at dnr.qld.gov.au wrote:
 > > 
 > > I am looking for efficient ways to code neighbourhood functions.  For
 > > example a neighbourhood add for an element in an array will simply be the sum
 > > of the neighbours:
 > > 
 > > 1 0 2
 > > 3 x 3   , then x becomes 7 (first order neighbour), 11 (2nd order) etc.
 > > 1 1 0
 > > 
 > > I would be interested in efficient ways of doing this for a whole array,
 > > something like a_nsum = neighbour_sum(a, order=1), where each element of
 > > a_nsum is the sum of the neighbours of the corresponding element in a.

As Scott Ransom already mentioned, these "neighborhood functions" are
usually referred to as convolutions and are widely used in
signal/image processing.  For large convolution kernels, the most
efficient implementation is to use Fourier methods, since a
convolution in the spatial domain maps to a multiplication in the
Fourier domain.  However, for small kernels this is inefficient,
because the time spent doing the forward and inverse FFTs dwarfs the
time it would take to just do the convolution.
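To make the Fourier route concrete, here is a minimal sketch of a 2-d
FFT convolution.  It uses the modern numpy.fft API rather than the FFT
module that ships with Numeric, so treat it as an illustration of the
idea, not code for your current install:

import numpy as np

def fft_convolve2d(a, kernel):
    # Convolution theorem: pointwise multiplication in the Fourier
    # domain equals (circular) convolution in the spatial domain.
    # Embed the kernel in an array the same shape as `a`, with the
    # kernel's centre moved to index (0, 0):
    kh, kw = kernel.shape
    big = np.zeros(a.shape, dtype=float)
    big[:kh, :kw] = kernel
    big = np.roll(big, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    # Note the edges wrap around (circular boundary), and convolution
    # flips the kernel, which is harmless here since the neighbourhood
    # kernels below are symmetric.
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(big)))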

There is a convolve function built into Numeric, but it is only
implemented for 1-d arrays.  It would be nice if this were
generalized... when somebody gets the time.
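For what the 1-d version buys you, here is a quick illustration
(shown with modern NumPy's convolve, whose interface is analogous to
Numeric's):

import numpy as np

a = np.array([1, 0, 2, 3, 3])
# Each element plus its two 1-d neighbours, zero-padded at the ends:
print(np.convolve(a, [1, 1, 1], mode='same'))       # [1 3 5 8 6]
# Neighbours only -- subtract the element itself:
print(np.convolve(a, [1, 1, 1], mode='same') - a)   # [0 3 3 5 3]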

In the meantime, here are two more comments which may help.
If your kernel is separable (i.e. a rank-one matrix, or equivalently,
the outer product of a column and a row vector) then the 2-d
convolution is equivalent to doing 2 1-d convolutions in sequence.
For your "first order neighborhood function" the kernel is 

0 1 0
1 0 1
0 1 0

which is not separable.  But for the "second order" case, the kernel
is

1 1 1
1 0 1
1 1 1

which is *almost* separable: it would be separable if that middle 0
were a 1.  So if you convolve with the kernel

1 1 1
1 1 1
1 1 1

and then subtract off the original array, you get exactly what you
were looking for.  Convolving with this all-ones kernel is essentially
the same as convolving each row with the 1-d kernel [1 1 1], then
transposing, convolving with [1 1 1] again, and transposing back.
This scales up to larger separable kernels.  I'm not sure how
efficient this winds up being; transposing large arrays can be slow
if the arrays are too large to sit in physical memory, since the
access pattern of a transpose is hell on virtual memory.
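Here is a sketch of that separable trick, written with modern NumPy
rather than 2001-era Numeric (the apply_along_axis calls stand in for
the convolve/transpose dance described above):

import numpy as np

def second_order_neighbour_sum(a):
    # Convolve each row with [1 1 1], then each column; together this
    # equals convolving with the all-ones 3x3 kernel, with edges
    # zero-padded by mode='same'.  Subtracting the original array
    # then removes the centre element from each sum.
    k = np.array([1, 1, 1])
    rows = np.apply_along_axis(np.convolve, 1, a, k, 'same')
    both = np.apply_along_axis(np.convolve, 0, rows, k, 'same')
    return both - a

Checking it against the example from the original post (taking x = 5):

>>> a = np.array([[1, 0, 2],
...               [3, 5, 3],
...               [1, 1, 0]])
>>> second_order_neighbour_sum(a)[1, 1]
11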

Finally, I would suggest that you look at the file Demo/life.py in the
Numeric Python distribution - in particular the functions shift_up,
shift_down, shift_left, and shift_right.  Using these you could write:

def first_order(arr):
    return (shift_left(arr) + shift_right(arr) +
            shift_up(arr) + shift_down(arr))

which is nice and terse and understandable, but unfortunately not very
memory-efficient: each shift allocates a full-size temporary array.
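If you don't have the Numeric distribution handy, the shift helpers
look roughly like this (a sketch, assuming zero-padding at the edges;
the real Demo/life.py may differ in details such as boundary
handling):

import numpy as np

def shift_up(a):
    # Each row moves up one; the vacated bottom row is zero.
    out = np.zeros(a.shape, dtype=a.dtype)
    out[:-1, :] = a[1:, :]
    return out

def shift_down(a):
    out = np.zeros(a.shape, dtype=a.dtype)
    out[1:, :] = a[:-1, :]
    return out

def shift_left(a):
    out = np.zeros(a.shape, dtype=a.dtype)
    out[:, :-1] = a[:, 1:]
    return out

def shift_right(a):
    out = np.zeros(a.shape, dtype=a.dtype)
    out[:, 1:] = a[:, :-1]
    return out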

If speed and memory usage are both critical, your best bet is probably
to write the functions in C and use SWIG to create Python bindings.


Charles G. Waldman
Object Environments
cgw at objenv.com



