[Numpy-discussion] nd_image.affine_transform edge effects
Thu Mar 15 17:01:55 CDT 2007
Thanks for the suggestions!
> Is this related to
> in any way?
As far as I can see, the problems look different, but thanks for
the example of how to document this. I did confirm that your example
exhibits the same behaviour under numarray, in case that is useful.
> Code snippets to illustrate the problem would be welcome.
OK, I have had a go at producing a code snippet. I apologize that
it is based on numarray rather than numpy, since I'm using STScI
Python, but I think it should be very easy to convert if you have
numpy.
What I am doing is transforming overlapping input images onto a
common, larger grid and co-adding them. Although I'm using
affine_transform on 3D data from FITS images, the issue can be
illustrated using a simple 1D translation of a single 2D test
array. The input
values are just [4., 3., 2., 1.] in each row. With a translation of
-0.1, the values should therefore be something like
[X, 3.1, 2.1, 1.1, X, X], where the Xs represent points outside the
original data range. What I actually get, however, is roughly
[X, 3.1, 2.1, 1.0, 1.9, X]. The 5th value of 1.9 contaminates the
co-added data in the final output array. Now that I'm looking at
this element by element, I suppose the bad value of 1.9 is just a
result of extrapolating in order to preserve the original number of
data points, isn't it? Sorry I wasn't clear on that in my original
post -- but surely a blank value (as specified by cval) would be
better?
I suppose I could work around this by blanking out the extrapolated
column after doing the affine_transform. I could calculate which
column to blank out from the sign of the offset and the input array
dimensions. It seems pretty messy and inefficient, though.
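For what it's worth, here is roughly what I mean, written against
numpy/scipy.ndimage (whose affine_transform takes the same arguments)
so it is easy to check. blank_extrapolated is just my own illustrative
helper, and I use order=1 so the interior values can be verified by
hand:

```python
import numpy as np
from scipy import ndimage as ndi

def blank_extrapolated(out, offset, in_width, cval):
    # Output column j samples input position j + offset; blank any
    # column whose sample point falls outside [0, in_width - 1].
    pos = np.arange(out.shape[1]) + offset
    out[:, (pos < 0) | (pos > in_width - 1)] = cval
    return out

I = np.zeros((2, 4), np.float32)
I[:, :] = np.arange(4.0, 0.0, -1.0)        # each row is [4, 3, 2, 1]
trmatrix = np.identity(2)
troffset = (0.0, -0.1)

out = ndi.affine_transform(I, trmatrix, troffset, output_shape=(2, 6),
                           order=1, mode='constant', cval=-1.0)
out = blank_extrapolated(out, troffset[1], I.shape[1], cval=-1.0)
# each row is now [-1, 3.1, 2.1, 1.1, -1, -1]
```

With the blanking applied, only the three columns that genuinely
sample inside the input survive, which is what I'd want before
co-adding.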
Another idea is to split the translation into integer and fractional
parts, keep the input and output array dimensions the same initially
and then copy the output into a larger array with integer offsets.
That is messy to keep track of, though. Maybe a parameter could
instead be added to affine_transform that tells it to shrink the
number of elements instead of extrapolating? I'd be a bit out of my
depth trying to implement that myself, though, even if the authors
agree... (maybe in a few months).
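To make the splitting idea concrete, here is a rough sketch, again
written against numpy/scipy.ndimage so it can be checked. The
bookkeeping is my own and assumes a negative fractional part (as
here); I use a larger shift of -2.1 so the integer part is
non-trivial, and order=1 so the values are easy to verify:

```python
import math
import numpy as np
from scipy import ndimage as ndi

I = np.zeros((2, 4), np.float32)
I[:, :] = np.arange(4.0, 0.0, -1.0)        # each row is [4, 3, 2, 1]

shift = -2.1                               # total column offset
int_off = math.trunc(shift)                # -2
frac = shift - int_off                     # -0.1

# Fractional part: same-shape transform, so no extrapolated tail is
# produced (only column 0 samples outside the original data).
small = ndi.affine_transform(I, np.identity(2), (0.0, frac),
                             order=1, mode='constant', cval=-1.0)

# Integer part: a plain slice copy into the larger grid. Valid small
# columns 1..3 land at output columns 1 - int_off .. 3 - int_off.
big = np.full((2, 6), -1.0, dtype=np.float32)
big[:, 1 - int_off:4 - int_off] = small[:, 1:4]
# each row is now [-1, -1, -1, 3.1, 2.1, 1.1]
```

This keeps the interpolation itself confined to the original grid,
but as I said, tracking the offsets and the invalid edge column gets
fiddly.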
Can anyone comment on whether this problem should be considered a
bug, or whether it's intended behaviour that I should work around?
The code snippet follows below. Thanks for your patience with
someone who isn't accustomed to posting questions like this!
import numarray as N
import numarray.nd_image as ndi
# Create a 2D test pattern:
I = N.zeros((2,4),N.Float32)
I[:,:] = N.arange(4.0, 0.0, -1.0)
# Transformation parameters for a simple translation in 1D:
trmatrix = N.array([[1,0],[0,1]])
troffset = (0.0, -0.1)
# Apply the offset to the test pattern:
I_off1 = ndi.affine_transform(I, trmatrix, troffset, order=3, mode='constant',
                              cval=-1.0, prefilter=False)
I_off2 = ndi.affine_transform(I, trmatrix, troffset, order=3, mode='constant',
cval=-1.0, output_shape=(2,6), prefilter=False)
# Compare the data before and after interpolation:
print I
print I_off1
print I_off2