[Numpy-discussion] shift for optimal superimposition of two 3D matrices according to correlation computed using FFT
Thu Apr 9 03:56:36 CDT 2009
> Does it work to use a cutoff of half the size of the input arrays in
> each dimension? This is equivalent to calculating both shifts (the
> positive and negative) and using whichever has a smaller absolute value.
No, unfortunately the cutoff is not half of the size in each dimension.
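For clarity, this is how I understood your suggestion: map the correlation-peak index to a signed shift by treating indices beyond half the axis length as negative. A minimal sketch (the function name is mine, just for illustration):

```python
import numpy as np

def peak_to_signed_shift(idx, n):
    """Map a correlation-peak index in [0, n) to a signed shift.

    Wrap-around convention: indices past n//2 are read as negative
    shifts, i.e. the "half the size" cutoff in each dimension.
    """
    return idx - n if idx > n // 2 else idx

# e.g. on a length-8 axis, peak index 6 would mean a shift of -2
print(peak_to_signed_shift(6, 8))  # -> -2
print(peak_to_signed_shift(2, 8))  # -> 2
```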
> Alternately, you could use numpy.roll to shift the data (one axis at a
> time). Since roll wraps around, you wouldn't need to bother figuring
> out which shift is "correct".
Maybe I misunderstand what you mean by this, but as far as I can tell
this will not help me, because I do not know the number of positions I
would need to roll. I think the <shift> parameter to the roll function
would have to be the very cutoff value I am looking for.
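To make sure we are talking about the same thing, here is how I read the roll suggestion; since the shift wraps around, rolling back by the negated amounts recovers the original (a small sketch):

```python
import numpy as np

# rolling a 3-D array one axis at a time; np.roll wraps around,
# so rolling by n along an axis of length n is the identity
a = np.arange(24).reshape(2, 3, 4)
b = np.roll(np.roll(np.roll(a, 1, axis=0), 2, axis=1), 3, axis=2)

# rolling back by the negated shifts recovers the original array
c = np.roll(np.roll(np.roll(b, -1, axis=0), -2, axis=1), -3, axis=2)
print(np.array_equal(a, c))  # -> True
```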
If you meant that I could check some property for every shift and hence
try every possible shift along each axis: this is computationally not
feasible for me. Verifying correctness requires time-consuming
operations, and with a 200x200x200 matrix (which is very roughly as
large as it gets for me) this would ruin the speed benefit of using
the FFT.
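For context, the FFT-based correlation I am using looks roughly like this (a minimal sketch with a synthetic example; the peak indices it returns are in [0, n) per axis, which is exactly where my cutoff ambiguity comes from):

```python
import numpy as np

def fft_shift_estimate(a, b):
    """Locate the peak of the circular cross-correlation of a and b.

    Uses the real-valued FFT; the returned per-axis indices lie in
    [0, n), so each is still ambiguous between a positive shift and
    the equivalent negative (wrapped) one.
    """
    corr = np.fft.irfftn(np.fft.rfftn(a) * np.conj(np.fft.rfftn(b)),
                         s=a.shape)
    return np.unravel_index(np.argmax(corr), corr.shape)

# synthetic test: a single spike, then shift it by a known amount
a = np.zeros((8, 8, 8))
a[2, 3, 4] = 1.0
b = np.roll(np.roll(np.roll(a, -1, axis=0), -2, axis=1), -3, axis=2)

# the peak lands at (1, 2, 3): rolling b by that amount realigns it to a
print(fft_shift_estimate(a, b))
```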
> Finally, you could not use FFTs but instead directly optimize a
> transformation between the two, using scipy.ndimage's affine
> transforms and scipy.optimize's numerical optimizers.
Sadly, this is no option for me, as this is a research project and I
need to use FFT.
One possible problem is that I am not sure about the benefits of using
the complex-valued FFT (numpy.fft.fftn) compared to the real-valued
version for my matrix of reals. At the moment I use numpy.fft.rfftn,
since for real input its output matches the complex FFT (minus the
redundant, Hermitian-symmetric half). Maybe (just guessing) some
information about the cutoff hides in the discarded half of the complex
result?!?
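A quick check suggests nothing is lost: for real input, rfftn equals the non-redundant half of fftn along the last axis, and the rest follows from Hermitian symmetry (sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 5, 6))

full = np.fft.fftn(x)   # complex FFT, full spectrum
half = np.fft.rfftn(x)  # real FFT, last axis truncated to n//2 + 1

# rfftn matches the kept half of fftn; the dropped half is the complex
# conjugate mirror, so it carries no extra information about the shift
print(np.allclose(full[..., : x.shape[-1] // 2 + 1], half))  # -> True
```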
Does anybody have other suggestions?