[SciPy-User] 2D slice of transformed data

gary ruben gruben@bigpond.net...
Wed Mar 23 18:00:54 CDT 2011


I'm not really sure of the best approach here, but you might consider
downsampling your images to speed things up. Then, once you have valid
parameters for the affine transformation, apply them to the full
volume. Take a look at SIFT registration: scikits.image can read SIFT
features generated by an external program
(http://stefanv.github.com/scikits.image/api/scikits.image.io.html#load-sift)
and may be able to register your images. I have a vague memory that
there may also be an approach using the radon transform that might
work for your case.
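A rough sketch of the downsample-first idea, assuming a 4D (T, Z, Y, X)
array and scipy.ndimage (the array sizes, zoom factor, and transform
parameters below are all illustrative, not from your data):

```python
import numpy as np
from scipy import ndimage

# Hypothetical full-resolution dataset: (T, Z, Y, X)
data = np.random.rand(4, 8, 64, 64)

# Downsample only the spatial X/Y axes (4x here) so that searching
# for transform parameters is cheap; leave T and Z untouched.
small = ndimage.zoom(data, (1, 1, 0.25, 0.25), order=1)

# ... estimate the 5 transform parameters on `small` ...
# Suppose the search found a rotation about Z, a uniform XY scale,
# and Z/Y/X shifts (illustrative values):
angle, scale = 0.05, 1.01            # radians, unitless
shifts = np.array([0.0, 1.5, -2.0])  # z, y, x in full-res pixels

# Build the 3D matrix for affine_transform (which maps output
# coordinates back to input coordinates): rotate about Z, scale XY.
c, s = np.cos(angle), np.sin(angle)
matrix = np.array([[1.0, 0.0,        0.0       ],
                   [0.0, scale * c, -scale * s],
                   [0.0, scale * s,  scale * c]])

# Apply the same parameters to each timepoint of the full volume.
aligned = np.empty_like(data)
for t in range(data.shape[0]):
    aligned[t] = ndimage.affine_transform(data[t], matrix, offset=shifts)
```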

Gary R.

On Thu, Mar 24, 2011 at 9:00 AM, Chris Weisiger <cweisiger@msg.ucsf.edu> wrote:
> In preface, I'm not remotely an expert at array manipulation here. I'm an
> experienced programmer, but not an experienced *scientific* programmer. I'm
> sure what I want to do is possible, and I'm pretty certain it's even
> possible to do efficiently, but figuring out the actual implementation is
> giving me fits.
>
> I have two four-dimensional arrays of data: time, Z, Y, X. These represent
> microscopy data taken of the same sample with two different cameras. Their
> views don't quite match up if you overlay them, so we have a
> three-dimensional transform to align one array with the other. That
> transformation consists of X, Y, and Z translations (shifts), rotation about
> the Z axis, and equal scaling in X and Y -- thus, the transformation has 5
> parameters. I can perform the transformation on the data without difficulty
> with ndimage.affine_transform, but because we typically have hundreds of
> millions of pixels in one array, it takes a moderately long time. A
> representative array would be 30x50x512x512 or thereabouts.
>
> I'm writing a program to allow users to adjust the transformation and see
> how well-aligned the data looks from several perspectives. In addition to
> the traditional XY view, we also want to show XZ and YZ views, as well as
> kymographs (e.g. TX, TY, TZ views). Thus, I need to be able to show 2D
> slices of the transformed data in a timely fashion. These slices are always
> perpendicular to two axes (e.g. an XY slice passing through T = 0, Z = 20,
> or a TZ slice passing through X = 256, Y = 256), never diagonal. It seems
> like the fast way to do this would be to take each pixel in the desired
> slice, apply the reverse transform, and figure out where in the original
> data it came from. But I'm having trouble figuring out how to efficiently do
> this.
>
> I could construct a 3D array with shape (length of axis 1, length of axis
> 2, 4), such that each position in the array is a 4-tuple of the
> coordinates of the pixel in the desired slice. For example, if doing a YX
> slice at T = 10, Z = 20, the array would look like [[[10, 20, 0, 0], [10,
> 20, 1, 0], [10, 20, 2, 0], ...], [[10, 20, 0, 1], [10, 20, 1, 1], ...]]. Then
> perhaps there'd be some way to efficiently apply the inverse transform to
> each coordinate tuple, then use ndimage.map_coordinates to turn those into
> pixel data. But I haven't managed to figure that out yet.
>
> By any chance is this already solved? If not, any suggestions / assistance
> would be wonderful.
>
> -Chris
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User@scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>
>
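For the slice-extraction question above, the inverse-transform plus
map_coordinates idea can be sketched roughly as follows. Since the
transform only touches Z, Y, X, each output pixel of a slice can be
pulled back through a 3D inverse affine and sampled from the volume
at one timepoint (the inverse-parameter values here are illustrative):

```python
import numpy as np
from scipy import ndimage

# Hypothetical aligned 4D dataset: (T, Z, Y, X)
data = np.random.rand(5, 30, 64, 64)

# Inverse of the alignment transform (rotation about Z, uniform XY
# scale, shifts) -- illustrative values, not fitted to anything.
angle, scale = -0.05, 1.0 / 1.01
c, s = np.cos(angle), np.sin(angle)
inv_matrix = np.array([[1.0, 0.0,        0.0       ],
                       [0.0, scale * c, -scale * s],
                       [0.0, scale * s,  scale * c]])
inv_shift = np.array([0.0, -1.5, 2.0])  # z, y, x

def yx_slice(data, t, z):
    """Sample the transformed YX plane at fixed t, z by mapping each
    output pixel back into the original volume's frame."""
    ny, nx = data.shape[2], data.shape[3]
    yy, xx = np.mgrid[0:ny, 0:nx]
    # (z, y, x) coordinates for every pixel in the slice: shape (3, ny*nx)
    coords = np.vstack([np.full(ny * nx, z, dtype=float),
                        yy.ravel().astype(float),
                        xx.ravel().astype(float)])
    # Apply the inverse transform to all coordinates at once.
    src = inv_matrix @ coords + inv_shift[:, None]
    # Interpolate from the 3D volume at this timepoint only --
    # far cheaper than transforming the whole array.
    out = ndimage.map_coordinates(data[t], src, order=1, mode='nearest')
    return out.reshape(ny, nx)

sl = yx_slice(data, t=0, z=10)
```

The TX/TY/TZ kymograph views would work the same way, looping the fixed
timepoint index instead of the spatial one, since T is not transformed.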
