[SciPy-User] Rotated, Anisotropic Gaussian Filtering (Kernel Density Estimation)
Thu Oct 18 15:38:34 CDT 2012
GitHub seems to be down and I couldn't look at your code. Following on from what Aronne has said, however: if you can get the desired results by rotating your data and applying ndimage.gaussian_filter, is there anything stopping you from just generating an anisotropic rotated kernel and then using ndimage.convolve?
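A minimal sketch of that idea (the function name and parameters are mine, not from the thread): build a Gaussian kernel whose principal axes have different widths and are rotated by an angle theta, then apply it with scipy.ndimage.convolve.

```python
import numpy as np
from scipy import ndimage

def rotated_gaussian_kernel(sigma_x, sigma_y, theta, truncate=4.0):
    """Anisotropic Gaussian kernel with standard deviations sigma_x and
    sigma_y along axes rotated by theta radians, normalized to sum to 1."""
    # Half-width large enough to contain the rotated ellipse.
    r = int(truncate * max(sigma_x, sigma_y) + 0.5)
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    # Rotate grid coordinates into the kernel's principal-axis frame.
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    k = np.exp(-0.5 * ((xr / sigma_x) ** 2 + (yr / sigma_y) ** 2))
    return k / k.sum()

# A single "event" in the grid: the output is just the kernel itself,
# which is an easy way to eyeball the rotation.
data = np.zeros((64, 64))
data[32, 32] = 1.0
kern = rotated_gaussian_kernel(3.0, 1.0, np.deg2rad(30.0))
density = ndimage.convolve(data, kern, mode='constant')
```

Because the kernel is normalized, the density field integrates to the number of events (here, 1) as long as the kernel does not run off the edge of the grid.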
From: Aronne Merrelli <firstname.lastname@example.org>
To: SciPy Users List <email@example.com>
Sent: Friday, 19 October 2012 1:38 AM
Subject: Re: [SciPy-User] Rotated, Anisotropic Gaussian Filtering (Kernel Density Estimation)
On Tue, Oct 16, 2012 at 1:02 PM, Patrick Marsh <firstname.lastname@example.org> wrote:
> I know that people on this list are way smarter than I, so hopefully someone
> can help me out here.
> I have a gridded dataset of 1s and 0s with which I'm needing to apply a
> rotated, anisotropic Gaussian filter to achieve a kernel density estimate.
> Currently I have some Cython code that I wrote to do this. The code for this
> can be found here: https://gist.github.com/3900591. In essence, I search
> through the grid for a value of 1. When a 1 is found, a weighting function
> is applied to all the surrounding grid points. There are rotation matrices
> at work throughout the code to handle the fact the axes of the anisotropic
> Gaussian kernel can be off-cartesian axes. The code works, and is reasonably
> efficient for small domains or large grid spacing. However, as grid spacing
> decreases, the performance takes a substantial hit.
I pulled down the gist code to run the cython annotate on it, and
found that there were some declarations missing. I added these at the
top of the file:
ctypedef np.float64_t DTYPE64_t
DTYPE64 = np.float64
from libc.math cimport exp, sin, cos
And it compiles and looks OK to me; there isn't anything obvious that
would make it slow. However, depending on how you defined exp, sin,
cos, in the file you are actually running, if you are linking those
back to the numpy versions instead of the C versions, this code would
be pretty slow.
Otherwise, just after skimming the cython code, it looks like the
gaussian kernel (partweight in the cython code) is fixed, so this is
really just a convolution. If you compute partweight once, in python,
then you can just use the convolution function in scipy. This should
be as fast as any simple cython code, I'd think, and it is a lot simpler.
If you try that, is it enough? If not, can you be more specific as to
what cases you have where the performance is bad? Specifically: what
size is the data array? what size is the kernel? what number of points
are non zero in the data array?
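To make the suggestion concrete, here is a rough sketch (the variable names are mine; only `partweight` comes from the gist): compute the fixed kernel once in Python, then hand the whole grid to scipy's convolution instead of looping in Cython. For kernels more than a few pixels across, scipy.signal.fftconvolve is usually the fastest option.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
# Sparse 0/1 grid standing in for the gridded event data.
data = (rng.random((200, 200)) < 0.01).astype(float)

# Any fixed kernel works here; a small isotropic Gaussian as a stand-in
# for the precomputed partweight array.
r = 10
y, x = np.mgrid[-r:r + 1, -r:r + 1]
partweight = np.exp(-0.5 * (x**2 + y**2) / 3.0**2)
partweight /= partweight.sum()

# One call replaces the nested Cython loops over grid points.
density = signal.fftconvolve(data, partweight, mode='same')
```

With `mode='same'` the output has the same shape as the input; mass near the boundary is lost rather than wrapped, which matches the behavior of zero-padded edges.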