[SciPy-User] Rotated, Anisotropic Gaussian Filtering (Kernel Density Estimation)
Daπid
davidmenhur@gmail....
Fri Oct 19 09:34:39 CDT 2012
If you are going to apply different filters to the same image, it may
be faster to switch to the Fourier transform. In this case, the
result is the IFT of the FT of your data multiplied by the FT of your
kernel.
Doing all the FTs may be expensive, but it pays off if you are reusing
the data, and Scipy is linked to very optimized FFT libraries.
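As a sketch of the FFT route (the kernel here is a small isotropic Gaussian
for illustration, not the thread's rotated anisotropic one), the convolution
theorem can be checked directly against a spatial-domain convolution:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
data = rng.random((64, 64))

# Build a small Gaussian kernel once in NumPy (illustrative values).
x = np.arange(-8, 9)
xx, yy = np.meshgrid(x, x)
kernel = np.exp(-(xx**2 + yy**2) / (2 * 3.0**2))
kernel /= kernel.sum()

# FFT-based convolution: IFT(FT(data) * FT(kernel)).
# scipy.signal.fftconvolve handles the padding and transforms.
smoothed = signal.fftconvolve(data, kernel, mode='same')

# Direct spatial convolution gives the same result up to floating error.
direct = signal.convolve2d(data, kernel, mode='same')
assert np.allclose(smoothed, direct)
```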
On Thu, Oct 18, 2012 at 6:20 PM, Patrick Marsh <patrick.marsh@noaa.gov> wrote:
>> Patrick,
>>
>> I pulled down the gist code to run the cython annotate on it, and
>> found that there were some declarations missing. I added these at the
>> top of the file:
>>
>> ctypedef np.float64_t DTYPE64_t
>> DTYPE64 = np.float64
>> from libc.math cimport exp, sin, cos
>>
>> And it compiles and looks OK to me; there isn't anything obvious that
>> would make it slow. However, it depends on how exp, sin, and cos are
>> defined in the file you are actually running: if those calls resolve to
>> the numpy versions instead of the C versions, this code will be pretty
>> slow.
>
>
> Hi, Aronne,
>
> Thanks for the great response; I really appreciate it. You did catch a
> couple of declarations I missed when posting the gist. (I created the gist
> from a module I have, and forgot to copy all of the header stuff.) I've
> fixed that now, but essentially I declare exp, cos, sin, and fabs as:
>
>
>
> cdef extern from 'math.h':
>     float exp(float x)
>     float cos(float x)
>     float sin(float x)
>     float fabs(float x)
>
>
> Is the way you define the math functions better/faster? My (limited)
> thinking is that your method and my method achieve the same thing, but my
> understanding of Cython is simply from hacking around with it and could
> easily be in error.
>
>
>
>>
>> Otherwise, just after skimming the cython code, it looks like the
>> gaussian kernel (partweight in the cython code) is fixed, so this is
>> really just a convolution. If you compute partweight once, in python,
>> then you can just use the convolution function in scipy. This should
>> be as fast as any simple cython code, I'd think, and it is a lot
>> simpler.
>
>
>
> When I first wrote this code (a couple of years ago) I didn't know about
> convolutions. However, as I'm learning about them, I see that what I'm doing
> really is a convolution. My attempts to use the convolution function in
> scipy are slower than my Cython code. Maybe I'm doing something wrong? The
> line below is how I'm calling the convolution:
>
> smooth_hist = ndimage.filters.convolve(data, weights, mode='constant',
> cval=0.0, origin=0)
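For reference, a self-contained sketch of that call: precomputing a rotated,
anisotropic Gaussian kernel once in NumPy and handing it to scipy.ndimage.
The sigmas, rotation angle, and grid size here are illustrative, not the
thread's 40 km / 800x600 case:

```python
import numpy as np
from scipy import ndimage

def rotated_gaussian_kernel(sigma_x, sigma_y, theta, half):
    """Anisotropic Gaussian on a (2*half+1)^2 grid, rotated by theta radians."""
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate the grid coordinates into the kernel's principal axes.
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    k = np.exp(-0.5 * ((xr / sigma_x)**2 + (yr / sigma_y)**2))
    return k / k.sum()  # normalize so the smoothing conserves total mass

weights = rotated_gaussian_kernel(sigma_x=5.0, sigma_y=2.0,
                                  theta=np.deg2rad(30), half=15)

# A single nonzero point stands in for a binary "track" grid.
data = np.zeros((100, 100))
data[50, 50] = 1.0

smooth_hist = ndimage.convolve(data, weights, mode='constant',
                               cval=0.0, origin=0)
```

Convolving a lone delta like this just stamps the kernel at that point,
which is a quick way to eyeball that the kernel's orientation and spread
are what you intended.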
>
>
>
>> If you try that, is it enough? If not, can you be more specific as to
>> what cases you have where the performance is bad? Specifically: what
>> size is the data array? what size is the kernel? what number of points
>> are non zero in the data array?
>
>
>
> Currently I'm using this on a grid that's approximately 800x600 with a kernel
> of about half that (Gaussian function with sigma of ~40km). This grid is
> essentially the eastern half of the United States at a grid spacing of 4km.
> On this grid, the Cython code is plenty fast. However, as I move toward
> dealing with finer meshes, my grid goes from 800x600 to closer to 5000x3000.
> Again, when dealing with binary, discrete data, the Cython routine is fairly
> quick. However, as the data become closer to continuous fields (say a
> temperature field instead of tornado tracks), the Cython code's performance
> decreases fairly quickly. Compared to Scipy's Gaussian filters (which I
> understand to be convolutions), the Cython code is substantially slower; it
> is even slower than my workflow of "rotate, apply a gaussian filter, rotate
> back". The problem with the rotate workflow is that I'm uncomfortable with
> losing data in the rotations.
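For concreteness, the rotate workflow described above looks roughly like this
(the angle and sigmas are illustrative; the interpolation inside each
ndimage.rotate call is the source of the data loss being worried about):

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)
field = rng.random((120, 120))  # stand-in for a continuous field

angle = 30.0  # degrees; illustrative

# Rotate the grid so the kernel's long axis aligns with an array axis...
rotated = ndimage.rotate(field, angle, reshape=False, mode='nearest')

# ...apply an axis-aligned anisotropic Gaussian (one sigma per axis)...
smoothed = ndimage.gaussian_filter(rotated, sigma=(10.0, 4.0))

# ...and rotate back. Each rotate interpolates, which is where data are
# lost relative to convolving with a rotated kernel directly.
result = ndimage.rotate(smoothed, -angle, reshape=False, mode='nearest')
```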
>
> (To see one application of this, here's a paper in Weather and Forecasting
> describing the approach:
> http://www.patricktmarsh.com/research/pubs/refereed/marshetal2012_precip.pdf)
>
> I'm currently proceeding with the Cython code, as it's sufficient for what
> I'm doing right now. However, I was thinking down the road, and I wasn't
> sure where Scipy's Gaussian filters were getting such a big speed-up over my
> Cython code.
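One likely source of the speed-up (an assumption on my part, but consistent
with how scipy documents gaussian_filter): a Gaussian kernel is separable, so
a KxK 2-D convolution can be replaced by two length-K 1-D passes, cutting the
per-pixel cost from O(K^2) to O(2K). The equivalence can be checked directly:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
data = rng.random((200, 200))

# Full 2-D convolution with an explicit Gaussian kernel...
half, sigma = 20, 4.0
x = np.arange(-half, half + 1)
g1 = np.exp(-0.5 * (x / sigma)**2)
g1 /= g1.sum()
kernel2d = np.outer(g1, g1)  # the 2-D Gaussian factors into g1 x g1
full = ndimage.convolve(data, kernel2d, mode='constant')

# ...equals two 1-D passes, one per axis, which is roughly what
# gaussian_filter does internally and why it is so much faster.
sep = ndimage.convolve1d(data, g1, axis=0, mode='constant')
sep = ndimage.convolve1d(sep, g1, axis=1, mode='constant')
assert np.allclose(full, sep)
```

A rotated *anisotropic* kernel doesn't factor along the array axes, which is
why the separable shortcut isn't directly available without rotating first.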
>
> Thanks again
> Patrick
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User@scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>