[Numpy-discussion] neighborhood iterator speed
Mon Oct 24 10:34:57 CDT 2011
My use case is a bilateral filter: a convolution-like filter used mainly in image processing, which may use relatively large kernels (on the order of 50x50). I would like to run the inner loop (the iteration over the neighbourhood) with direct indexing (in Cython code) rather than through the slow iterator, in order to save time.
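For reference, a minimal pure-NumPy sketch of the bilateral filter described above (parameter names and the window radius are illustrative, not from the original post); the two inner window dimensions are the loops one would want to run with direct indexing:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    # Naive bilateral filter: each output pixel is a weighted mean of its
    # (2*radius+1)^2 neighbourhood, with weights combining spatial distance
    # (sigma_s) and intensity difference to the centre pixel (sigma_r).
    pad = np.pad(img, radius, mode='edge')
    out = np.empty_like(img)
    # Precompute the spatial Gaussian over the window once.
    ax = np.arange(-radius, radius + 1)
    yy, xx = np.meshgrid(ax, ax, indexing='ij')
    spatial = np.exp(-(yy**2 + xx**2) / (2.0 * sigma_s**2))
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = pad[i:i + 2*radius + 1, j:j + 2*radius + 1]
            # Range weight: penalise neighbours whose intensity differs
            # from the centre pixel.
            rw = np.exp(-(win - img[i, j])**2 / (2.0 * sigma_r**2))
            w = spatial * rw
            out[i, j] = (w * win).sum() / w.sum()
    return out
```

In Cython the slicing in the inner loop would be replaced by direct `pad[i + di, j + dj]` indexing on a typed memoryview, which is exactly where an iterator-based neighbourhood walk costs the most.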
A separate issue is Cython's new parallel loop, which raises the need for GIL-free NumPy iterators (I might be wrong, though). Anyway, it is not urgent for me.
From: firstname.lastname@example.org [mailto:email@example.com] On Behalf Of David Cournapeau
Sent: Monday, October 24, 2011 4:04 PM
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] neighborhood iterator speed
On Mon, Oct 24, 2011 at 1:23 PM, Nadav Horesh <firstname.lastname@example.org> wrote:
> * I'll try to implement the 2D iterator as far as my programming expertise goes. It might take a few days.
I am pretty sure the code is in the history, if you are patient enough
to look for it in git history. I can't remember why I removed it
(maybe because it was not faster ?).
> * There is a risk in providing a buffer pointer, and for my (and probably most) use cases it is better for the iterator constructor to provide it. I was thinking about the possibility of giving the iterator a shared-memory pointer, to open a door for multiprocessing. Maybe it is better instead to provide a contiguous ndarray object, to enable a sanity check.
One could ask for an optional buffer (if NULL -> auto-allocation). But
I would need a more detailed explanation about what you are trying to
do to warrant changing the API here.
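The shared-memory idea quoted above can be sketched at the Python level with the standard-library `multiprocessing.shared_memory` module (Python 3.8+); the shape and dtype here are illustrative, not part of any proposed NumPy API:

```python
import numpy as np
from multiprocessing import shared_memory

# Allocate a named shared-memory block and view it as a contiguous
# ndarray; another process could attach to it via its name.
shape, dtype = (4, 4), np.float64
nbytes = int(np.prod(shape)) * np.dtype(dtype).itemsize
shm = shared_memory.SharedMemory(create=True, size=nbytes)

buf = np.ndarray(shape, dtype=dtype, buffer=shm.buf)
buf[:] = np.arange(16, dtype=dtype).reshape(shape)

# Contiguity makes the sanity check mentioned in the post trivial.
assert buf.flags['C_CONTIGUOUS']
total = float(buf.sum())

# Release the ndarray view before closing the mapping, then clean up.
del buf
shm.close()
shm.unlink()
```

A worker process would attach with `shared_memory.SharedMemory(name=...)` and rebuild the same ndarray view over `shm.buf`.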