[Numpy-discussion] searchsorted() and memory cache
Tue May 13 21:00:17 CDT 2008
Nathan Bell wrote:
> On Tue, May 13, 2008 at 6:59 PM, Andrew Straw <email@example.com> wrote:
>> easier and still blazingly fast compared to the binary search
>> implemented in searchsorted() given today's cached memory architectures.
> Andrew, I looked at your code and I don't quite understand something.
> Why are you looking up single values?
The Python overhead was nothing compared to the speed problems I was
having... I'm quite sure the code could be optimized a little
further. Nevertheless, for my motivating use case, it wouldn't be
trivial to vectorize this, and a "little further" in this case is too
little to justify the investment of my time at the moment.
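For readers following the thread, Nathan's point about "looking up single values" is that numpy's searchsorted() accepts an array of queries, so the binary searches can be batched into one C-level call instead of paying Python-call overhead per lookup. A minimal sketch (the edges and queries arrays are hypothetical stand-ins for the data in Andrew's code):

```python
import numpy as np

# Hypothetical sorted array standing in for the large array
# being searched in the original code.
edges = np.arange(0, 1_000_000, 10)

# Looking up values one at a time pays Python-call overhead
# on every lookup:
queries = [5, 123_456, 999_999]
single = [int(np.searchsorted(edges, q)) for q in queries]

# searchsorted() also accepts an array of queries, performing
# all the binary searches in a single call:
batched = np.searchsorted(edges, np.array(queries))

assert single == batched.tolist()
```

Whether the batched form helps depends on whether the query values are available up front; as noted above, for some use cases the lookups are interleaved with other logic and are not trivial to vectorize.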