[SciPy-user] Finding Neighbors
Wed Aug 1 10:08:59 CDT 2007
No, these are not the same.
For multiple-variable (multidimensional) data points,
the slice search presorts the data along each dimension
(hence the need for static data) and takes slices
around the "center point" along each dimension,
thereby narrowing down the neighbors rather rapidly.
There are no tree structures in slice searching.
Setting it up is very simple compared to k-d trees.
However, the presorting makes it inefficient for
dynamic data sets where points are added or deleted.
Please see the original article for more information
and comparisons to k-d tree searching, including timing
results.
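The slicing-and-intersection idea above can be sketched in a few lines of NumPy. This is only an illustrative reconstruction of the approach described (function and variable names are my own, not from the article): presort each coordinate once, then for each query binary-search a slice per dimension and intersect the slices before an exact distance check.

```python
import numpy as np

def slice_search(data, sort_idx, query, eps):
    """Fixed-radius neighbor search by per-dimension slices.

    A sketch of the slice-search idea (names are illustrative):
    data     : (n, d) array of static points
    sort_idx : list of d argsort arrays, one per dimension (precomputed)
    query    : (d,) query point
    eps      : slice half-width, also used as the search radius
    """
    n, d = data.shape
    candidates = np.ones(n, dtype=bool)
    for k in range(d):
        idx = sort_idx[k]                      # points ordered along dimension k
        col = data[idx, k]
        # binary-search the slice [query[k] - eps, query[k] + eps]
        lo = np.searchsorted(col, query[k] - eps, side="left")
        hi = np.searchsorted(col, query[k] + eps, side="right")
        in_slice = np.zeros(n, dtype=bool)
        in_slice[idx[lo:hi]] = True
        candidates &= in_slice                 # intersect slices across dimensions
    # exact distance check on the few survivors of the intersection
    hits = np.where(candidates)[0]
    dists = np.linalg.norm(data[hits] - query, axis=1)
    return hits[dists <= eps]

# The per-dimension sort is done once up front -- this is exactly why
# the method wants static data.
rng = np.random.default_rng(0)
pts = rng.random((1000, 3))
order = [np.argsort(pts[:, k]) for k in range(pts.shape[1])]
neighbors = slice_search(pts, order, np.array([0.5, 0.5, 0.5]), 0.1)
```

Because the intersected slices form a hypercube that contains the eps-ball, the final distance check makes the result exact, not approximate.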
2007/8/1, Lou Pecora <firstname.lastname@example.org>:
If you are using a data set that is static, i.e.
are not adding or deleting data points, then you
should check out nearest neighbor searches using
slices along various directions. It's *much* easier
to code than k-d trees and is about as efficient.
Nene and Nayar, IEEE TRANSACTIONS ON PATTERN ANALYSIS
AND MACHINE INTELLIGENCE, VOL. 19, NO. 9,
1997, pg. 989
If your data is changing a lot as you are doing
searches, then as I understand it, k-d trees are
better to use since you can delete and insert points.
Isn't that just what kd-trees do ? If you take several
slices around a point, you must know which points
belong to each slice, thus making a tree anyway, no ?
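For the static case, SciPy nowadays ships a k-d tree of its own, which makes a handy comparison baseline for either method. A minimal example with scipy.spatial.cKDTree (note that cKDTree is build-once and does not support insertion or deletion, so truly dynamic data would still need a different structure):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
pts = rng.random((1000, 3))

tree = cKDTree(pts)                                    # one-time build on static data
idx = tree.query_ball_point([0.5, 0.5, 0.5], r=0.1)    # all neighbors within radius 0.1
dist, nearest = tree.query([0.5, 0.5, 0.5], k=5)       # the 5 nearest neighbors
```

query_ball_point answers the same fixed-radius question the slice search does, while query gives k-nearest-neighbor results with distances sorted in increasing order.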
-- Lou Pecora, my views are my own.
Great spirits have always encountered violent opposition from mediocre minds.