[Numpy-discussion] huge array calculation speed
Fri Jul 11 12:04:37 CDT 2008
If your positions are static (I'm not clear on that from your message), then you might want to check the technique of "slice searching". It requires only one sort of the data per dimension initially, then uses a simple but clever lookup to find neighbors within some epsilon of a chosen point. Speeds appear to be about equal to k-d trees, but the programming is vastly simpler.
 "A Simple Algorithm for Nearest Neighbor Search in High Dimensions," Sameer A. Nene and Shree K. Nayar, IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 19 (9), 989 (1997).
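A minimal sketch of the slice-searching idea, under my reading of the Nene & Nayar paper: pre-sort each coordinate once, then for a query point intersect the per-axis index sets that fall inside an epsilon slab, and finish with a true distance check (the function names are mine, not from the paper):

```python
import numpy as np

def build_index(points):
    # points: (n, d) array of static positions.
    # One argsort per dimension, done once up front.
    return [np.argsort(points[:, dim]) for dim in range(points.shape[1])]

def neighbors_within(points, orders, q, eps):
    # Indices of all points within Euclidean distance eps of query q.
    candidates = None
    for dim, order in enumerate(orders):
        col = points[order, dim]                    # coords sorted along this axis
        lo = np.searchsorted(col, q[dim] - eps, side='left')
        hi = np.searchsorted(col, q[dim] + eps, side='right')
        slab = set(order[lo:hi].tolist())           # indices inside the epsilon slab
        candidates = slab if candidates is None else candidates & slab
        if not candidates:
            return np.empty(0, dtype=int)
    idx = np.fromiter(candidates, dtype=int)
    # Final check: the slab intersection is a box, not a ball.
    dist2 = ((points[idx] - q) ** 2).sum(axis=1)
    return idx[dist2 <= eps * eps]
```

The sort is O(n log n) per dimension once; each query then costs two binary searches per axis plus the set intersection, which stays small when epsilon is small relative to the data spread.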
-- Lou Pecora, my views are my own.
--- On Thu, 7/10/08, Dan Lussier <email@example.com> wrote:
> From: Dan Lussier <firstname.lastname@example.org>
> Subject: [Numpy-discussion] huge array calculation speed
> To: email@example.com
> Date: Thursday, July 10, 2008, 12:38 PM
> I am relatively new to numpy and am having trouble with the speed of a
> specific array-based calculation that I'm trying to do.
>
> What I'm trying to do is to calculate the total energy and coordination
> number of each atom within a relatively large simulation. Each atom is at
> a position (x,y,z) given by a row in a large array (approximately 1e6 by 3),
> and presently I have no information about its nearest neighbours, so each
> atom's position must be checked against all others before cutting the list
> down prior to calculating the energy.
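For reference, the brute-force all-pairs check described above can be written in NumPy so that at least the inner loop is vectorized; processing atoms in chunks keeps the temporary distance array at chunk-by-n instead of n-by-n (the cutoff value and chunk size here are illustrative assumptions):

```python
import numpy as np

def coordination_numbers(pos, cutoff, chunk=1000):
    # pos: (n, 3) array of atom positions.
    # Counts, for each atom, the neighbors within cutoff (excluding itself).
    n = pos.shape[0]
    counts = np.zeros(n, dtype=int)
    cut2 = cutoff * cutoff
    for start in range(0, n, chunk):
        block = pos[start:start + chunk]            # (m, 3) chunk of atoms
        # (m, n) squared distances from this chunk to all atoms
        d2 = ((block[:, None, :] - pos[None, :, :]) ** 2).sum(axis=-1)
        counts[start:start + chunk] = (d2 <= cut2).sum(axis=1) - 1  # drop self
    return counts
```

This is still O(n^2) work overall, which is why a structure like slice searching or a k-d tree pays off at 1e6 atoms; chunking only bounds the memory, not the operation count.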