[Numpy-discussion] Efficient removal of duplicates
Mon Dec 15 18:24:05 CST 2008
How about a solution inspired by recipe 18.1 in the Python Cookbook:
import numpy as np
a = np.asarray([(x0, y0), (x1, y1), ...])  # the point array, sorted so duplicates are adjacent
l = a.tolist()
unique = [x for i, x in enumerate(l) if not i or x != l[i - 1]]
a_unique = np.asarray(unique)
Note that the comprehension only drops *adjacent* duplicates, so the points must be sorted first; after sorting, it runs in a single linear pass, so it should scale well.
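A minimal runnable sketch of the idea, using a small made-up point set (the coordinates are placeholders of my own, not from the original post) and `np.lexsort` to make duplicates adjacent before the linear dedup pass:

```python
import numpy as np

# Hypothetical input: a small point set containing one duplicate.
a = np.array([(1.0, 2.0), (0.0, 0.0), (1.0, 2.0), (3.0, 1.0)])

# Sort lexicographically (primary key: x, secondary key: y) so that
# duplicate points end up next to each other.
order = np.lexsort((a[:, 1], a[:, 0]))
a = a[order]

# Keep each row whose predecessor differs -- a single linear pass.
l = a.tolist()
unique = [x for i, x in enumerate(l) if not i or x != l[i - 1]]
a_unique = np.asarray(unique)
```

After this, `a_unique` holds each point exactly once, in sorted order, and can be fed to the Delaunay triangulation.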
I have the following problem: I have a relatively long array of points
[(x0,y0), (x1,y1), ...]. Apparently, I have some duplicate entries, which
prevents the Delaunay triangulation algorithm from completing its task.
Question: is there an efficient way of getting rid of the duplicates?
All I can think of involves loops.
Thanks and regards,