[Numpy-discussion] Efficient removal of duplicates
Tue Dec 16 03:09:33 CST 2008
that works like a charm!
On Tue, Dec 16, 2008, Daran Rife <firstname.lastname@example.org> said:
> Whoops! A hasty cut-and-paste from my IDLE session.
> This should read:
> import numpy as np
> a = [(x0,y0), (x1,y1), ...] # A numpy array (a plain list has no tolist(); wrap it with np.asarray first)
> l = a.tolist()
> unique = [x for i, x in enumerate(l) if not i or x != l[i-1]] # <----
> a_unique = np.asarray(unique)
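A runnable sketch of the corrected recipe, with hypothetical sample points standing in for the poster's data. Note one assumption: the adjacent-comparison trick only drops *consecutive* duplicates, so the list is sorted first to group duplicates together (the original snippet omits this, which works only if the data already arrives sorted).

```python
import numpy as np

# Hypothetical (x, y) points containing duplicates.
a = np.array([(0.0, 0.0), (1.0, 2.0), (0.0, 0.0), (3.0, 4.0), (1.0, 2.0)])

# Sort so duplicates become adjacent; then keep each element that
# differs from its predecessor (the first element is always kept).
l = sorted(a.tolist())
unique = [x for i, x in enumerate(l) if not i or x != l[i - 1]]
a_unique = np.asarray(unique)

print(a_unique)  # duplicates removed, rows in sorted order
```

Sorting costs O(n log n), which dominates the single O(n) pass over the list, so this should indeed scale well for long point arrays.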
> On Dec 15, 2008, at 5:24 PM, Daran Rife wrote:
>> How about a solution inspired by recipe 18.1 in the Python Cookbook,
>> 2nd Ed:
>> import numpy as np
>> a = [(x0,y0), (x1,y1), ...]
>> l = a.tolist()
>> unique = [x for i, x in enumerate(l) if not i or x != b[l-1]]
>> a_unique = np.asarray(unique)
>> Performance of this approach should be highly scalable.
>> I have the following problem: I have a relatively long array of points
>> [(x0,y0), (x1,y1), ...]. Apparently, I have some duplicate entries, which
>> prevents the Delaunay triangulation algorithm from completing its work.
>> Question: is there an efficient way of getting rid of the duplicate
>> points? All I can think of involves loops.
>> Thanks and regards,