[Numpy-discussion] Efficient removal of duplicates
Mon Dec 15 18:53:02 CST 2008
On Mon, Dec 15, 2008 at 18:24, Daran Rife <firstname.lastname@example.org> wrote:
> How about a solution inspired by recipe 18.1 in the Python Cookbook,
> 2nd Ed:
> import numpy as np
> a = np.array([(x0, y0), (x1, y1), ...])
> l = a.tolist()
> l.sort()
> unique = [x for i, x in enumerate(l) if not i or x != l[i - 1]]
> a_unique = np.asarray(unique)
> This approach should scale well.
That basic idea is what unique1d() does; however, it uses numpy
primitives to keep the heavy lifting in C instead of Python.
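For illustration, a minimal sketch of that idea, sort the data, then keep each element that differs from its predecessor using vectorized comparisons (this is the general technique, not the exact unique1d() source):

```python
import numpy as np

def unique_sorted(a):
    """Return the sorted unique values of a, using only numpy primitives.

    Sorting places duplicates next to each other, so comparing each
    element to its left neighbor marks exactly one copy of each value.
    """
    a = np.sort(np.asarray(a).ravel())
    if a.size == 0:
        return a
    # First element is always kept; later ones only if they differ
    # from their predecessor.
    keep = np.concatenate(([True], a[1:] != a[:-1]))
    return a[keep]

print(unique_sorted([3, 1, 2, 3, 1]))  # -> [1 2 3]
```

The list-comprehension version above does the same neighbor comparison one element at a time in Python; here the comparison `a[1:] != a[:-1]` runs as a single C loop, which is where the speedup comes from.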
"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco