[Numpy-discussion] Vectorizing array updates
Wed Apr 29 17:09:27 CDT 2009
Robert Kern wrote:
> On Wed, Apr 29, 2009 at 16:19, Dan Goodman <firstname.lastname@example.org> wrote:
>> Robert Kern wrote:
>>> On Wed, Apr 29, 2009 at 08:03, Daniel Yarlett <email@example.com> wrote:
>>>> As you can see, Current is different in the two cases. Any ideas how I
>>>> can recreate the behavior of the iterative process in a more numpy-
>>>> friendly, vectorized (and hopefully quicker) way?
>>> Use bincount().
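[A minimal sketch of the bincount() approach, with made-up data (the variable names here are assumptions, not from the thread): an iterative loop doing y[i] += x[i] for each i in I can be replaced by summing the repeated contributions with np.bincount, since plain fancy indexing y[I] += x[I] silently drops repeats.]

```python
import numpy as np

# Hypothetical data: accumulate contributions from x into y at
# (possibly repeated) indices I.
y = np.zeros(8)
x = np.arange(8, dtype=float)
I = np.array([2, 2, 5])

# bincount sums the weights for each repeated index; the result has
# length I.max() + 1, so add it into the matching slice of y.
counts = np.bincount(I, weights=x[I])
y[:len(counts)] += counts
```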
>> Neat. Is there a memory efficient way of doing it if the indices are
>> very large but there aren't many of them? e.g. if the indices were
>> I=[10000, 20000] then bincount would create a gigantic array of 20000
>> elements for just two addition operations!
> indices -= indices.min()
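[A sketch of the offset trick being suggested, again with made-up data: shifting the indices down by their minimum before calling bincount means the intermediate array only spans the occupied range max(I) - min(I) + 1 rather than max(I) + 1.]

```python
import numpy as np

# Hypothetical data with large, clustered indices.
x = np.zeros(30001)
x[10000] = 1.5
x[20000] = 2.5
I = np.array([10000, 20000, 20000])

lo = I.min()
# Intermediate array has length 10001 instead of 20001.
counts = np.bincount(I - lo, weights=x[I])

# Scatter the sums back into the shifted range of y.
y = np.zeros_like(x)
y[lo:lo + len(counts)] += counts
```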
Ah OK, but bincount is still going to make an array of 10000 elements in
that case.
I came up with this trick, but I wonder whether it's overkill:
Suppose you want to do y += x[I] where x is big and the indices in I are
large but sparse (although potentially including repeats).
Well it seems to work, but surely there must be a nicer way?
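[One way to avoid the large intermediate array entirely, sketched here with made-up data; this may or may not be the trick referred to above, whose code didn't survive in the quoted text. It compacts the sparse indices onto 0..k-1 with np.unique before summing, so the intermediate has one slot per distinct index.]

```python
import numpy as np

# Hypothetical data: few distinct indices, widely spread out.
x = np.zeros(30001)
x[10000] = 1.5
x[20000] = 2.5
I = np.array([10000, 20000, 20000])
y = np.zeros_like(x)

# Map the sparse indices onto 0..k-1, sum per compact bucket with
# bincount, then scatter back via the unique indices. The intermediate
# array has length k (number of distinct indices), not I.max() + 1.
uniq, inv = np.unique(I, return_inverse=True)
y[uniq] += np.bincount(inv, weights=x[I])
```

Since uniq contains no repeats, the fancy-indexed += on the last line is safe.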