Sun May 4 10:05:27 CDT 2008
On Sun, May 4, 2008 at 7:40 AM, Timothy Hochberg <email@example.com> wrote:
> If you don't need the old array after the cut, I think that you could use
> the input array as the output array and then take a slice, saving a
> temporary and one-quarter of your assignments (on average). Something like:
> def destructive_cut(x, i):  # Untested
>     out = x[:-1, :-1]
>     out[:i, i:] = x[:i, i+1:]
>     out[i:, :i] = x[i+1:, :i]
>     out[i:, i:] = x[i+1:, i+1:]
>     return out
That's a nice improvement.
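For reference, here is a self-contained sketch of the idea: it shifts the three blocks of the array in place and returns a view, and checks the result against a plain `np.delete`-based cut. The baseline `mycut2` shown here is my own stand-in (the original `mycut2` wasn't posted), so treat it as an assumption.

```python
import numpy as np

def mycut2(x, i):
    # Stand-in for the non-destructive version: allocate a new
    # (n-1, n-1) array with row i and column i removed.
    return np.delete(np.delete(x, i, axis=0), i, axis=1)

def destructive_cut(x, i):
    # Reuse x's own memory: shift the blocks right of / below
    # row i and column i, then return a trimmed view (no new array).
    out = x[:-1, :-1]            # view into x, no copy
    out[:i, i:] = x[:i, i+1:]    # top-right block: shift columns left
    out[i:, :i] = x[i+1:, :i]    # bottom-left block: shift rows up
    out[i:, i:] = x[i+1:, i+1:]  # bottom-right block: shift both
    return out

x = np.arange(64.0).reshape(8, 8)
expected = mycut2(x.copy(), 3)
result = destructive_cut(x, 3)
assert np.array_equal(result, expected)
```

Note that `destructive_cut` clobbers its input, as the name warns: `x` is no longer usable afterwards.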
>> timeit mycut2(x, 6)
100 loops, best of 3: 1.54 ms per loop
>> timeit destructive_cut(x, 6)
1000 loops, best of 3: 657 µs per loop
Why is copying the data, or creating a new array, so slow? Where is the
bottleneck? My guess (though I know nothing about this) is that the
array is already in the CPU's cache, so reading it is fast, but making a
copy requires writing to RAM, and that is slow?
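One way to probe that guess is to time the allocation and the copy separately. A rough sketch with the stdlib `timeit` module (the 500x500 float64 array, about 2 MB, is my assumption, picked to be larger than a typical L2 cache):

```python
import timeit
import numpy as np

n = 500  # assumed size: ~2 MB of float64, larger than most L2 caches
x = np.random.rand(n, n)

# Allocation alone: np.empty just reserves memory, no data is written.
t_empty = timeit.timeit(lambda: np.empty((n, n)), number=100)
# Full copy: every element is read and written once.
t_copy = timeit.timeit(lambda: x.copy(), number=100)

print(f"np.empty: {t_empty / 100 * 1e6:.1f} us per loop")
print(f"x.copy(): {t_copy / 100 * 1e6:.1f} us per loop")
```

On most machines the allocation is near-free compared to the copy, which suggests the cost is in moving the data, not in creating the array.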