[Numpy-discussion] fast access and normalizing of ndarray slices

srean srean.list@gmail....
Sun Jun 3 16:44:46 CDT 2012


Hi Wolfgang,

  I think you are looking for reduceat(), in particular add.reduceat().
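
Something along these lines should do it (a rough sketch, assuming "normalize" means dividing each data set by its sum; the variable names follow your example below):

import numpy as np

data = np.array([1, 2, 1, 2, 3, 4, 1, 2, 3], dtype=np.float64)
start_pointer = np.array([0, 2, 6])
length_data = np.array([2, 4, 3])

# add.reduceat sums data[start:next_start] for each start index,
# with the last segment running to the end of the array.
segment_sums = np.add.reduceat(data, start_pointer)

# Expand each segment's sum back over its elements and divide in place.
data /= np.repeat(segment_sums, length_data)

No Python-level loop over the segments, so it stays fast even for many small data sets.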

-- srean

On Thu, May 31, 2012 at 12:36 AM, Wolfgang Kerzendorf
<wkerzendorf@gmail.com> wrote:
> Dear all,
>
> I have an ndarray that conceptually consists of many arrays stacked one after another (in reality it's a normal 1d float64 array).
> I have a second array that tells me where each individual data set starts in the 1d float64 array, and another one that tells me its length.
> Example:
>
> data_array = (conceptually) [[1,2], [1,2,3,4], [1,2,3]] = in reality [1, 2, 1, 2, 3, 4, 1, 2, 3] (dtype=float64)
> start_pointer = [0, 2, 6]
> length_data = [2, 4, 3]
>
> I now want to normalize each of the individual data sets. I wrote a simple for loop over start_pointer and length_data that grabs each data set, normalizes it, and writes it back to the big array. That's slow. Is there an elegant numpy way to do that? Do I have to go the cython way?
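
For comparison, the loop you describe presumably looks something like the sketch below (again assuming sum-normalization); it's the per-segment Python overhead that makes it slow, and the reduceat/repeat version above avoids it:

import numpy as np

data = np.array([1, 2, 1, 2, 3, 4, 1, 2, 3], dtype=np.float64)
start_pointer = [0, 2, 6]
length_data = [2, 4, 3]

# One Python-level iteration per data set: cheap per segment,
# but the interpreter overhead dominates when there are many segments.
for start, length in zip(start_pointer, length_data):
    segment = data[start:start + length]
    data[start:start + length] = segment / segment.sum()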
