[Numpy-discussion] Managing Rolling Data
Wed Feb 21 13:13:26 CST 2007
On 2/21/07, Alexander Michael <firstname.lastname@example.org> wrote:
> ... T is too large to fit in memory, so I need to
> load up H, perform my calculations, pop the oldest N x P slice and
> push the newest N x P slice into the data cube. What's the best way to
> do this that will maintain fast computations along the one-dimensional
> slices over N and H? Is there a commonly accepted idiom?
Would loading your data via memmap, then slicing it, do the job
(using numpy.memmap)? I work on 12 GB files with 4 GB of memory, but
it is transparent to me since the OS takes care of moving data in and
out of memory. It may not be the fastest solution possible, but for me
it is a case where development time matters more than run time.
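A minimal sketch of the memmap approach (the file name, dtype, and the H/N/P sizes are illustrative, not from the original thread):

```python
import numpy as np

# Illustrative sizes: H historical slices, each N x P.
H, N, P = 100, 4, 3

# Create a small on-disk array standing in for the large data cube.
data = np.memmap("cube.dat", dtype=np.float64, mode="w+", shape=(H, N, P))
data[:] = np.arange(H * N * P, dtype=np.float64).reshape(H, N, P)
data.flush()

# Re-open in read/write mode; the OS pages slices in and out of
# memory on demand, so the whole cube never has to fit in RAM.
cube = np.memmap("cube.dat", dtype=np.float64, mode="r+", shape=(H, N, P))

# Fast one-dimensional computation along the H axis for a fixed cell:
# cube[h, 0, 0] == h * N * P here, so the mean over h = 0..99 is 594.0.
series = cube[:, 0, 0]
print(series.mean())  # 594.0

# "Push" a newest N x P slice by overwriting the oldest one in place
# (a circular-buffer style update; a separate index tracks the start).
cube[0] = np.zeros((N, P))
cube.flush()
```

Slicing a memmap returns ordinary array views, so computations along one axis work exactly as they would on an in-memory ndarray.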