[Numpy-discussion] custom allocation of numpy array
Trevor Clarke
trevor@notcows....
Tue Jul 7 17:50:27 CDT 2009
I'm embedding Python and NumPy in a C++ program and I want to share some
data owned by C++. I'm able to get the allocation/deallocation and memory
sharing working for a contiguous array. However, I have non-contiguous data
(random sub-array locations, not a fixed skip factor), and I may not have all
of the data in memory at any given time. The data is quite large (200 GB is
not uncommon), so I need to load and access it on demand. I already have a
paging system in place which obtains a contiguous sub-array on demand, and
I'd like to use it with NumPy. I've looked at the memmap array code, but it
isn't sufficient: the files are compressed, so a simple memmap won't work.
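For the contiguous case described above, one common zero-copy approach is to wrap the externally owned buffer with `numpy.frombuffer`, which shares the underlying memory rather than copying it. A minimal sketch, using a `ctypes` array as a stand-in for the C++-owned allocation (the real code would presumably use the NumPy C API or a buffer exported from C++):

```python
import ctypes

import numpy as np

# Stand-in for memory allocated and owned by the C++ side.
n = 8
c_buf = (ctypes.c_double * n)(*range(n))

# frombuffer shares the memory: no copy is made, and the C++ side
# remains responsible for keeping the buffer alive.
arr = np.frombuffer(c_buf, dtype=np.float64)

arr[0] = 42.0          # writes through to the underlying C buffer
print(c_buf[0])        # 42.0
```

Note that the wrapping array holds only a view; deallocating the C++ buffer while `arr` is alive would leave a dangling pointer, which is why the lifetime coordination mentioned in the post matters.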
My initial thought is to tell NumPy that the entire array is available,
intercept the access requests, and load data as necessary. I'd rather load
larger sub-arrays at a time than service each individual access. Any thoughts?
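The intercept-and-load-in-blocks idea can be sketched in pure Python as a container that pulls whole contiguous blocks through a loader callback and caches them, returning a real ndarray slice per block. Everything here is hypothetical illustration (`PagedArray`, `load_block`, and the block size are invented names, and the loader fabricates data where the real system would call the existing C++ paging layer):

```python
import numpy as np


class PagedArray:
    """Hypothetical sketch: serve element reads from fixed-size
    contiguous blocks that are loaded on demand and cached."""

    def __init__(self, load_block, length, block_size=4096, dtype=np.float64):
        self._load_block = load_block  # callable: block index -> ndarray
        self.length = length
        self.block_size = block_size
        self.dtype = dtype
        self._cache = {}               # block index -> ndarray

    def __getitem__(self, idx):
        # Flat integer indexing only, for brevity; a fuller version
        # would handle slices and multidimensional indices.
        if not 0 <= idx < self.length:
            raise IndexError(idx)
        b, off = divmod(idx, self.block_size)
        if b not in self._cache:
            # One block load services many subsequent accesses.
            self._cache[b] = self._load_block(b)
        return self._cache[b][off]


# Stand-in loader; in practice this would decompress / page in data
# via the C++ paging system.
def loader(b):
    return np.arange(b * 4096, (b + 1) * 4096, dtype=np.float64)


pa = PagedArray(loader, length=200_000, block_size=4096)
print(pa[5000])  # 5000.0, served from block 1 after a single block load
```

This does not make the object a true ndarray, so it can't be passed directly to NumPy routines; getting that transparently (interception at the ndarray level) is exactly the harder part the post is asking about.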