[Numpy-discussion] Huge arrays

Kim Hansen slaunger@gmail....
Thu Sep 10 02:17:04 CDT 2009

> On 9-Sep-09, at 4:48 AM, Francesc Alted wrote:
> > Yes, this later is supported in PyTables as long as the underlying
> > filesystem
> > supports files > 2 GB, which is very usual in modern operating
> > systems.
> I think the OP said he was on Win32, in which case it should be noted:
> FAT32 has its upper file size limit at 4GB (minus one byte), so
> storing both your arrays as one file on a FAT32 partition is a no-no.
> David

Strange; I work on Win32 systems and have no problems storing data
files of up to 600 GB (I have not tried larger) on RAID0 disk systems of
2x1 TB -- presumably because the partitions are NTFS rather than FAT32, and
thus not subject to the 4 GB limit. I can also open these files and seek in
them using Python. For those data files, I use PyTables lzo-compressed h5
files to create and maintain an index into the large data file. Besides some
metadata describing each chunk of data, the index also contains a data
position value stating the file position of the beginning of each data chunk
(payload). The index files I work with in h5 format are not larger than
1.5 GB, though.
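The chunk-index pattern described above can be sketched as follows. This is a minimal, hypothetical illustration, not the poster's actual code: payload chunks are appended to one big data file, and a separate index records the file position (and length) of each chunk so it can later be seek()-ed to directly. In the post the index lives in an lzo-compressed PyTables (HDF5) table; here a plain Python list of (offset, length) pairs stands in for it to keep the sketch stdlib-only.

```python
import os
import tempfile

def write_chunks(path, chunks):
    """Append payload chunks to one data file; return the index
    as a list of (file offset, chunk length) pairs.
    (In the post, this index is kept in a PyTables h5 table.)"""
    index = []
    with open(path, "wb") as f:
        for payload in chunks:
            index.append((f.tell(), len(payload)))
            f.write(payload)
    return index

def read_chunk(path, index, i):
    """Seek straight to chunk i using its recorded file position,
    without reading the rest of the (possibly huge) data file."""
    offset, length = index[i]
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(length)

# Hypothetical example data, just to exercise the two helpers.
chunks = [b"first", b"second payload", b"third"]
with tempfile.TemporaryDirectory() as d:
    data_path = os.path.join(d, "data.bin")
    index = write_chunks(data_path, chunks)
    recovered = read_chunk(data_path, index, 1)
```

Because only the (relatively small) index needs to be loaded, the data file itself can be far larger than available RAM; random access costs one seek per chunk.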

It all works very nicely and is very convenient.

