[SciPy-User] Maximum file size for .npz format?

Andrew Collette andrew.collette@gmail....
Fri Mar 12 19:30:25 CST 2010

> I use h5py. I think it is great. It gives you a dictionary-like
> interface to your archive. Here's a quick example:

Just to add, h5py (and PyTables also) allows you to read/write subsets
of your data:

>>> import numpy as np
>>> import h5py
>>> f = h5py.File('foo.hdf5', 'w')
>>> f['a'] = np.random.rand(1000, 1000)
>>> subset = f['a'][200:300, 400:500:2]  # only reads this slice from the file
>>> f.close()
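Writing into a subset works the same way: assigning to a slice of a dataset touches only that region on disk. A minimal sketch (file and dataset names are just examples):

```python
import numpy as np
import h5py

with h5py.File('foo.hdf5', 'w') as f:
    f['a'] = np.zeros((1000, 1000))
    # write only this 100x100 region; the rest of the dataset is untouched
    f['a'][200:300, 400:500] = np.ones((100, 100))
    total = f['a'][...].sum()  # read everything back to check
print(total)
```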

H5py also supports transparent compression on a per-dataset basis,
with no limits on the size of the datasets or files.  Slicing is still
efficient for compressed datasets since HDF5 supports a chunked
storage model.  There's a general introduction to h5py here:


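As a sketch of the per-dataset compression mentioned above, `create_dataset` accepts a `compression` argument; HDF5 then stores the data in compressed chunks, and slicing decompresses only the chunks it needs (gzip level 4 here is just an illustrative choice):

```python
import numpy as np
import h5py

with h5py.File('compressed.hdf5', 'w') as f:
    # chunked, gzip-compressed dataset; chunk shape chosen automatically
    f.create_dataset('a', data=np.random.rand(1000, 1000),
                     compression='gzip', compression_opts=4)

with h5py.File('compressed.hdf5', 'r') as f:
    # only the chunks overlapping this slice are read and decompressed
    sub = f['a'][200:300, 400:500:2]
print(sub.shape)
```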
Andrew Collette
