[SciPy-user] read/write compressed files
Wed Jun 20 18:17:47 CDT 2007
On 20-Jun-07, at 4:18 PM, Dominik Szczerba wrote:
> I got it (partially) working, but am not sure about optimality. In
> particular, will fromstring copy memory into the array or decompress
> in place? I think the former (how else would it know the size, and
> tell() will be slow), but please correct me if I am wrong.
I would almost certainly bet it does a copy. Did you try Anne's
suggestion of scipy.read_array with your 'fh' object?

Also, somebody correct me if I'm wrong, but I don't think modifying
the 'shape' property directly is the recommended way to do it; I think
you should be using ps.resize().
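For what it's worth, here is a small sketch of the copy question, using a hypothetical 4-element buffer in place of the 256*256 doubles (numpy.frombuffer is the no-copy counterpart of fromstring):

```python
import numpy as np

# Hypothetical stand-in for the bytes returned by fh.read().
data = np.arange(4, dtype='d').tobytes()

# fromstring always copies the parsed bytes into a fresh array (and
# that parse is how it knows the size -- no tell() needed).
# frombuffer instead wraps the bytes without copying, giving a
# read-only view.
view = np.frombuffer(data, dtype='d')

# For a size-preserving shape change, assigning to .shape and calling
# reshape() are equivalent; resize() is only needed when the total
# number of elements changes.
square = view.reshape(2, 2)

print(square.flags.writeable)  # a view of immutable bytes is read-only
```

So if the data fits in one read, frombuffer avoids the second copy; the trade-off is that the resulting array cannot be written to.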
> import gzip
> from numpy import fromstring, zeros
> from scipy import io
>
> fh = gzip.GzipFile("test.dat.gz", 'rb')
> #ps = zeros(256*256) - will it help?
> ps = fromstring(fh.read(), 'd')
> ps.shape = (256,256)
> fp = open('test.dat', 'wb')
> io.numpyio.fwrite(fp, ps.size, ps)
> fp.close()
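A minimal round-trip of the snippet above, writing the compressed side as well (the fwrite call in the original writes uncompressed); the 4x4 array and temp path are stand-ins for the 256x256 test data:

```python
import gzip
import os
import tempfile
import numpy as np

ps = np.linspace(0.0, 1.0, 16).reshape(4, 4)

path = os.path.join(tempfile.mkdtemp(), 'test.dat.gz')

# Write the raw native-endian doubles through gzip.
with gzip.open(path, 'wb') as fh:
    fh.write(ps.tobytes())

# Read them back: gzip decompresses into one bytes object, and
# frombuffer wraps it without a further copy.
with gzip.open(path, 'rb') as fh:
    back = np.frombuffer(fh.read(), dtype='d').reshape(4, 4)

assert (back == ps).all()
```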
> - Dominik
> Dominik Szczerba wrote:
>> That works very well for ASCII files, but I failed to figure out
>> how to handle binary data...
>> Thanks for any hints,
>> - Dominik
>> Anne Archibald wrote:
>>> On 20/06/07, Dominik Szczerba <email@example.com> wrote:
>>>> Yes, I know it, but it does not return a scipy array, does it?
>>>> Can I achieve it without copying memory? (I have huge arrays to
>>> If the bz2 module will provide a file-like object, scipy.read_array
>>> can read from that.
> Dominik Szczerba, Ph.D.
> Computer Vision Lab, CH-8092 Zurich
More information about the SciPy-user mailing list