[SciPy-user] read/write compressed files
Thu Jun 21 02:20:20 CDT 2007
David Warde-Farley wrote:
> On 20-Jun-07, at 4:18 PM, Dominik Szczerba wrote:
>> I got it (partially) working, but am not sure about optimality. In
>> particular, will fromstring copy memory into the array or
>> decompress in
>> place? I think the former (how else would it know the size, and tell()
>> will be slow), but please correct me if I am wrong.
> I would almost certainly bet it would do a copy.
Is there a way to avoid it if I know the size of the unpacked sequence a
priori?
> Did you try using Anne's suggestion of scipy.read_array with your
> 'fh' object?
Yes I did and reported it back to the list (it works only for ascii data).
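[Editor's note: a minimal sketch of one way to avoid the extra copy, not from the original thread. It assumes the unpacked shape and dtype are known a priori and uses readinto() to decompress straight into a preallocated array, instead of building an intermediate bytes object for fromstring(). The filename 'test.dat.gz' and the 256x256 float64 layout are just example values.]

```python
import gzip
import numpy as np

# Write an example compressed file so the sketch is self-contained.
data = np.arange(256 * 256, dtype='d').reshape(256, 256)
with gzip.open('test.dat.gz', 'wb') as fh:
    fh.write(data.tobytes())

# Decompress directly into a preallocated array: readinto() fills the
# array's buffer, so no intermediate bytes object is copied around.
ps = np.empty((256, 256), dtype='d')   # shape and dtype known in advance
with gzip.open('test.dat.gz', 'rb') as fh:
    nread = fh.readinto(memoryview(ps).cast('B'))

assert nread == ps.nbytes
assert (ps == data).all()
```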
> Also, somebody correct me if I'm wrong, but I don't think modifying
> the 'shape' property directly is the recommended way to do it;
> I think you should be using ps.resize().
Thanks for the warning, but I was actually able to work with the array
formed this way (matplotlib plots, the usual stuff like sqrt, powers, etc.).
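[Editor's note: a short sketch of the difference, describing current NumPy behavior rather than anything stated in the thread. Assigning to .shape reshapes in place and raises if a copy would be needed, while reshape() returns a (usually no-copy) view; ndarray.resize() is different again, since it changes the number of elements.]

```python
import numpy as np

a = np.arange(6, dtype='d')
a.shape = (2, 3)                           # in place; errors if a copy were needed
b = np.arange(6, dtype='d').reshape(2, 3)  # returns a view when possible

assert a.shape == b.shape == (2, 3)
assert (a == b).all()
```

So for a pure reshape of a contiguous array, either form is fine; resize() would only be needed to grow or shrink the array.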
Thanks a lot,
>> import gzip
>> fh = gzip.GzipFile("test.dat.gz", 'rb')
>> #ps = zeros(256*256) - will it help?
>> ps = fromstring(fh.read(), 'd')
>> ps.shape = (256, 256)
>> fp = open('test.dat', 'wb')
>> io.numpyio.fwrite(fp, ps.size, ps)
>> - Dominik
>> Dominik Szczerba wrote:
>>> That works very well for ascii files, but I failed to figure out how
>>> to handle binary data...
>>> Thanks for any hints,
>>> - Dominik
>>> Anne Archibald wrote:
>>>> On 20/06/07, Dominik Szczerba <firstname.lastname@example.org> wrote:
>>>>> Yes, I know it, but it does not return a scipy array, does it?
>>>>> Can I achieve it without copying memory? (I have huge arrays to
>>>> If the bz2 module will provide a file-like object, scipy.read_array
>>>> can read from that.
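[Editor's note: a sketch of the file-like-object approach Anne describes. scipy.read_array has long since been removed; np.loadtxt is used here as a modern stand-in that likewise accepts any file-like object. The file 'test.txt.bz2' and its contents are throwaway examples.]

```python
import bz2
import numpy as np

# Create an example bz2-compressed ascii file.
with bz2.open('test.txt.bz2', 'wt') as fh:
    fh.write('1.0 2.0\n3.0 4.0\n')

# bz2.open returns a file-like object, so a text reader can
# consume it directly, decompressing on the fly.
with bz2.open('test.txt.bz2', 'rt') as fh:
    a = np.loadtxt(fh)

assert a.shape == (2, 2)
```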
Dominik Szczerba, Ph.D.
Computer Vision Lab CH-8092 Zurich