[SciPy-user] Reading in data as arrays, quickly and easily?

Eric Jonas jonas at cortical.mit.edu
Sat Jul 10 12:29:49 CDT 2004

> I assume you're talking about Numeric, but in case you're open to numarray: I
> use numarray's memmap quite successfully on files even larger than 1 GB
> (Linux; I think the effective limit on Windows may be lower). It works for all
> datatypes, and for byteswapped data too. You can skip any number of bytes by
> having your mem-"slice" start at any offset you want. I actually map the
> first part into a record array so that I can read the parts of the
> "header" information I'm interested in.
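A minimal sketch of the offset-based mapping described above. The numarray memmap module had its own API; this uses numpy.memmap as a stand-in, and the file name and record layout are hypothetical, just to show the idea of mapping a header region and then mapping the data at a byte offset past it:

```python
# Sketch only: numpy.memmap as a stand-in for numarray's memmap.
# File name, header layout, and sample dtype are hypothetical.
import numpy as np

# Write a small test file: a 16-byte "header" (four int32 words)
# followed by 100 int16 samples.
header = np.arange(4, dtype=np.int32)
samples = np.arange(100, dtype=np.int16)
with open("eeg_test.dat", "wb") as f:
    f.write(header.tobytes())
    f.write(samples.tobytes())

# Map the header words so the interesting fields can be read directly...
hdr = np.memmap("eeg_test.dat", dtype=np.int32, mode="r", shape=(4,))

# ...and map the data region starting at a byte offset past the header,
# so no bytes need to be read or copied up front.
data = np.memmap("eeg_test.dat", dtype=np.int16, mode="r", offset=16)

print(hdr[:])    # the four header words
print(data[:5])  # the first few samples
```

Because the mapping is lazy, only the pages actually touched are pulled into memory, which is what makes this workable on multi-gigabyte files.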

Well, I had been focusing on numarray, because everything I read seems
to suggest it's the wave of the future, although at the same time no one
really seems to be using it much yet. May I ask how much larger than
1 GB? I'm dealing with EEG files between 1 and 20 GB, and for some
reason I don't think I'll be able to afford 64-bit hardware in the near
future : )

What I really want is to read in some fairly complex records, doing the
endian swapping, alignment, etc. all in C. I'm mostly interested in
spectral analysis, so the hope was that I'd be able to read in 32 kB
chunks at a time for my periodograms.
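The chunked workflow above can be sketched as follows, using modern numpy as an assumption (the Numeric/numarray calls of the day differed); the file name, int16 sample dtype, and chunk size are all hypothetical:

```python
# Sketch only: chunked periodogram computation over a raw sample file.
# File name, dtype, and chunk size are assumptions for illustration.
import numpy as np

CHUNK = 16384                     # samples per chunk (32 kB of int16 data)

def periodogram(samples):
    """Squared-magnitude real FFT of one chunk, normalized by its length."""
    x = samples.astype(np.float64)
    x -= x.mean()                 # remove the DC offset
    spec = np.fft.rfft(x)
    return (np.abs(spec) ** 2) / len(x)

# Fake an EEG-like file: a 10-cycles-per-256-samples sine wave as int16.
t = np.arange(4 * CHUNK)
sig = (1000.0 * np.sin(2 * np.pi * 10 * t / 256)).astype(np.int16)
sig.tofile("eeg_chunks.dat")

# Read and process the file one 32 kB chunk at a time, so memory use
# stays constant no matter how large the file is.
with open("eeg_chunks.dat", "rb") as f:
    while True:
        raw = f.read(CHUNK * 2)   # 2 bytes per int16 sample
        if len(raw) < CHUNK * 2:
            break
        # For byteswapped data, dtype=">i2" or "<i2" would handle the swap.
        chunk = np.frombuffer(raw, dtype=np.int16)
        p = periodogram(chunk)

# The tone at 10 cycles per 256 samples lands at bin CHUNK * 10 / 256 = 640.
print(np.argmax(p))
```

The per-chunk loop keeps memory flat, and the endian conversion falls out of the dtype string rather than needing hand-written C.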

Also, I looked through the numarray docs again, and still couldn't find
anything about memory mapping -- any pointers? What command(s) have you
been using to pull this off? 
