[SciPy-user] handling of huge files for post-processing

Christoph Scheit Christoph.Scheit@lstm.uni-erlangen...
Mon Feb 25 08:05:16 CST 2008


Hello everybody,

I get from a Fortran-Code (CFD) binary files containing
the acoustic pressure at some distinct points.
Each file consists of N binary records ("lines") of the form:

TimeStep (int)   DebugInfo (int)   AcousticPressure (float)

My problem is that a single file can be huge (> 100 MB), and after
several runs on a cluster not just one but 20-50 files of that size
have to be post-processed.
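
For illustration, the kind of read I mean could look like this with
a numpy structured dtype. The field widths and endianness below are
guesses that would have to match the Fortran writer, and if the file
was written as Fortran unformatted sequential output, each record is
additionally framed by 4-byte length markers that would need extra
fields in the dtype:

import numpy as np

# Assumed record layout: two little-endian 4-byte ints and a
# 4-byte float; adjust to the actual Fortran output.
record = np.dtype([('step', '<i4'), ('debug', '<i4'),
                   ('pressure', '<f4')])

# Hypothetical file name for the output of one CPU.
data = np.fromfile('pressure_cpu0.bin', dtype=record)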

Since the CFD code runs in parallel, I have to sum up the results
from the different CPUs: each CPU calculates only a fraction of the
acoustic pressure at point p and time step t, so I have to sum over
all CPUs.
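
Assuming every per-CPU file contains the same time steps in the same
order, the summation itself is just an element-wise add over the
files; a sketch, with hypothetical file names:

import numpy as np

record = np.dtype([('step', '<i4'), ('debug', '<i4'),
                   ('pressure', '<f4')])

# Hypothetical per-CPU file names.
files = ['pressure_cpu%d.bin' % i for i in range(20)]

steps = None
total = None
for fname in files:
    part = np.fromfile(fname, dtype=record)
    if total is None:
        steps = part['step'].copy()            # time steps, kept once
        total = part['pressure'].astype('f8')  # running pressure sum
    else:
        total += part['pressure']              # add this CPU's share

If the step ordering differed between files, the arrays would have
to be sorted on 'step' before adding.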

Currently I'm reading all the data into an sqlite table, then I
group the data, summing over the processors, and finally I write out
files containing the data of the individual points. This approach
works for smaller files, but it does not seem to work for big files
of the size described above.
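
One alternative I could imagine to the sqlite round-trip is to
memory-map each file with numpy and walk over it in chunks, so that
the full 100 MB never has to sit in memory at once; a sketch, again
with the guessed dtype:

import numpy as np

record = np.dtype([('step', '<i4'), ('debug', '<i4'),
                   ('pressure', '<f4')])

# mode='r' maps the file read-only; data is paged in on access.
mm = np.memmap('pressure_cpu0.bin', dtype=record, mode='r')

chunk = 1000000  # records per chunk, tune to available memory
for start in range(0, len(mm), chunk):
    block = mm[start:start + chunk]
    # accumulate block['pressure'] into the running totals here

But I am not sure this is the right direction.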

Do you have any ideas on this problem? Thank you very much in
advance,

Christoph

