[SciPy-dev] Hi Eric
eric at scipy.org
Tue Mar 26 16:57:47 CST 2002
> The stdc++ thing doesn't bother me as much as the large array problem. I
> use blitz() three times for enclosing the field update equations in FDTD.
> Typically the compile uses up 200Meg of RAM and 200Meg of swap, so it's pretty
> slow, but afterwards it runs like a bat out of hell. However, when I upped the
> matrix size it suddenly needed more memory for the compile, and crashed.
> Without blitz() this project consumes 400MB, since it has many other
> matrices for the UPML.
> Now here at work, I have a 1.8GHz P4 with 768MB RAM. I tried out blitz()
> with MinGW and I get stack overflows. Unlike gcc, it can't handle such
> large compiles.
As I said, this doesn't make sense to me. The same C++ code is (or should be)
generated in both situations. If you pass the verbose=2 argument to blitz() for
the 50x50x50 and 100x100x100 cases, you can see the name of the .cxx file
created in each situation. Diff the two files; they should be identical.
If they aren't, please send me the difference. Also, a Python snippet
reproducing the problem would be a big help.
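For reference, a minimal sketch of the kind of snippet being asked for. The
function names (make_fields, update_ex) and the Yee-style update equation are
illustrative assumptions, not the poster's actual UPML code; the expression is
written so that the same string could be handed to weave.blitz(expr, verbose=2)
under Python 2, but here it is evaluated with plain NumPy so the example stays
self-contained:

```python
import numpy as np

def make_fields(n):
    """Allocate FDTD-style field arrays for an n x n x n grid."""
    ex = np.zeros((n, n, n))
    hy = np.random.rand(n, n, n)
    hz = np.random.rand(n, n, n)
    return ex, hy, hz

def update_ex(ex, hy, hz, c=0.5):
    """One illustrative Ex field update (curl of H on a staggered grid).

    Under scipy.weave this body would instead be a string expression
    passed to weave.blitz(expr, verbose=2), which prints the name of
    the generated .cxx file.
    """
    ex[1:, 1:, 1:] += c * ((hz[1:, 1:, 1:] - hz[1:, :-1, 1:])
                           - (hy[1:, 1:, 1:] - hy[1:, 1:, :-1]))
    return ex

# The update is the same code for both grid sizes under discussion,
# which is why the generated .cxx files should diff clean.
for n in (50, 100):
    ex, hy, hz = make_fields(n)
    update_ex(ex, hy, hz)
```

The point of running both sizes through the identical expression is that the
array dimensions only enter at runtime, so the compiled extension should not
depend on them.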