[SciPy-dev] Linalg2 benchmarks
jochen at jochen-kuepper.de
Sun Apr 7 00:17:20 CST 2002
>> Well, see above. Going from 300 x 100x100 to 4 x 500x500, scipy takes
>> the same or more time, whereas numpy takes less.
pearu> Yes, because in the latter case most of the time is spent in
pearu> ATLAS routines and the time spent in interfaces is very small;
pearu> after all, there are only 4 calls. But in the former case (300 x
pearu> 100x100), ATLAS routines finish more quickly, and since there
pearu> are lots of calls (300), the time spent in interfaces
pearu> becomes noticeable.
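The overhead argument can be illustrated with a small timing sketch. This is not from the original thread; it uses modern NumPy as a stand-in for the Numeric/scipy builds of 2002, and `bench_inverse` is a hypothetical helper, but the shape of the comparison (many small calls vs. few large ones) is the same:

```python
import time
import numpy as np

def bench_inverse(n_calls, size):
    """Time n_calls inversions of a well-conditioned size x size matrix."""
    rng = np.random.default_rng(0)
    # Diagonally dominant matrix, so inv() is numerically safe.
    a = rng.standard_normal((size, size)) + size * np.eye(size)
    t0 = time.perf_counter()
    for _ in range(n_calls):
        np.linalg.inv(a)
    return time.perf_counter() - t0

# Many small problems: per-call interface overhead dominates.
t_small = bench_inverse(300, 100)
# Few large problems: time is spent almost entirely inside LAPACK/BLAS.
t_large = bench_inverse(4, 500)
```

With many small matrices, the fixed per-call cost of the wrapper is paid 300 times, so a faster interface layer shows up directly in the total.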
All this is perfectly clear and I do see your improvements. What this
boils down to is that scipy is good when you have a huge number of
evaluations for small matrices. (If the number of evaluations is small
it doesn't matter, and if the matrices are big Numeric is just as good.)
pearu> But you don't believe me ;-)
Yes, I do. Even before I got your mail. :) I meant to say: "I believe
you, but there is not enough data to prove it."
pearu> scipy interfaces to ATLAS routines are X times faster than the
pearu> corresponding interfaces in Numeric.
That does make perfect sense. And a factor > 2 is actually a lot
considering the "small" amount of work that has to be done here.
"3--5" you say... phiuuuh
pearu> Note also that the n's used in these tests are relatively
pearu> small. If n is really large, then I would expect scipy to also
pearu> perform better than Numeric, because the interfaces in scipy are
pearu> optimized to minimize memory usage as well.
Ah, here we go. That is what would help *me*. I'll check it.
pearu> So, I don't find these testing results strange as you
Well, knowing what Travis posted, I must say that what I find strange is
that he compared Numeric's lapack_lite with ATLAS when it is so easy to
use a machine-optimized LAPACK/BLAS with Numeric. And BTW, ATLAS
isn't always the best choice here -- one reason why I don't like this
tight binding to ATLAS too much.
pearu> I agree. However this comparison is still generally useful: it
pearu> gives a motivation for people to build numpy with optimized
pearu> LAPACK/BLAS libraries.
Yep, but then you have to post it on numpy-discussion. I assume
everybody on the scipy list who has gone through the trouble of
installing scipy (I know, it has gotten a lot better again lately) has
installed Numeric with LAPACK/BLAS support.
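For what it's worth, checking which BLAS/LAPACK a build actually links against is easy nowadays. This is a present-day NumPy sketch, not something Numeric offered in 2002:

```python
import numpy as np

# Prints the BLAS/LAPACK configuration this NumPy build was compiled
# against, e.g. OpenBLAS, MKL, or ATLAS.
np.show_config()

# Sanity check that the linked LAPACK actually works.
print(np.allclose(np.linalg.inv(np.eye(3)), np.eye(3)))  # True
```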
pearu> Yes, but curiously enough, with non-contiguous input the
pearu> calculation is systematically faster or at the same speed,
pearu> but rarely slower.
Hmm, looking at the test I see that non-contiguous means "inverted".
Could this and the copying actually help caching? There should
probably be some really non-contiguous (i.e. abs(stride) != 1) data in
the tests.
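The distinction can be made concrete with a small sketch. This uses modern NumPy notation rather than Numeric, but the stride mechanics are the same: a reversed view has abs(stride) equal to the item size, while a strided slice does not, and both are non-contiguous:

```python
import numpy as np

a = np.arange(12.0).reshape(3, 4)  # C-contiguous, row stride = 4 * 8 bytes
b = a[::-1]                        # reversed rows: negative stride, "inverted"
c = a[:, ::2]                      # every other column: abs(stride) != itemsize

print(a.flags['C_CONTIGUOUS'])  # True
print(b.flags['C_CONTIGUOUS'])  # False
print(c.flags['C_CONTIGUOUS'])  # False

# LAPACK wrappers typically copy non-contiguous input into contiguous
# storage before calling into Fortran; that copy is the cost (or,
# possibly, the cache warm-up) under discussion.
```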
Something related: How much work would it be to get f2py/LinAlg to
work with numarray?
There is a numpy_compat now that supports almost all of Numeric, so
this would be one way. But in the long run one would like to have a
native implementation of linalg, of course...
Einigkeit und Recht und Freiheit http://www.Jochen-Kuepper.de
Liberté, Égalité, Fraternité GnuPG key: 44BCCD8E
Sex, drugs and rock-n-roll