[SciPy-dev] automatic test script for scipy

eric eric at scipy.org
Fri Apr 12 12:27:06 CDT 2002

Hello group,

As I've tried to get testing of the build done for the upcoming release, I found
it arduous to test on even the few platforms I have here.  So, I took a couple
of days to put together an automatic testing facility.  It is still pretty raw,
but, with polishing, it should clean up nicely.  The scripts automatically test
SciPy builds against different versions of Python, Numeric, f2py2e, atlas
(sorta).  Adding things like fftw, etc. should also be fairly easy.  Currently,
the script runs and then mails its output to scipy-testlog at scipy.org.  This is a
new mailing list that you can subscribe to here:


I doubt you want to though.  The output is probably only interesting to a few of
us.  Also, the mail messages are currently very large (700K), and there are
likely to be many (10-50) from nightly cron jobs running on test machines.  It's
probably better to look over the archives.  Here is the one for April:


As for the release, it is past mid-week and it still isn't here.  The tests are
mostly passing now, so it is just polishing, release docs, etc. that are needed
(I think?).  Still,  I'm done guessing when the release will happen. :-|
However, I hope to make some beta tar-balls and windows exe distributions today
or this weekend for others to test.

For those curious, Numeric 19.0 and before fail to work with SciPy.  20.0 works
but produces 3 errors.  20.3 and later all pass.

As for the script, it is working on Linux now, and I hope to get it going on
cygwin and Windows (with some modification) soon.  Here is a standard call:

    full_scipy_build(build_dir = '/tmp/scipy_test',
                     test_level = 10,
                     python_version  = '2.1.3',
                     numeric_version = '18.4.1',
                     f2py_version    = '2.13.175-1250',
                     atlas_version   = '3.3.14',
                     scipy_version   = 'snapshot')

This will look in a local repository of tarballs (hard coded for our network
right now) to see if python-2.1.3.tgz exists there.  If not, it will download it
from the python.org site, cache it in the local repository, unpack it, and build
it.  It follows this same procedure for every package (except atlas which I'll
discuss in a minute).  The scipy 'snapshot' is a nightly CVS snapshot from the
scipy ftp site.  The builds of numeric, f2py, and scipy are done with the
just-built
version of python.

After building everything, it runs scipy's test suite at level=10.  Both the
build and test reports are sent in the email message.

Enthought will eventually cover Windows, Sun, Irix, RH Linux, and Mac OS X
here -- all with gcc (or MSVC on windows).  Right now, it only runs on our Linux
server.  Feel free to use the script and email reports from your architecture.
You'll have to be willing to poke through some code and change settings though
(mostly at the top of the script) to get it working on your local machine.  It
takes 15 minutes to build/test a single group of settings on our relatively fast
Linux server.

Things that need work:

The reports are way too verbose.  We need to set things up so that when Python
builds successfully, it just reports success instead of the entire build
process.  This would cut message size significantly.  I think this is very
important.
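
A trimming helper could look something like this (hypothetical -- the reports
don't do any of this yet; the names and the 50-line tail are arbitrary):

```python
def summarize(step, status, log, tail_lines=50):
    """Report a one-line success for a step that worked; on failure,
    include only the last tail_lines of its log.  (Sketch of the
    report-trimming idea, not the script's actual code.)"""
    if status == 0:
        return '%s: OK\n' % step
    lines = log.splitlines()
    return '%s: FAILED\n%s\n' % (step, '\n'.join(lines[-tail_lines:]))
```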

Clean up reports.  Right now, they are very raw.  They should probably have more
OS/compiler information and also an easier to find report of success or failure.
Also, the test results are coming out in screwy orders (stderr/stdout issues I
expect), and this should be fixed.  I think a custom unittest report class would
solve a lot of this.

Add compiler options (cc, gcc, kgcc, etc.) for the individual packages.  Some
people are reporting that SciPy built with compiler X doesn't play well with
Python or Numeric built with compiler Y.  It'd be nice to set the compiler
each package is built with.

Atlas.  Right now the script is hardcoded to download some atlas libraries I
built for RH on PIII.  This is obviously not portable.  Automating the atlas
build is also hard because it requires user input.  I think we could replace
config.c with our own config.c that just removes the user input requests and
uses the default values.  That wouldn't be hard.  We'd also need to make
config.c emit
the directory where it was going to put the library files in an easy to parse
way so that we'd know where to copy them from.
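
Pulling that directory back out of the config output could be as simple as the
following.  The `ATLAS_LIB_DIR=` marker line is an invented convention -- it's
what we'd have to patch our config.c to print:

```python
import re

def find_lib_dir(config_output):
    """Extract the library directory from the modified config output,
    assuming our config.c prints a marker line like
    'ATLAS_LIB_DIR=/path/to/libs'.  (The marker name is made up.)"""
    m = re.search(r'^ATLAS_LIB_DIR=(.*)$', config_output, re.M)
    if m is None:
        raise ValueError('no library directory marker in config output')
    return m.group(1).strip()
```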

An alternative approach is to set up a repository of precompiled atlas libraries
(similar to the one I have now).  This could go on SciPy, and we could use some
standard naming convention for OS/architecture.
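
One possible convention, sketched in Python -- the naming scheme itself is just
a proposal, nothing is standardized yet:

```python
import os
import sys

def atlas_tarball_name(version):
    """Propose an OS/architecture tarball name for a repository of
    prebuilt atlas libraries, e.g. atlas-3.3.14-linux2-i686.tgz.
    (The naming convention here is an assumption, not a standard.)"""
    try:
        machine = os.uname()[4]     # e.g. 'i686', 'x86_64'
    except AttributeError:
        machine = 'unknown'         # os.uname is missing on windows
    return 'atlas-%s-%s-%s.tgz' % (version, sys.platform, machine)
```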

I think we should do both...

Testing against the OS installed Python instead of a locally built one.
This is probably pretty important -- especially for windows.  I haven't set this
up yet.  The tests always use a version of Python built by the script.

CVS testing.
Right now, the test structure can only build from tarballs.

Move from email to a web interface.
The reports should really be summarized on the web in a table.

Agent based?
It'd be nice to have a web of test machines that people could "tell" to start
testing the latest CVS.  This would allow people to find out if their latest
changes break other OSes.  Shouldn't be too difficult, but I'm also not sure how
much this is really worth.  The nightly crons are likely enough.

Speeding up tests.
It takes a while to build and then test all these packages -- more than 15
minutes on our Linux server.  The script already reuses previously built
versions of a tool (a second make on Python runs very fast because the files
are already built), but there may be some streamlining that would help.  Not sure
this can really be improved.

Separate .cfg file so that site dependent settings are split out of the script
that is included in the CVS.

Many more I'm sure.


Eric Jones <eric at enthought.com>
Enthought, Inc. [www.enthought.com and www.scipy.org]
(512) 536-1057
