[SciPy-dev] automatic test script for scipy

pearu at scipy.org
Sat Apr 13 05:04:11 CDT 2002


Eric,

On Fri, 12 Apr 2002, eric wrote:

> > Maybe the following hack is simpler:
> >
> > cd /tmp/dir
> > rm -rf ATLAS     # must start from a clean src to avoid config problems
> > tar xzf /path/to/atlas3.3.14.tar.gz
> > cd ATLAS
> > python -c 'for i in range(100): print' | make  #everything is default
> > ARCH=`python -c 'import glob;print glob.glob("Make*UNKNOWN")[0][5:]'`
> > make install arch=$ARCH
> 
> That's a very good solution -- except... On some platforms -- like our RH 7.1
> machine, there is one question where you have to answer "no" instead of
> "yes" (the default).  That is because RH uses the gcc 2.96 compiler, which
> doesn't produce optimal code for ATLAS.  Even when I specify make CC=kgcc
> and the config.c file is built with kgcc, the process still detects 2.96 and
> complains about it.  We need to figure out a way around this.

That should be easy. I have started using the following script to build my
local ATLAS library:

-----------------------------------
ATLAS_VERS=3.3.14
ATLAS_SRC=atlas$ATLAS_VERS.tar.gz

TMPDIR=`mktemp -d`
cp -v ./$ATLAS_SRC $TMPDIR || exit 1
cd $TMPDIR
echo "Unpacking"
tar xzf $ATLAS_SRC
cd ATLAS || exit 1
# Configure ATLAS by piping the answers to the interactive config;
# each printed line is one answer, an empty string takes the default:
python -c 'for a in [""]*4+["4"]+[""]*3+["/usr/bin/g77","-Wall \
-fno-second-underscore  -fpic  -O3 -funroll-loops  -march=i686 \
-malign-double"]+[""]*10: print a' | make
# The config step creates a Make.<arch> file; recover <arch> for install:
ARCH=`python -c 'import glob;print glob.glob("Make*_*")[0][5:]'`
make install arch=$ARCH
-----------------------------------

where the 5th answer, "4", selects PII, and with the 9th answer you can
change the compiler. For the 10th answer I have supplied some optimization
flags, but they are not necessary.
This gives you an idea of how to fix the compiler under RH.
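
If the answer list in the one-liner gets hard to read, a tiny helper
script could generate the same input. A minimal sketch (the positions and
values below are just the ones from the script above; adjust them for
your machine):

-----------------------------------
# answers.py -- print one ATLAS configure answer per line; an empty
# string means "accept the default".  Usage: python answers.py | make
answers = 20*['']              # defaults everywhere, except:
answers[4] = '4'               # 5th question: PII
answers[8] = '/usr/bin/g77'    # 9th question: Fortran compiler
answers[9] = ('-Wall -fno-second-underscore -fpic -O3 '
              '-funroll-loops -march=i686 -malign-double')
for a in answers:
    print a
-----------------------------------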

> > Another thing: it is not completely clear to me whether these automatic
> > test hooks are only to be used from enthought machines (from a cron
> > job) or should they be used by developers as well? Building everything
> > from scratch can take hours on other machines.
> 
> Please use them wherever you like.  I think that, now that it is spewing
> out 20K or so of output, the mailing list will give us a reasonable feel
> for how scipy is doing on multiple platforms.  I'm not sure the mailing
> list is the best format for this, but it was quick and dirty and gives
> everyone access to the data.  Later we can beautify this whole process and
> perhaps get rid of the mailing list.

The mailing list is fine with me. Could you add the real version number of
scipy to the subject line instead of just a snapshot?
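
For example, the report script could dig the version string out of the
unpacked sources before composing the subject. A rough sketch -- the file
name scipy_version.py, the variable it looks for, and the subject format
are my assumptions here; use whatever the snapshot actually records:

-----------------------------------
# Sketch: pull the version out of the scipy sources for the subject.
import re
src = open('scipy_version.py').read()
m = re.search(r'scipy_version\s*=\s*[\'"]([^\'"]+)', src)
version = m and m.group(1) or 'snapshot'
print '[scipy-autotest] scipy-%s test results' % version
-----------------------------------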

BTW, have you thought about making a scipy-cvs list? I sent you patches
a while ago.

> I'd like to set up some more scenarios, such as testing against a machine's
> current installations also.  This would be faster.  It should also be pretty
> simple -- detect the python version, build anything it is missing into some
> tempdir (numeric, f2py, atlas, whatever), and then build scipy.  It is all
> just logistics.

Great.
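
For the detection part, something as simple as this could be a starting
point (the module names are just the ones I would check for; ATLAS is not
importable, so it would need a separate check for the libraries
themselves):

-----------------------------------
# Sketch: report the running python and which prerequisites are
# already importable, so that only the missing ones get built.
import sys
print 'python', sys.version.split()[0]
missing = []
for mod in ['Numeric', 'f2py2e']:
    try:
        __import__(mod)
    except ImportError:
        missing.append(mod)
print 'need to build:', missing or 'nothing'
-----------------------------------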

> Also, if others choose to run this, we may want to hack the scripts some so that
> machine name, and more diagnostics are returned.
> 
> By the way, did you get this to run on your machine at all Pearu?  I'd be
> interested to learn what needs to be re-factored to get less specific to our
> network.

No, I didn't try. It was a bit late here.
Obviously

local_repository = "/home/shared/tarballs"
local_mail_server = "enthought.com"

are specific to your network, but these are minor issues. Some questions
arose for me, however:

Is it correct that if local_repository contains the sources of all required
software (with the specified version numbers), then nothing is downloaded
from the Internet?
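
That is, I would expect a lookup order like in the following sketch
(fetch() and its arguments are hypothetical, just to illustrate the
question):

-----------------------------------
# Prefer the local repository; download only as a fallback.
import os, urllib

def fetch(name, url, repository='/home/shared/tarballs'):
    local = os.path.join(repository, name)
    if os.path.isfile(local):
        return local                  # found locally -- no download
    print 'not in repository, downloading', url
    urllib.urlretrieve(url, local)
    return local
-----------------------------------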

Another note: once a specified piece of software has been installed to
dst_dir, why not keep it there instead of removing it after the tests? This
would avoid re-compilation if nothing in the software has changed. I guess
dst_dir must then include the version numbers. For example, if python is
installed with the following command

  setup.py install --prefix=/dst_dir/Python-2.1.3

then subsequent installation commands for packages would be

  cd <f2py src>; /dst_dir/Python-2.1.3/bin/python setup.py install
  cd <Numeric src>
  /dst_dir/Python-2.1.3/bin/python setup.py install \
    --prefix=/dst_dir/Python-2.1.3/Numeric-<numpy_ver>

and testing would be executed in the following loop

for py_ver in ['2.1.3',..]:
  for numpy_ver in ['18.3',..]:
    for atlas_ver in [...]:
      cd <scipy src directory>
      # Install
      ATLAS=/path/to/atlas-<atlas_ver>
      PYTHONPATH=/dst_dir/Python-<py_ver>/Numeric-<numpy_ver>\
/lib/python2.1/site-packages /dst_dir/Python-<py_ver>/bin/python \
      setup.py install
      # Test
      PYTHONPATH=/dst_dir/Python-<py_ver>/Numeric-<numpy_ver>\
/lib/python2.1/site-packages /dst_dir/Python-<py_ver>/bin/python \
      -c "import scipy;scipy.test(1)"

But if all this takes too much time to implement, then we could leave it
for SciPy-0.3 and concentrate now on getting SciPy-0.2 out.
The current CVS seems quite stable, and we should use that before it gets
unstable again due to new contributions.

I see that you want to make SciPy releases perfect (testing lots of
platforms, various combinations of software packages, etc.).
It is a very good goal. But I think the first few releases can be a bit
imperfect (incomplete in various parts like tests, docs, etc).
;-)

Pearu



