[Numpy-discussion] [ANN] numscons 0.3.0 release

Matthew Brett matthew.brett@gmail....
Fri Jan 25 05:03:36 CST 2008


> > I've attached the build logs.  I noticed that, for atlas, you check
> > for atlas_enum.c - but do you in fact need this for the build?
> No. I just wanted one header specific to ATLAS. It looks like not all
> versions of ATLAS install this one, unfortunately (3.8, for example).
> > numpy.distutils seemed to be satisfied with cblas.h and clapack.h in
> > /usr/local/include/atlas.  It's no big deal to copy it from sources,
> > but was there a reason you chose that file?

> No reason, but it cannot be cblas.h (it has to be atlas specific;
> otherwise, it does not make sense). The list of headers to check can be
> empty, though.

I see.  You want to be sure from the include check that you have
actually discovered ATLAS rather than something else?  I guess that
numpy.distutils solves this by the directory naming conventions - so -
if it finds cblas.h in /usr/local/include/atlas as opposed to
somewhere else, then it assumes it has the ATLAS version?  Are these
files different for ATLAS than for other libraries?  If not, do we
need to check that they are the ATLAS headers rather than any other?

> > The test for linking to blas and lapack from atlas fails too - is this
> > a false positive?
> Hmm, if the atlas check does not work, it sounds like a right negative
> to me :) If ATLAS is not detected correctly, it won't be used by the
> blas/lapack checkers. Or do you mean something else?

I mean false positive in the sense that it appears that numpy can
build and pass tests with the ATLAS I have, so excluding it seems too
stringent.  The tests presumably should correspond to something the
numpy code actually needs, rather than parts of ATLAS it can do
without.

> > For both numpy.distutils and numpyscons, default locations of
> > libraries do not include the lib64 libraries like /usr/local/lib64
> > that us 64 bit people use.  Is that easy to fix?
> Yes, it is easy, in the sense that nothing in the checkers code is
> hardcoded: all the checks internally use BuildConfig instances, which
> are like dictionaries with default values and a restricted set of keys
> (the keys are library path, libraries, headers, etc.). Those BuildConfig
> instances are created from a config file (perflib.cfg), and should
> always be customizable from site.cfg.
> The options which can be customized can be found in the perflib.cfg
> file. For example, having:
> [atlas]
> htc =
> in your site.cfg should say to CheckATLAS to avoid looking for atlas_enum.h
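If it helps make the idea concrete, here is a minimal sketch of that kind
of restricted-key, default-valued configuration object (the class name,
keys, and behavior are illustrative assumptions, not the actual numscons
BuildConfig API):

```python
class BuildConfig:
    """Dict-like build configuration with a fixed set of allowed keys.

    Illustrative sketch only -- the real numscons BuildConfig may differ.
    Every allowed key gets an empty-list default; unknown keys are rejected.
    """

    _ALLOWED = ("library_dirs", "libraries", "include_dirs", "headers")

    def __init__(self, **kwargs):
        # Start from empty defaults for every allowed key.
        self._data = {key: [] for key in self._ALLOWED}
        for key, value in kwargs.items():
            self[key] = value

    def __getitem__(self, key):
        return self._data[key]

    def __setitem__(self, key, value):
        if key not in self._ALLOWED:
            raise KeyError("unknown config key: %r" % (key,))
        self._data[key] = list(value)


# Example: an ATLAS config whose header-check list is empty,
# analogous to setting "htc =" in site.cfg above.
atlas = BuildConfig(libraries=["atlas", "cblas"], headers=[])
```

A factory could then build such instances from perflib.cfg defaults and
overlay any values found in site.cfg.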

Thanks ...

> To make 64 bits work by default is a bit more complicated. I thought a
> bit about the problem: that's why the checkers do not use BuildConfig
> instances directly, but request them through a BuildConfigFactory. One
> problem is that I don't understand how 64 bit libraries work; more
> precisely, what is the convention for library paths? Is there a lib64
> counterpart for each lib directory (/lib64, /usr/lib64,
> /usr/local/lib64)? Should the standard ones (/lib, /usr/lib) be checked
> at all, after the 64 bit counterparts?

Well, the build works fine, it's just the perflib discovery - but I
suppose that's what you meant.  I think the convention is that 64 bit
libraries do indeed go in /lib64, /usr/lib64, /usr/local/lib64.  My
guess is that only 32 bit libraries should go in /lib, /usr/lib, but I
don't think that convention is always followed.  fftw, for example,
installs 64 bit libraries in /usr/local/lib on my system.  The
compiler (at least gcc) rejects libraries that are in the wrong
format, so I believe that finding a 32 bit library will just cause a
warning and the search will continue.
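The search order described above could be sketched roughly like this (the
function and the exact path list are my assumptions, not what numscons
currently implements):

```python
def candidate_lib_dirs(is_64bit):
    """Return library directories to try, lib64 first for each prefix.

    Hypothetical sketch of the convention discussed above: prefer
    <prefix>/lib64 on 64 bit systems, then fall back to <prefix>/lib,
    since packages like fftw sometimes install 64 bit libraries there
    anyway and the linker will skip wrong-format files.
    """
    prefixes = ["/usr/local", "/usr", ""]
    dirs = []
    for prefix in prefixes:
        if is_64bit:
            dirs.append(prefix + "/lib64")
        dirs.append(prefix + "/lib")
    return dirs
```

On a 32 bit system this yields the familiar /usr/local/lib, /usr/lib,
/lib; on 64 bit, each lib64 directory is tried before its lib
counterpart, so a wrong-format library in lib would only be reached if
nothing suitable was found in lib64 first.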

Thanks again.  Herculean work.

