[Numpy-discussion] parallel compilation of numpy

David Cournapeau david@ar.media.kyoto-u.ac...
Wed Feb 18 21:08:11 CST 2009

Christian Heimes wrote:
> David Cournapeau wrote:
>> No, and it never will. Parallel builds require dependency handling,
>> and even make does not handle it well: it works most of the time by
>> accident, but there are numerous problems (try, for example, building
>> LAPACK with make -j8 on an 8-core machine - it will produce a bogus
>> library 90% of the time, because it starts building the static
>> library with ar while some object files are still being built).
> You may call me naive and ignorant, but is it really that hard to
> achieve some kind of poor man's concurrency? You don't have to
> parallelize everything to get a speed-up on multi-core machines.
> Usually, compiling the C/C++ files into object files takes up most of
> the time. How about:
> * assemble a list of all C/C++ source files of all extensions.
> * compile all source files in parallel
> * do the rest (linking etc.) in serial

That's more or less how make works - and it does not work very well,
IMHO. Doing the above correctly in distutils may also be harder than it
seems: both SCons and waf, for example, had numerous problems calling
subtasks because of race conditions in the subprocess module (both now
ship their own module for that).
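For what it's worth, the compile-in-parallel/link-serially scheme
sketched above is easy to express with Python's concurrent.futures. This
is only an illustration - the source names are made up and the compiler
invocation is stubbed out in a comment, so it is not how distutils (or
any real build tool) does it:

```python
from concurrent.futures import ThreadPoolExecutor

sources = ["foo.c", "bar.c", "baz.c"]  # hypothetical source list

def compile_one(src):
    # A real build would invoke the compiler here, e.g.
    # subprocess.check_call(["cc", "-c", "-fPIC", src]); this sketch
    # just returns the object-file name so it stays runnable anywhere.
    return src.replace(".c", ".o")

# Step 1: compile all source files concurrently.
with ThreadPoolExecutor(max_workers=4) as pool:
    objects = list(pool.map(compile_one, sources))

# Step 2: link serially. pool.map only returns once every task has
# finished, which is exactly the ordering guarantee a racy `ar` under
# make -j can violate.
link_cmd = ["cc", "-shared", *objects, "-o", "ext.so"]
```

The hard part, as noted above, is not this happy path but doing it
robustly inside distutils' compiler abstraction.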

More fundamentally, though, I have no interest in working on distutils.
Not working on a DAG is fundamentally and hopelessly broken for a build
tool, and this is unfixable in distutils. Everything is wrong, from the
concepts to the UI to the implementation, to paraphrase a famous saying.
There is nothing to save, IMHO. Of course, someone else can work on it;
I prefer to work on a sane solution myself.
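To make the DAG point concrete: a dependency-aware tool schedules tasks
in topological order, so an archive step can never start before its
object files exist. A toy illustration with the standard library's
graphlib (the file names are invented; this is not scons or distutils
internals):

```python
# Toy dependency graph: each task maps to the tasks it depends on.
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

deps = {
    "foo.o": {"foo.c"},
    "bar.o": {"bar.c"},
    "libfoo.a": {"foo.o", "bar.o"},  # archive depends on ALL objects
}

# static_order() yields nodes with every dependency before its
# dependents - the ordering that make -j only gets by accident.
order = list(TopologicalSorter(deps).static_order())
```

TopologicalSorter also exposes a prepare()/get_ready()/done() protocol,
so independent tasks (here, the two compiles) can safely run in
parallel while dependent ones wait.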


