David Cournapeau wrote:
> No, and it never will. Parallel builds require dependency handling.
> Even make does not handle it well: it works most of the time by
> accident, but there are numerous problems (try, for example, building
> lapack with make -j8 on an 8-core machine - it will produce a bogus
> library 90% of the time, because it starts building the static
> library with ar while some object files are still being compiled).
You may call me naive and ignorant, but is it really that hard to achieve some kind of poor man's concurrency? You don't have to parallelize everything to get a speed-up on multi-core machines. Usually the compilation step from a C/C++ file to an object file takes up most of the time. How about:

* assemble a list of all C/C++ source files of all extensions,
* compile all source files in parallel,
* do the rest (linking etc.) serially?

This should give a nice speed-up without much work and without complex dependency analysis (see the sketch after this message). Do you see a possible pitfall? I don't.

Christian
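[A minimal sketch of the scheme described above, assuming a plain cc toolchain; the compiler name, flags, and output names are illustrative and not what numpy.distutils actually invokes:]

    # Compile every C source in parallel, then do the serial steps
    # (linking) only after all object files exist. The explicit barrier
    # between the two phases is what an unordered `make -j` can get wrong.
    import subprocess
    import sys
    from concurrent.futures import ThreadPoolExecutor
    from pathlib import Path

    def compile_one(source: Path) -> Path:
        """Compile a single C file to an object file; raise on failure."""
        obj = source.with_suffix(".o")
        # "cc" and the flags are assumptions standing in for the real
        # compiler command a build system would construct.
        subprocess.run(["cc", "-c", str(source), "-o", str(obj)], check=True)
        return obj

    def build(sources, out="extension.so"):
        # Step 1: the caller assembles the list of source files.
        # Step 2: compile all of them concurrently. Threads suffice here
        # because the real work happens in the compiler subprocesses.
        with ThreadPoolExecutor() as pool:
            objects = list(pool.map(compile_one, sources))
        # Step 3: link serially, only after every compile has finished.
        subprocess.run(["cc", "-shared", *map(str, objects), "-o", out],
                       check=True)

    if __name__ == "__main__":
        build([Path(p) for p in sys.argv[1:]])

[pool.map raises if any compile fails, so the link step never runs against a partial set of object files - the failure mode David describes with ar under make -j.]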