Quoting David Brown <da...@westcontrol.com>:

> I presume fixing the slow, inefficient serial nature of autotools is a
> hard job - otherwise it would have been done already.  But any
> improvements there could have big potential benefits.

I think it's more likely that it is tedious and time-consuming to do,
and not really high enough on any one person's priority list.

The likelihood that a test depends on the outcome of the last few tests
is rather low.  So you could run tests speculatively with an incomplete
set of defines, and then re-run them once you have gathered the results
from the preceding tests, to verify that you computed the right result.
Backtrack if necessary.
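
Roughly like this (a minimal sketch in C, not autoconf code: run_test()
and its chain of dependencies are made-up stand-ins for real configure
tests, and the "parallel" speculative pass is written as a plain loop
for brevity):

/* Speculative test scheduling: run every test against an incomplete
 * snapshot of the defines, then verify the guesses and re-run
 * (backtrack) where the speculation turned out wrong. */
#include <stdio.h>

#define NTESTS 5
#define UNKNOWN (-1)

/* Made-up test: test i "passes" iff test i-1 passed, so a wrong
 * speculation about i-1 forces a re-run of i. */
static int run_test(int i, const int *defs)
{
    if (i == 0)
        return 1;
    return defs[i - 1] == 1;
}

int main(void)
{
    int defs[NTESTS], result[NTESTS];
    int assumed[NTESTS][NTESTS];   /* snapshot each test ran against */

    for (int i = 0; i < NTESTS; i++)
        defs[i] = UNKNOWN;

    /* Pass 1: run all tests speculatively against the (mostly
     * unknown) defines; in a real tool these would run in parallel. */
    for (int i = 0; i < NTESTS; i++) {
        for (int j = 0; j < NTESTS; j++)
            assumed[i][j] = defs[j];
        result[i] = run_test(i, defs);
    }
    for (int i = 0; i < NTESTS; i++)
        defs[i] = result[i];

    /* Pass 2: verify each speculation in order; re-run any test whose
     * snapshot disagrees with the now-known defines it depends on. */
    for (int i = 0; i < NTESTS; i++) {
        int stale = 0;
        for (int j = 0; j < i; j++)
            if (assumed[i][j] != defs[j])
                stale = 1;
        if (stale) {
            result[i] = run_test(i, defs);
            defs[i] = result[i];
        }
    }

    for (int i = 0; i < NTESTS; i++)
        printf("test %d -> %d\n", i, result[i]);
    return 0;
}

Since a test can only depend on tests that come before it in the
sequence, one in-order verification pass is enough in this sketch.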

Also, some parts of the autotools could probably be sped up by using
compiled C programs instead of shell scripts and m4, particularly if
we add the test-result sync/merge tasks outlined above.  Building this
compiled autotools might in turn need some old-style autoconf, and it
might only be suitable for a subset of systems, but with the heavy
usage of autotools that we have today, the build cost would be quickly
amortized on a typical developer's machine.
Reminds me a bit of the evolution of the Usenet news software C-News.
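
To make the compiled-helper idea concrete, a batch of header checks
could be driven by something like this (a sketch only: the header list
and the bare "cc -E" invocation are placeholders for whatever configure
would really substitute, and error handling is minimal):

/* Compiled replacement for a serial shell loop of header checks:
 * fork one preprocessor probe per header and merge the results. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

static const char *headers[] = { "stdint.h", "pthread.h", "sys/epoll.h" };
#define NHDR (sizeof headers / sizeof headers[0])

int main(void)
{
    pid_t pids[NHDR];

    for (size_t i = 0; i < NHDR; i++) {
        pids[i] = fork();
        if (pids[i] < 0) {
            perror("fork");
            return 1;
        }
        if (pids[i] == 0) {
            /* Child: write a one-line probe and try to preprocess it. */
            char src[64], cmd[160];
            snprintf(src, sizeof src, "conftest%zu.c", i);
            FILE *f = fopen(src, "w");
            if (!f)
                _exit(2);
            fprintf(f, "#include <%s>\n", headers[i]);
            fclose(f);
            snprintf(cmd, sizeof cmd, "cc -E %s >/dev/null 2>&1", src);
            int rc = system(cmd);
            remove(src);
            _exit(rc == 0 ? 0 : 1);
        }
    }

    /* Parent: collect and report the results in order. */
    for (size_t i = 0; i < NHDR; i++) {
        int status;
        waitpid(pids[i], &status, 0);
        printf("checking for %s... %s\n", headers[i],
               (WIFEXITED(status) && WEXITSTATUS(status) == 0) ? "yes" : "no");
    }
    return 0;
}

The point is not this particular check but the shape of it: a compiled
driver can fork the probes in parallel and then do the result
sync/merge step, which is awkward to express in serial shell.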
