Hello, Ralf Wildenhues <[email protected]> writes:
> * Ludovic Courtès wrote on Mon, Mar 07, 2011 at 06:16:17PM CET:

[...]

>> FWIW I think the cumulative plots make sense when trying to answer the
>> question “how many packages have a speedup <= X”.
>
> Well, if I'd ask a cumulative question I'd probably rather ask how many
> packages have a speedup >= X.  Small but significant difference.

Hmm, OK.

>> >> > I am fairly surprised GCC build times scaled so little.  IIRC I've seen
>> >> > way higher numbers.  Is your I/O hardware adequate?
>> >>
>> >> I think so.  :-)
>
> Can you please make your hardware specs available?  I've had two 16-way
> systems, and one could barely get more than a speedup of 2, while the
> other chugged away nicely at 10 or so.

See <https://plafrim.bordeaux.inria.fr/doku.php?id=plateforme:configurations:dancharia>.

[...]

>> >> > Did you use only -j or also -l for the per-package times (I would
>> >> > recommend to not use -l).
>> >>
>> >> I actually used ‘-jX -lX’.  What makes you think -l shouldn’t be used?
>> >
>> > Typically, -lX leads to waves in the load, due to the latency between
>> > measurement and action, and of course the lag from the measurement
>> > interval.  There are long periods in which processes are already done
>> > but the load is still listed as too high.
>>
>> Right, I see.
>>
>> I don’t think it hindered scalability though, since, as the measurements
>> show, few packages scale beyond 2, even with ‘-j32 -l32’.
>
> I don't believe this argument.  I think -l is fundamentally flawed, and
> I don't see why the slowdown it causes should be bounded from below,
> except by the number of total compiles (which is higher than 32 in the
> case of GCC).

I’ll see if I can run the whole thing without ‘-l’.  I’d be surprised if
all those builds suddenly scale up, though.

Thanks,
Ludo’.

_______________________________________________
Autoconf mailing list
[email protected]
http://lists.gnu.org/mailman/listinfo/autoconf
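[Editor's note: the lag behind ‘make -l’ that Ralf describes above can be illustrated with a toy simulation of an exponentially damped load average. This is only a sketch; the 60-second time constant and 5-second sampling period are illustrative assumptions, not what any particular kernel actually uses.]

```python
import math

def damped_load(samples, period=5.0, tau=60.0):
    """Simulate an exponentially damped load average.

    `samples` gives the instantaneous number of runnable jobs at each
    sampling point, taken every `period` seconds; `tau` is the damping
    time constant (illustrative stand-in for the 1-minute average).
    """
    decay = math.exp(-period / tau)
    load, history = 0.0, []
    for n in samples:
        load = load * decay + n * (1.0 - decay)
        history.append(load)
    return history

# 32 jobs run for three minutes, then all finish at once
# (5-second samples over four minutes in total).
samples = [32] * 36 + [0] * 12
loads = damped_load(samples)

# Even 30 s after the last job has exited, the reported load is still
# above 18, so a load-limited make would keep withholding new jobs
# long after the machine is actually idle.
print(round(loads[35], 1), round(loads[41], 1))
# → 30.4 18.4
```

This is the mechanism behind the waves: the damped average keeps reporting load from processes that have already exited, make holds back jobs until it decays, and the load then collapses before the next burst of jobs pushes it up again.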
