Hello all,

Now that I have a hammer and everything looks like a nail: I've built a new home NAS with cache and log on SSDs (my first, heh), so over-NFS compilation should be faster than was previously possible on HDD-based rigs with VirtualBox VMs acting as NFS remote compile nodes.
So I am wondering whether (and how) the dmake used in the illumos-gate build process can be used to distribute the compilation load over several compute nodes. The storage box itself is not a supercomputer (an N54L), and much of its CPU time is spent serving the data pool (pretty quick though), but at home it is surrounded by a number of desktops, each of which could in theory run a VirtualBox VM with OI inside and contribute CPU time to a distributed compilation of a large project such as the gate. Previously I'd have said this would bottleneck on network I/O; now I hope that barrier is gone.

Has anyone done this recently? Is the "d" in "dmake" actually used by anyone in the community? Does the idea have merit, i.e. does it reduce compilation time? Should all build environments be set up identically (arch, compiler, etc.), or what? Any how-tos? :)

Also, if such setups are used, are they "rigid", requiring a predefined set of compilation nodes that must all be up, or can they dynamically use a subset of whichever nodes happen to be available (i.e. VMs fired up on PCs with no immediate load from their users, and shut down during heavy local load like gaming)?

Thanks,
//Jim Klimov

_______________________________________________
oi-dev mailing list
[email protected]
http://openindiana.org/mailman/listinfo/oi-dev
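For anyone wanting to experiment: as far as I understand it, Sun's dmake supports serial, parallel and distributed modes (selected with `-m` or the `DMAKE_MODE` environment variable), and in distributed mode it reads a runtime configuration file, `~/.dmakerc` by default, that defines groups of build servers. A minimal sketch, with hypothetical host names and per-host job counts, might look like this:

```
# ~/.dmakerc -- hypothetical hosts; "jobs" caps concurrent jobs per host.
# All hosts would need the same toolchain and the workspace mounted
# at the same NFS path.
group "oi-builders" {
    host nas      { jobs = 2 }
    host desktop1 { jobs = 4 }
    host desktop2 { jobs = 4 }
}
```

The build would then be started with something like `dmake -m distributed -g oi-builders` (or by exporting `DMAKE_MODE=distributed` and `DMAKE_GROUP=oi-builders` before running the usual build). Note this is a sketch from the dmake documentation, not a tested illumos-gate setup; the stock nightly build, as far as I know, only drives dmake in parallel mode on one machine via `DMAKE_MAX_JOBS`.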
