Robert G. Brown wrote:
> On Fri, 24 Oct 2008, Geoff Jacobs wrote:
>
>> Or package the source packages and submit them upstream. Volunteer for
>> a life of servitude!
>
> Well, I was thinking more of site-specific custom cuts a la this sort
> of thing:
>
>> Usually when I build a cluster, I make local builds of MPICH2 for each
>> compiler. This does not fit well with the paradigm of really any
>> distro I've ever seen, which is why I leave it as a custom layer on
>> top of e.g. Debian and do not package it.
>
> (which you wouldn't, probably, want to submit back upstream:-) but yes...

The layer isn't so much packaging as organization.

>> I have yet to see a distro do multiarch really well, so for the moment
>> I try to work around (or perhaps above) the system and avoid using
>> APT/YUM for handling multiple architectures/compiler toolchains.
>
> Multiarch isn't that bad -- it requires maintaining twinned repos for
> the different archs, and I'm sure both rpm and fedora distros do this
> pretty much transparently for i386 and x86_64 and less so (for no
> terribly good reason but fewer users) for any of the other archs out
> there. But multiple compilers -- wow. Never really thought of that
> one.

If it were just that, yeah, I could work with different chroots.
Unfortunately, the problem is not quite so simple. As I said: different
compilers and different build dependencies.

> Maybe I should ask an actual question on the yum list (which, after
> all, I endure listening to on a daily basis) and see if there are any
> suggestions for compiler management. What do you do, install particular
> compilers per system and then need packages to match, or install all
> the per-compiler packages on all systems and select the one you link to
> some other way?

I've worked with a trinity of GNU/PGI/Intel. Portland is the most
notorious offender in terms of binary incompatibility, so I just make a
build for each compiler and use a bit of shell scripting to let each
user switch wrappers. It's really very simple, and I haven't found a
need to change the method in a few years. The compilers themselves stay
more or less static, so the builds only need updating if I change the
MPI layer. I need to do some more work with OpenMPI to get a real feel
for its layout, but from the indications OTW at FSU, for example,
OpenMPI can be handled the same way.
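To make that concrete, here is roughly what the layer looks like. The
MPICH2 version, the /opt prefixes, and the script name below are
illustrative, not my exact setup:

    # Build MPICH2 once per compiler, each under its own prefix
    # (mpich2-1.0.7 and /opt/mpich2/* are examples).
    cd mpich2-1.0.7
    CC=gcc  F77=gfortran ./configure --prefix=/opt/mpich2/gnu
    make && make install && make distclean
    CC=pgcc F77=pgf77    ./configure --prefix=/opt/mpich2/pgi
    make && make install && make distclean
    CC=icc  F77=ifort    ./configure --prefix=/opt/mpich2/intel
    make && make install

    # mpi-select.sh -- users source this to pick a wrapper set, e.g.
    #   . mpi-select.sh pgi
    case "$1" in
        gnu|pgi|intel)
            export PATH="/opt/mpich2/$1/bin:$PATH"
            export LD_LIBRARY_PATH="/opt/mpich2/$1/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
            ;;
        *)
            echo "usage: . mpi-select.sh {gnu|pgi|intel}" >&2
            ;;
    esac

Since each build's mpicc/mpif77 wrappers remember the compiler they were
built with, whichever bin directory is first in PATH gives the user a
consistent compile-and-link environment.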
> rgb

--
Geoffrey D. Jacobs

_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org
To change your subscription (digest mode or unsubscribe) visit
http://www.beowulf.org/mailman/listinfo/beowulf