<disclaimer> I work for Penguin and it's my job to sell Scyld; however, the following also represents my personal opinion. </disclaimer>
The issue you mentioned below is why I love the lightweight compute node concept of Scyld, which installs on top of Red Hat Enterprise Linux 4. While it still installs everything that RHEL 4 would install on the head node that you interact with, the compute nodes do not get any of that. The compute nodes boot precisely what is needed: the kernel and the modules for the hardware detected. Only when an app is migrated out at runtime does the node pull over the additional required libraries, caching them for next time. It boots directly into RAM (no NFS-root overhead). I just ran 'free' on a compute node of one of our demo clusters that has been up for several months, and it still uses only 140 MB of its 2 GB; the only process currently running on it is portmap, since it has /home NFS-mounted.

The downside is that you have to configure the environment for specific applications that expect everything to be there, e.g. if they do system("/usr/bin/perl") at runtime and the like. That is when you start giving up some of the pure principle and solve it by, say, mounting /usr via NFS or pre-deploying the needed files.

Michael Will
SE Technical Lead
Penguin Computing Inc.

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Joe Landman
Sent: Monday, April 16, 2007 8:40 AM
To: Robert G. Brown
Cc: beowulf@beowulf.org
Subject: Re: [Beowulf] SGI to offer Windows on clusters

Robert G. Brown wrote:
> I've always liked the idea of the core remaining a VERY marginal set
> that is pretty much "just enough" to bootstrap an install. One of the

Hmmm.... I indicated this some time ago and got some grief over it. Few of the distro makers seem to like this concept. I want the unit up, running all the drivers needed (and only those drivers needed), with the network, and a bare-bones admin package (ssh, ipmitools, ...). Getting there with most prebuilt distros is excruciatingly hard.
I have gotten SuSE down to a 1.4 GB install, the bare minimum I can make it while satisfying all dependencies and still having a functional compute node; more if I need to get the 32-bit packages there. RH I have not seen installed below about 2 GB. Rocks tries to do a minimal install, and yet you see packages that largely won't be used on nodes being installed in order to satisfy dependencies of packages that will be used. This is not the Rocks people's fault; it is the underlying distribution. You can use DSL and other "small" distros as long as you don't mind ancient kernels, missing drivers, ... This unfortunately doesn't satisfy the needs. Caos3 should (in theory, haven't tried it yet) handle most of this: a very tight install with a limited package dependency radius that is still quite usable/fast.

> things that from time immemorial has bugged me about the red hat
> install process is its complete lack of robustness, so that if for any
> reason it fails in midstream, one pretty much has to start over. This
> has always

YES!!!!! These problems plague every RH-derived distro as well, and FC-derived ones too. Anaconda may be great at many things; robustly installing software and configuring systems is not one of them.

--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics LLC
email: [EMAIL PROTECTED]
web  : http://www.scalableinformatics.com
phone: +1 734 786 8423
fax  : +1 734 786 8452
cell : +1 734 612 4615

_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf
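[Editor's note: the workaround Michael describes for apps that expect a full filesystem, such as calling system("/usr/bin/perl") at runtime, is to NFS-mount /usr from the head node, alongside the NFS-mounted /home he mentions. A minimal sketch of what those mounts might look like follows; the address range and the head-node hostname "head" are illustrative assumptions, not taken from the thread.]

```shell
# /etc/exports on the head node: export /usr read-only and /home
# read-write to the compute-node network (address range is hypothetical):
/usr   10.0.0.0/255.255.255.0(ro,sync)
/home  10.0.0.0/255.255.255.0(rw,sync)

# Matching /etc/fstab entries on a compute node
# ("head" is a hypothetical head-node hostname):
head:/usr   /usr   nfs  ro,nolock  0 0
head:/home  /home  nfs  rw,nolock  0 0
```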