As much as I love distribution debates, I have been really trying to just let it happen and not get drawn in. I think I did a pretty good job too! ... Well, as of 5 minutes ago.

How can anyone really take seriously, as a cluster solution, a distribution that does not support its own packages for any reasonable amount of time? Setting aside the simple-to-fix openssh-type bugs, what happens when the core OS has a vulnerability or major bug? None of the options seems acceptable to me:

   1. let it go and don't fix it
   2. create the fix ourselves from a supported src.rpm from a different distro (see the sketch just after this list)
   3. back-port the fix myself
   4. merge a binary fix from a newer OS revision
   5. hope that someone else will do #2 or #3 and share their work
   6. upgrade the entire cluster several times during its lifetime
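
To make #2 and #3 concrete, here is roughly what that looks like in practice (a sketch only; the package names, versions, and build paths below are placeholders and vary by distro):

   # 2: rebuild a fixed package from another distro's source RPM
   rpmbuild --rebuild openssh-x.y.z-NN.src.rpm

   # 3: back-port the fix by hand: install our old source RPM, drop the
   # upstream security patch into SOURCES, add a PatchN:/%patchN pair to
   # the spec file, then rebuild
   rpm -ivh openssh-a.b.c-MM.src.rpm
   cp upstream-fix.patch /usr/src/redhat/SOURCES/
   vi /usr/src/redhat/SPECS/openssh.spec
   rpmbuild -ba /usr/src/redhat/SPECS/openssh.spec

Doable, but it is exactly the kind of work the distribution is supposed to be doing for you.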

Maybe single-user systems (or systems with a very low user count) can find this model reasonable, but for any multiuser system (especially one not at an *.edu) it is unreasonable. A break-in on a system maintained like this can get people fired for very bad decisions and weak sysadmin practices. About the only saving excuse is "the postdoc or scientist did it!".

Regarding hardware support, John, you hit the nail on the head. It is much easier to update the kernel or modules than to support a non-upstream-supported version of glibc (or another core library). Also, when purchasing a cluster from a hardware integrator, we always add a requirement that the system must be compatible with a default install of the distribution we plan on using (with needed exclusions specified).
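
To put that in concrete terms, a kernel or driver refresh is usually just a package swap plus a module rebuild, roughly like this (a sketch only; the driver directory is a placeholder and the package tool depends on the distro and its era):

   # install the updated kernel and its build tree, then reboot into it
   yum install kernel kernel-devel

   # rebuild a hypothetical out-of-tree vendor NIC driver against the
   # running kernel's build tree
   cd vendor-nic-driver-1.2/
   make -C /lib/modules/$(uname -r)/build M=$(pwd) modules
   make -C /lib/modules/$(uname -r)/build M=$(pwd) modules_install
   depmod -a

Replacing glibc, by contrast, touches essentially every dynamically linked binary on the node.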

Tracking Fedora releases is masochistic, both for IHVs/ISVs and for administrators who care about security and stability.

Greg



On Apr 16, 2007, at 1:05 AM, John Hearns wrote:

Robert G. Brown wrote:
On Sun, 15 Apr 2007, John Hearns wrote:
And re. the future version of Scientific Linux, there has been debate on the list re. co-operating with CentOS and essentially using CentOS

IMO, most cluster builders will find it more advantageous to track the FC releases instead of using RHEL or CentOS or things derived therefrom. Hardware support is key, and CentOS can get long in the tooth pretty quickly in a cluster environment with any sort of annual turnover.
Bob,
   at long last I can take issue with you.
I don't agree re. Fedora. We as cluster builders have to support machines for at least three years, and are commonly requested to extend support. I don't see how we can support a distribution which has a 'live' lifetime of six months (not sure how long updates are for after that). After three years the distro is far, far out of date.


Your point re. hardware support is of course correct, and refutes my argument above. We deal with this by backporting up-to-date kernels and drivers (and other packages, such as a dhcp server to RH 7.3 recently!).


If you reply that 'rolling updates' a la Debian would be possible, that would be OK if well engineered (*) on academic sites. But on commercial and secure Government sites machines are very often operated on an isolated LAN, and stability (read 'don't change things unnecessarily') is a key requirement there too.



(*) Ha. Well engineered?
Take the recent SuSE update which killed system logging on one of our clusters. The SuSE update RPM for syslog-ng now requires that the syslog-ng.conf file be present (it is not present on the default install).
YaST quietly updates the RPM during the night last November.
The system is rebooted a couple of weeks ago. We're asked to diagnose a problem, and lo and behold, no system logs. (The fix is to use SuSEconfig to create the syslog-ng.conf file and restart syslog.)
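
In command terms the recovery is roughly the following (the SuSEconfig module name may vary between SuSE releases):

   # regenerate the missing config from its template and restart logging
   SuSEconfig --module syslog-ng
   rcsyslog restart      # or /etc/init.d/syslog restart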










--
Greg Kurtzer
[EMAIL PROTECTED]


_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org
To change your subscription (digest mode or unsubscribe) visit 
http://www.beowulf.org/mailman/listinfo/beowulf
