[Beowulf] OT: Announcing MPI-HMMER

2007-01-02 Thread Joe Landman
Hi folks: Short OT break. http://code.google.com/p/mpihmmer/ is an MPI implementation of HMMer 2.3.2. Back to your regularly scheduled cluster. Joe -- Joseph Landman, Ph.D Founder and CEO Scalable Informatics LLC, email: [EMAIL PROTECTED] web : http://www.scalableinformatics.com phone: +1

Re: [Beowulf] picking out a job scheduler

2007-01-02 Thread Chris Samuel
On Wednesday 03 January 2007 08:06, Chris Dagdigian wrote: > Both should be fine although if you are considering *PBS you should   > look at both Torque (a fork of OpenPBS I think) That's correct, it (and ANU-PBS, another fork) seems to be the de facto queuing system in the state and national HPC
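For anyone following this thread who has not used Torque before, the submission side is short; a minimal sketch (the job name, node counts, and walltime below are illustrative assumptions, not values from the thread):

    #!/bin/sh
    # Minimal Torque/OpenPBS job script (values are examples only)
    #PBS -N test_job           # job name
    #PBS -l nodes=2:ppn=2      # 2 nodes, 2 processors per node
    #PBS -l walltime=01:00:00  # one hour wall-clock limit
    #PBS -j oe                 # merge stdout and stderr
    cd $PBS_O_WORKDIR          # start where qsub was run
    cat $PBS_NODEFILE          # nodes Torque assigned to this job
    mpirun -np 4 -machinefile $PBS_NODEFILE ./a.out

    # submit and check status:
    #   qsub job.sh
    #   qstat -a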

Re: [Beowulf] picking out a job scheduler

2007-01-02 Thread David Simas
On Tue, Jan 02, 2007 at 06:44:50PM -0500, Robert G. Brown wrote: > > One of the bitches that I and many others have about all of the > alternatives is that they are too damn complicated. Many sites -- I > won't say most but many -- have very, very simple needs for a > scheduler/queuing system. N
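As an illustration of just how simple those needs can be, some small sites get by with nothing more than the standard Unix at/batch facility on each node; a sketch, not a recommendation from the thread:

    # batch(1) holds jobs in a per-node queue and runs them when the
    # load average drops below atd's threshold; no cross-node
    # scheduling at all.
    echo "cd /home/alice/run42 && ./simulate < in.dat > out.dat" | batch
    atq        # list queued jobs
    atrm 12    # remove job number 12 (example id)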

Re: FW: [Beowulf] Which distro for the cluster?

2007-01-02 Thread Robert G. Brown
On Thu, 28 Dec 2006, Cunningham, Dave wrote: I notice that Scyld is notable by its absence from this discussion. Is that due to cost, or bad/no experience, or other factors? There is a lot of interest in it around my company lately. Scyld is a fine choice for a cluster, but not usually for

Re: [Beowulf] picking out a job scheduler

2007-01-02 Thread Robert G. Brown
On Tue, 2 Jan 2007, Chris Dagdigian wrote: (3) It's likely that in the future I'll have part-time access to another cluster of dual-boot (XP/linux) machines. The machines will default to booting to Linux, but will occasionally (5-20 hours a week) be used as windows workstations by a console us
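One common way to handle part-time nodes like these, if the scheduler ends up being Torque, is to drain them before the Windows sessions and re-enable them afterwards; a hypothetical sketch (the node name is made up):

    # drain a node ahead of its scheduled Windows use
    pbsnodes -o node07   # mark offline; running jobs finish, no new ones start
    # ... node reboots into XP, is used, reboots back into Linux ...
    pbsnodes -c node07   # clear the offline flag so jobs can run again
    pbsnodes -l          # list nodes that are down or offline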

Re: [Beowulf] picking out a job scheduler

2007-01-02 Thread Chris Dagdigian
For what it's worth I'm a biased Grid Engine and Platform LSF user ... On Dec 29, 2006, at 11:40 AM, Nathan Moore wrote: I've presently set up a cluster of 5 AMD dual-core linux boxes for my students (at a small college). I've got MPICH running, shared NIS/NFS home directories etc. Aft
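For comparison with the PBS examples elsewhere in the thread, a minimal Grid Engine submission looks like this; the parallel environment name "mpich" and the slot count are assumptions that depend on how a site has configured SGE, and $TMPDIR/machines assumes the stock mpi PE scripts are in place:

    #!/bin/sh
    #$ -N test_job        # job name
    #$ -cwd               # run from the submission directory
    #$ -j y               # merge stdout and stderr
    #$ -pe mpich 4        # request 4 slots from the "mpich" parallel environment
    mpirun -np $NSLOTS -machinefile $TMPDIR/machines ./a.out

    # submit and monitor:
    #   qsub job.sh
    #   qstat -f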

Re: [Beowulf] Which distro for the cluster?

2007-01-02 Thread Vaclav Hanzl
> ...So I thought about building the HPC nodes (8+1 master) with Gentoo > > But then comes the administration and maintenance burden, which for me should be as small as possible, since my main task here is research ... so browsing the > net I found Rocks... Years ago, I installed a small cluster

Re: [Beowulf] Which distro for the cluster?

2007-01-02 Thread Jon Tegner
Robert G. Brown wrote: All of this takes time, time, time. And I cannot begin to describe my life to you, but time is what I just don't got to spare unless my life depends on it. That's the level of triage here -- staunch the spurting arteries first and apply CPR as necessary -- the mere compo

[Beowulf] picking a job scheduler

2007-01-02 Thread Nathan Moore
I've presently set up a cluster of 5 AMD dual-core linux boxes for my students (at a small college). I've got MPICH running, shared NIS/NFS home directories etc. After reading the MPICH installation guide and manual, I can't say I understand how to deploy MPICH for my students to use.
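In case it helps, with MPICH-1 (the ch_p4 device) "deploying for users" usually comes down to a shared install, a machines file, and passwordless rsh/ssh between nodes; a minimal sketch in which the hostnames and install path are invented:

    # /usr/local/mpich/share/machines.LINUX
    # one host per line; ":2" uses both cores of a dual-core node
    node01:2
    node02:2
    node03:2
    node04:2
    node05:2

    # a student then compiles and runs with:
    mpicc -o cpi cpi.c
    mpirun -np 10 -machinefile /usr/local/mpich/share/machines.LINUX ./cpi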

Re: [Beowulf] SW Giaga, what kind?

2007-01-02 Thread Ruhollah Moussavi Baygi
Hi everybody @ Beowulf! Thanks to everyone who helped answer my question about "SW Giga, what kind?". But originally my question was about the quality and reliability of the *LevelOne* brand of switch (unmanaged, Gigabit ports), given its fairly low price, on one hand, and the brand of

[Beowulf] mpich mpd ring on a network of 2 pcs

2007-01-02 Thread Manal Helal
Hi, I am trying to set up a small cluster incrementally, to run MPI programs only. I have 4 PCs with Fedora Core Linux, 2 with Core 5 and one with Core 6, and I will install the new one with Core 6. I installed MPICH2 on Fedora Core 6, and I can run mpd and the MPI programs on this machine fi
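For what it's worth, bringing up an mpd ring across two MPICH2 machines usually comes down to the same ~/.mpd.conf secret on both, an mpd.hosts file, and mpdboot; a sketch in which the hostnames and the secret word are placeholders:

    # on every machine, as the same user:
    echo "secretword=changeme" > ~/.mpd.conf
    chmod 600 ~/.mpd.conf

    # on the machine you launch from (the other host goes in mpd.hosts):
    echo "fc5host" > mpd.hosts
    mpdboot -n 2 -f mpd.hosts   # start mpd here plus on the listed host
    mpdtrace                    # should print both hostnames
    mpiexec -n 4 ./cpi          # run across the ring
    mpdallexit                  # tear the ring down when done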

[Beowulf] picking out a job scheduler

2007-01-02 Thread Nathan Moore
I've presently set up a cluster of 5 AMD dual-core linux boxes for my students (at a small college). I've got MPICH running, shared NIS/NFS home directories etc. After reading the MPICH installation guide and manual, I can't say I understand how to deploy MPICH for my students to use.

[Beowulf] running MPICH on AMD Opteron Dual Core Processor Cluster( 72 Cpu's)

2007-01-02 Thread Vadivelan Rathinasabapathy
Dear all, We have a problem running applications that are compiled with MPICH. Our setup is a 16-node, 72-CPU AMD Opteron cluster with Rocks 4.1.2 and RHEL 4.0 update 4 installed on it. We are trying to run a benchmark with the MPICH that came along with the Rocks installation. The run st
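Without the rest of the error output it is hard to say what is failing, but for reference, a run with the MPICH bundled by a Rocks 4.x front end is typically launched along these lines; the /opt/mpich/gnu path and the compute-0-N names are assumptions about a default Rocks layout, so check `which mpicc` locally:

    # machines file: one compute node per line (repeat a name to place
    # more than one process on that node)
    cat > machines <<EOF
    compute-0-0
    compute-0-0
    compute-0-1
    compute-0-1
    EOF

    /opt/mpich/gnu/bin/mpicc -o cpi cpi.c
    /opt/mpich/gnu/bin/mpirun -np 4 -machinefile machines ./cpi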

FW: [Beowulf] Which distro for the cluster?

2007-01-02 Thread Cunningham, Dave
I notice that Scyld is notable by its absence from this discussion. Is that due to cost, or bad/no experience, or other factors? There is a lot of interest in it around my company lately. Dave Cunningham -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf O