Hi folks:
Short OT break: http://code.google.com/p/mpihmmer/ is an MPI
implementation of HMMer 2.3.2.
Back to your regularly scheduled cluster.
Joe
--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics LLC,
email: [EMAIL PROTECTED]
web : http://www.scalableinformatics.com
phone: +1
On Wednesday 03 January 2007 08:06, Chris Dagdigian wrote:
> Both should be fine, although if you are considering *PBS you should
> look at both Torque (a fork of OpenPBS, I think)
That's correct; it (and ANU-PBS, another fork) seem to be the de facto queuing
systems in the state and national HPC
On Tue, Jan 02, 2007 at 06:44:50PM -0500, Robert G. Brown wrote:
>
> One of the bitches that I and many others have about all of the
> alternatives is that they are too damn complicated. Many sites -- I
> won't say most but many -- have very, very simple needs for a
> scheduler/queuing system. N
On Thu, 28 Dec 2006, Cunningham, Dave wrote:
I notice that Scyld is notable by its absence from this discussion. Is
that due to cost, or bad/no experience, or other factors? There is a
lot of interest in it around my company lately.
Scyld is a fine choice for a cluster, but not usually for
On Tue, 2 Jan 2007, Chris Dagdigian wrote:
(3) It's likely that in the future I'll have part-time access to another
cluster of dual-boot (XP/Linux) machines. The machines will default to
booting to Linux, but will occasionally (5-20 hours a week) be used as
Windows workstations by a console us
For what it's worth I'm a biased Grid Engine and Platform LSF user ...
On Dec 29, 2006, at 11:40 AM, Nathan Moore wrote:
I've presently set up a cluster of 5 AMD dual-core linux boxes for
my students (at a small college). I've got MPICH running, shared
NIS/NFS home directories etc. Aft
> ...So I thought about building the HPC nodes (8+1 master) with Gentoo
>
> But then comes the administration and maintenance burden, which for me
> should be as small as possible, since my main task here is research ... so browsing the
> net I found Rocks...
Years ago, I installed a small cluster
Robert G. Brown wrote:
All of this takes time, time, time. And I cannot begin to describe my
life to you, but time is what I just don't got to spare unless my life
depends on it. That's the level of triage here -- staunch the spurting
arteries first and apply CPR as necessary -- the mere compo
I've presently set up a cluster of 5 AMD dual-core linux boxes for
my students (at a small college). I've got MPICH running, shared
NIS/NFS home directories etc. After reading the MPICH installation
guide and manual, I can't say I understand how to deploy MPICH for my
students to use.
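For the "deploy MPICH for my students to use" question, the usual first step is a tiny test program each student can compile with mpicc and launch with mpirun or mpiexec. A minimal sketch in C; the file name hello_mpi.c is just an illustrative choice, not something from the MPICH docs:

/* hello_mpi.c: minimal MPICH smoke test (illustrative name). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, namelen;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which process am I?  */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many in the job? */
    MPI_Get_processor_name(name, &namelen); /* which node am I on?  */

    printf("Hello from rank %d of %d on %s\n", rank, size, name);

    MPI_Finalize();
    return 0;
}

Compile with "mpicc hello_mpi.c -o hello_mpi" and launch with something like "mpirun -np 10 ./hello_mpi" (classic MPICH with a machines file) or "mpiexec -n 10 ./hello_mpi" (MPICH2 once the mpd ring is up); the exact launch command depends on which MPICH and process manager is installed on the cluster.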
Hi everybody @ Beowulf!
Thanks to everyone who helped answer my question about "SW GIGA, What
kind?".
But originally my question was about the quality and reliability of the
*LevelOne* brand of switch (unmanaged, Gigabit ports), in comparison to its
fairly low price, on one hand, and the brand of
Hi
I am trying to set up a small cluster incrementally, to run MPI programs
only. I have 4 PCs with Linux Fedora Core: 2 with Core 5, one with
Core 6, and I will install the new one with Core 6.
I installed MPICH2 on Fedora Core 6, and I can run mpd and the MPI
programs on this machine fi
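Once mpd is running on more than one of those Fedora boxes, a quick way to confirm the machines can really talk to each other (rather than just launch processes) is to pass a token around a ring of ranks. A rough sketch in C, assuming MPICH2 with an existing mpd ring; ring_check.c is an invented name:

/* ring_check.c: pass a token around all ranks to verify that
   processes on different hosts can communicate (illustrative). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, token;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size < 2) {
        if (rank == 0)
            printf("Run with at least 2 processes to test communication.\n");
    } else if (rank == 0) {
        /* rank 0 starts the token and waits for it to come back */
        token = 42;
        MPI_Send(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        MPI_Recv(&token, 1, MPI_INT, size - 1, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("Token made it around all %d ranks\n", size);
    } else {
        /* everyone else receives from the left and passes to the right */
        MPI_Recv(&token, 1, MPI_INT, rank - 1, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        MPI_Send(&token, 1, MPI_INT, (rank + 1) % size, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}

Something like "mpiexec -n 4 ./ring_check", with one process per machine in the mpd ring, should only print the "made it around" line if every hop succeeds; mixing Core 5 and Core 6 boxes generally works as long as the same MPICH2 version is installed on all of them.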
Dear all
We have a problem running applications that are compiled with MPICH. Our
setup is a 16-node, 72-CPU AMD Opteron cluster with Rocks 4.1.2 and
RHEL 4.0 update 4 installed on it.
We are trying to run a benchmark with the MPICH that came along with the
Rocks installation. The run st
I notice that Scyld is notable by its absence from this discussion. Is
that due to cost, or bad/no experience, or other factors? There is a
lot of interest in it around my company lately.
Dave Cunningham