All, just wanted to inform the community that I've just posted a public
Google calendar of events for SC07. I plan on updating it with the
various public parties and gatherings during the week of Nov 10-16 out
in Reno. [I always seem to miss the really fun events...]. I thought
it might be easier ...
On Mon, 8 Oct 2007, Chris Samuel wrote:
> If I then run 2 x 4 CPU jobs of the *same* problem, they all run at
> 50% CPU.
With big thanks to Mark Hahn, this problem is solved. Infiniband is
exonerated; it was the MPI stack that was the problem!
Mark suggested that this sounded like a CPU affinity ...
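For anyone chasing the same symptom, a minimal sketch of how to check whether two jobs have been pinned onto the same cores. The taskset and /proc checks are standard Linux; the Open MPI parameter at the end is an assumption, since the thread doesn't say which MPI stack was in use, and my_mpi_app is a placeholder name.

  # Show which cores each rank of a running job is allowed to use.
  for pid in $(pgrep my_mpi_app); do
      grep Cpus_allowed_list /proc/$pid/status   # e.g. "0-3" for both jobs = trouble
  done

  # Query or change the binding of one process with taskset.
  taskset -c -p <pid>           # print the current CPU list
  taskset -c -p 4-7 <pid>       # move this process onto cores 4-7

  # If the stack is Open MPI 1.2.x (assumption), its own pinning can be
  # turned off so two 4-way jobs stop piling onto cores 0-3:
  mpirun --mca mpi_paffinity_alone 0 -np 4 ./my_mpi_app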
--- Tony Travis <[EMAIL PROTECTED]> wrote:
> We support VNC logins via SSH, and use lots of desktop applications.
> I realise this influences my view about what is the 'best'
> distribution, and why the package manager is so important. This is
> a small (92 node) cluster, not 'BIG' iron
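As a side note for anyone wanting to replicate that setup, a minimal sketch of a VNC login tunnelled over SSH; the host name and display number are placeholders, and the commands are the stock vncserver/ssh/vncviewer ones rather than anything specific to Tony's cluster.

  # On the cluster head node: start a VNC desktop on display :1 (TCP port 5901).
  vncserver :1

  # On the user's workstation: forward the VNC port through SSH, then
  # point the viewer at the local end of the tunnel.
  ssh -L 5901:localhost:5901 user@cluster.example.org
  vncviewer localhost:1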
Robert G. Brown wrote:
[...]
It is worth noting that (while yes, up2date sucks and has always sucked)
yum in FC 7 is a far, far cry from yum in RH 9. Dependency hell is
always a bad thing, but very, very few people have experienced it with
yum since maybe FC 4 or 5, if not earlier.
Fair point
Jon Tegner wrote:
Tony Travis wrote:
I also prefer Debian-based distros and still run the openMosix kernel
under an Ubuntu 6.06.1 LTS server installation on our Beowulf cluster.
What I like about APT (the Debian package manager) is the dependency
checking and conflict resolution capabilities ...
Mark Hahn wrote:
>> What I like about APT (the Debian package manager) is the dependency
>> checking and conflict resolution capabilities of "aptitude", which is
>> more robust than ...
>
> I'm curious - how does a conflict happen, and how is it resolved?
> I guess that this must have to do with packages ...
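As background for Mark's question, a minimal sketch of how a conflict shows up on the Debian side and how the tools handle it. The package names are placeholders; the commands are standard dpkg/APT usage.

  # A package can declare in its control metadata that it cannot coexist
  # with another package:
  apt-cache show some-mta | grep -i ^Conflicts
  #   Conflicts: other-mta

  # Simulate the install to see how apt-get would resolve it (usually by
  # removing the conflicting package):
  apt-get -s install some-mta

  # aptitude does the same resolution, but when the dependencies cannot
  # all be satisfied it proposes alternative solutions (remove, hold, or
  # keep back packages) and lets you accept or reject each one:
  aptitude install some-mta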
On Mon, 8 Oct 2007, Buccaneer for Hire. wrote:
My personal favorite? My laptop runs Fedora 7.
Yeah, mine too...;-)
My own experience regarding back vs forward porting --
In many cases one simply cannot backport, because the libraries you need
aren't there and ain't a-gonna be there unless you ...
"advantages". There is a narrow line between stability and stagnation,
and you have to figure out which side of that line your cluster will
fall on. Specifically, the fact that CentOS/RHEL is frozen for two-year
intervals has two disadvantages for some people:
I think it's wise to always assume ...
On Mon, 8 Oct 2007, Mike Davis wrote:
Robert G. Brown wrote:
On Mon, 8 Oct 2007, Mike Davis wrote:
My experience is similar to Bill's. We've been using CentOS 3 and 4 for the
past few years on our larger clusters. It is a good choice for stability,
good performance, and, since it is RH, for SW compatibility ...
When you have spent multi-millions of dollars writing
and maintaining internal code, one's options become
limited. Add to that the fact that we are in the
middle of a software/technique morph (as I have
mentioned in other posts) and you find you have to
make trade-offs.
For most of our cluster we
Buccaneer for Hire. wrote:
*Sigh* The best distro is the one that gets the most
of YOUR work done in a given amount of time.
... without you pulling out your remaining hair (for we the follicly
challenged/diminished) in order to be able to start doing your work in
the first place.
Distrib
Buccaneer for Hire. wrote:
You should use what works best for you.
But, building software on RHEL/CentOS is way more
difficult for the most part than building software
under Fedora. That's the difference between ~1200
programs and thousands of programs in a distro.
It might be a problem
--- Gerry Creager <[EMAIL PROTECTED]> wrote:
> Buccaneer for Hire. wrote:
> > --- Mike Davis <[EMAIL PROTECTED]> wrote:
> >
> >> I don't see this as a problem in a production cluster. The fact is
> >> that I've been doing this stuff for a little over two decades and I
> >> can build ...
Buccaneer for Hire. wrote:
--- Mike Davis <[EMAIL PROTECTED]> wrote:
I don't see this as a problem in a production cluster. The fact is that
I've been doing this stuff for a little over two decades and I can build
anything that I need for an application. For me a manual library build
for CentOS 3 is easier ...
--- Mike Davis <[EMAIL PROTECTED]> wrote:
> I don't see this as a problem in a production cluster. The fact is
> that I've been doing this stuff for a little over two decades and I
> can build anything that I need for an application. For me a manual
> library build for CentOS 3 is easier ...
Mike Davis wrote:
Robert G. Brown wrote:
On Mon, 8 Oct 2007, Mike Davis wrote:
My experience is similar to Bill's. We've been using CentOS 3 and 4 for
the past few years on our larger clusters. It is a good choice for
stability, good performance, and, since it is RH, for SW compatibility.
The only ...
--- Greg Lindahl <[EMAIL PROTECTED]> wrote:
> On Mon, Oct 08, 2007 at 08:48:55AM -0500, Barnet Wagman wrote:
>
> > Does anyone use CentOS on Beowulf nodes? Of course CentOS is really
> > just Red Hat, but many people prefer it for use on servers.
>
> From what I can tell, CentOS is the ...
Robert G. Brown wrote:
On Mon, 8 Oct 2007, Mike Davis wrote:
My experience is similar to Bill's. We've been using CentOS 3 and 4 for
the past few years on our larger clusters. It is a good choice for
stability, good performance, and, since it is RH, for SW compatibility.
The only thing I'd comment ...
On Mon, 8 Oct 2007, Mark Hahn wrote:
Does anyone use CentOS on Beowulf nodes? Of course CentOS is really just
Red Hat, but many people prefer it for use on servers.
We have several sites using Scientific Linux, which is along the same lines
as CentOS.
I was surprised how very much like CentOS ...
On Mon, 8 Oct 2007, Tony Travis wrote:
What I like about APT (the Debian package manager) is the dependency checking
and conflict resolution capabilities of "aptitude", which is more robust than
the older "apt-get". I previously ran Red Hat 5.3->9 and I've used both
"up2date" and "yum". Neithe
It's almost identical to CentOS, and the idea is to knock off the
nameplate to allow non-proprietary distribution of the stable RHEL stuff.
gc
Mark Hahn wrote:
Does anyone use CentOS on Beowulf nodes? Of course CentOS is really
just Red Hat, but many people prefer it for use on servers.
We
On Mon, 8 Oct 2007, Mike Davis wrote:
My experience is similar to Bill's. We've been using CentOS 3 and 4 for the past
few years on our larger clusters. It is a good choice for stability, good
performance, and, since it is RH, for SW compatibility.
The only thing I'd comment on that is negative about ...
Mark Hahn wrote:
up-to-date. from a quick glance at the SL-5.0 readme, the number
of customizations is quite small, so I do wonder what the point is.
(_not_ meant as a criticism!).
SL exists to populate the huge data centres at CERN and Fermilab,
and as a consequence many, many HEP groups have ...
Does anyone use CentOS on Beowulf nodes? Of course CentOS is really just
Red Hat, but many people prefer it for use on servers.
We have several sites using Scientific Linux, which is along the same lines
as CentOS.
I was surprised how very much like centos - I had the impression
SL was more
On Mon, Oct 08, 2007 at 08:48:55AM -0500, Barnet Wagman wrote:
> Does anyone use CentOS on Beowulf nodes? Of course CentOS is really
> just Red Hat, but many people prefer it for use on servers.
From what I can tell, CentOS is the #1 distro for clusters. Most folks
are familiar with Red Hat-style ...
Barnet Wagman wrote:
Does anyone use CentOS on Beowulf nodes? Of course CentOS is really
just Red Hat, but many people prefer it for use on servers.
We have several sites using Scientific Linux, which is along the same
lines as CentOS.
Tim Cutts wrote:
[...]
You are lighting the blue touchpaper. Basically anything will work.
There's much less difference between Linux distributions than people
think. They basically differ in the way you install packages, and in
some cases in the locations of configuration files. But that's ...
> -----Original Message-----
> [mailto:[EMAIL PROTECTED] On Behalf Of Chris Samuel
> Sent: Sunday, October 07, 2007 10:25 PM
> To: beowulf@beowulf.org
> Subject: [Beowulf] Odd Infiniband scaling behaviour
>
> Hi fellow Beowulfers..
>
> We're currently building an Opteron based IB cluster, and are
On 8 Oct 2007, at 4:21 pm, Mark Hahn wrote:
the distribution has nothing to do with your hardware.
just choose a distro that you are comfortable with - there cannot
possibly be any general answer, since all extremes of personal/
professional preference are represented.
personally, I choose
Does anyone use CentOS on Beowulf nodes? Of course CentOS is really just
Red Hat, but many people prefer it for use on servers.
sure. my organization is using centos wherever possible. we have some
history with RH-like distros, and a large installed base of HP's XC,
which is RHEL-based. wh
with 8 nodes (8 x CPUs, Intel 6600 quad-core, 8MB cache, 8GB RAM) and
a Dell server as the master node (2 x quad-core Xeon 1.6GHz, 4TB hard disk,
18GB RAM).
Which Linux distribution would be ideal for our case?
the distribution has nothing to do with your hardware.
just choose a distro that you are comfortable ...
My experience is similar to Bill's. We've been using CentOS 3 and 4 for the
past few years on our larger clusters. It is a good choice for
stability, good performance, and, since it is RH, for SW compatibility.
Mike Davis
Bill Rankin wrote:
Yes, we use it with good effect on our 500+ node cluster
Yes, we use it with good effect on our 500+ node cluster at Duke.
It's currently running CentOS 4. I think that the only issue is that
some of our developers require newer releases of a couple of packages,
but it's easy enough to maintain a local yum repository with those
packages.
It's be...
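A minimal sketch of the local-repository trick Bill describes; the paths, repo name, and package names are placeholders, while createrepo and the .repo file format are the standard yum mechanisms.

  # On a web server reachable from the nodes: collect the newer RPMs and
  # generate repository metadata for them.
  mkdir -p /var/www/html/local-rpms
  cp newer-package-*.rpm /var/www/html/local-rpms/
  createrepo /var/www/html/local-rpms

  # On each node, add /etc/yum.repos.d/local.repo containing:
  #   [local]
  #   name=Local newer packages
  #   baseurl=http://master/local-rpms
  #   enabled=1
  #   gpgcheck=0

  # After that the newer packages install like any other:
  yum install newer-package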
Does anyone use CentOS on Beowulf nodes? Of course CentOS is really
just Red Hat, but many people prefer it for use on servers.
I agree with Jacob. You're asking a very broad question and you need to
narrow it down by determining your requirements. What distributions have
you worked with already?
I've had experience with several solutions, but I settled on FAI and
Debian.
BTW it's good to see another fellow countryman ...
On 8 Oct 2007, at 1:51 pm, Seyed Abouzar Najafi Shoshtari wrote:
Dear Beowulf experts,
we are planning to build a Beowulf cluster
with 8 nodes (8 x CPUs, Intel 6600 quad-core, 8MB cache, 8GB RAM) and
a Dell server as the master node (2 x quad-core Xeon 1.6GHz, 4TB hard
disk, 18GB RAM).
Which Linux dist...
On Mon, Oct 08, 2007 at 05:21:08PM +0430, Seyed Abouzar Najafi Shoshtari wrote:
> Dear Beowulf experts,
>
> we are planning to build a Beowulf cluster
> with 8 nodes (8 x CPUs, Intel 6600 quad-core, 8MB cache, 8GB RAM) and
> a Dell server as the master node (2 x quad-core Xeon 1.6GHz, 4TB hard
> disk, 18GB RA...
Fedora Core 6 is where I'd start today. SuSE 10.x is a very good second
choice. We've also tried ROCKS and haven't been too impressed. ROCKS
installs easily and replicates to the nodes, but FC6 and kickstart are
just too easy and offer a bit more usability in our experience.
gerry
Seyed A
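For anyone new to the FC6-plus-kickstart route Gerry mentions, a minimal sketch of an unattended node install. The URL, password, time zone, and package list are placeholders; the directives themselves are standard Anaconda kickstart syntax.

  # node-ks.cfg -- point the installer at it with "ks=http://master/node-ks.cfg"
  install
  url --url http://master/fc6/os
  lang en_US.UTF-8
  keyboard us
  rootpw changeme
  timezone America/New_York
  clearpart --all --initlabel
  autopart
  reboot

  %packages
  @base
  openssh-server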
Dear Beowulf experts,
we are planning to build a Beowulf cluster
with 8 nodes (8 x CPUs, Intel 6600 quad-core, 8MB cache, 8GB RAM) and
a Dell server as the master node (2 x quad-core Xeon 1.6GHz, 4TB hard disk,
18GB RAM).
Which Linux distribution would be ideal for our case?
Thanks in advance for your help.
Let's see... what was the printer definition for that Centronics
dot-matrix lump in the store room?
Bill Rankin wrote:
On Oct 7, 2007, at 6:42 PM, Greg Lindahl wrote:
On Sun, Oct 07, 2007 at 03:10:30PM -0700, Greg Lindahl wrote:
Hm, elm doesn't compile
anymore, I wonder if anyone will notice if I just delete it?
On Oct 8, 2007, at 3:38 AM, Geoff Galitz wrote:
I would argue that the situation you describe is a result of that
particular RAID adapter: that particular make and model is just
inappropriate (no offense)
None taken.
I should have been clearer on the point I was trying to make.
First the
On Oct 7, 2007, at 6:42 PM, Greg Lindahl wrote:
On Sun, Oct 07, 2007 at 03:10:30PM -0700, Greg Lindahl wrote:
Hm, elm doesn't compile
anymore, I wonder if anyone will notice if I just delete it?
Of course, my CEO noticed about 10 minutes later!
I told him to use a real mailer, like mutt. ;-)
I would argue that the situation you describe is a result of that
particular RAID adapter: that particular make and model is just
inappropriate (no offense)
I have certainly seen lots of RAID arrays where multiple drives die at
approx the same time, but I find that usually:
- multiple drives