That brings me to another important question. Any hints on speccing
the head-node?
I think you imply a single, central admin/master/head node. This is
a very bad idea. First, it's generally a bad idea to have users on
a fileserver. Next, it's best to keep cluster infrastructure
(monitoring
Rahul Nabar wrote:
That brings me to another important question. Any hints on speccing
the head-node? Especially the kind of storage I put in on the head
For a cluster of this size, divide and conquer: a head node to handle
cluster admin, and separate login nodes for users to handle builds.
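(As a hedged illustration of one way to keep ordinary users off the
admin/head node -- the group names below are made up -- SSH on the head
node can be limited to staff in /etc/ssh/sshd_config:

    # head node only: admins log in here, everyone else uses the login nodes
    AllowGroups root sysadmin

while the login nodes accept all users and carry the compilers and
job-submission tools.)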
On Wed, 2 Sep 2009 at 10:29pm, Rahul Nabar wrote
That brings me to another important question. Any hints on speccing
the head-node? Especially the kind of storage I put in on the head
node. I need around 1 Terabyte of storage. In the past I've used
RAID5+SAS in the server. Mostly for running job
Hi,
Please see below.
On Wed, Sep 2, 2009 at 6:57 PM, Mark Hahn wrote:
> On Wed, 2 Sep 2009, amjad ali wrote:
>
> Hi All,
>> I have a 4-node (4 Xeon 3085 CPUs, 8 cores total) Beowulf cluster on
>> ROCKS-5 with Gigabit Ethernet. I tested runs of a 1D CFD code, both
>> serial and parallel, on it.
>> Please reply to the following:
On Wed, Sep 2, 2009 at 5:41 PM, Mark Hahn wrote:
>> allows global cross mounts from ~300 compute nodes). There is a variety
>> of codes we run; some latency sensitive and others bandwidth
>> sensitive.
>
> if you're sensitive either way, you're going to be unhappy with Gb.
I am still testing sensit
allows global cross mounts from ~300 compute nodes). There is a variety
of codes we run; some latency sensitive and others bandwidth
sensitive.
if you're sensitive either way, you're going to be unhappy with Gb.
IMO, you'd be best to configure your scheduler to never spread an MPI
job across switches.
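(A minimal sketch of one way to do that with Torque/PBS-style node
properties; the node names, property names, and counts are made up for
illustration. Tag each node in $TORQUE_HOME/server_priv/nodes with a
property naming its switch:

    node001 np=8 switch1
    node002 np=8 switch1
    ...
    node025 np=8 switch2

then pin a job to a single switch at submission time:

    qsub -l nodes=16:ppn=8:switch1 job.sh

Slurm's topology/tree plugin or Maui node sets can achieve a similar
effect.)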
What are good choices for a switch in a Beowulf setup currently? The
last time we went in for a Dell PowerConnect and later realized that
this was pretty basic.
I only have gigabit on the compute nodes. So no Infiniband / Myrinet
etc. issues. The point is that I will have about 300 compute nodes.
On Wed, Sep 2, 2009 at 10:07 AM, Kilian CAVALOTTI wrote:
>
> You could also consider H+4 on-site intervention for critical parts,
> like switches, master nodes, or whatever piece of hardware your whole
> cluster operation depends on.
Good idea! I will do that for some of the critical, non-replicated parts.
Greg Lindahl wrote:
> As for people's vibrations comments: they own a bunch of them and they
> work...
For now, I've seen similar setups last 6-12 months before a drive drops, then
a rebuild triggers drop #2.
> but that is only a single point of evidence and not a history
> of working with a vari
On Wed, Sep 02, 2009 at 02:43:46PM -0300, Bruno Coutinho wrote:
> According to this site, the main difference between Seagate desktop and ES
> series is that the latter are more vibration resistant.
> http://techreport.com/articles.x/10748
This is interesting -- a non-firmware difference between the desktop and ES drives.
2009/9/2 Eugen Leitl
> On Tue, Sep 01, 2009 at 04:28:10PM -0700, Bill Broadley wrote:
>
> > I'm very curious to hear how they are in production. I've had vibration of
>
> My thoughts exactly.
>
> > large sets of drives basically render the consumer drives useless. Timeouts,
> > highly variable performance, drives constantly dropping
Hi Rahul,
On Tue, Sep 1, 2009 at 6:03 PM, Rahul Nabar wrote:
> In the past we've stuck to standard vendor contracts; something like:
> "1 year warranty; 2 year extended warranty. Next Business Day on
> site."
You could also consider H+4 on-site intervention for critical parts,
like switches, master nodes, or whatever piece of hardware your whole
cluster operation depends on.
On Wed, 2 Sep 2009, amjad ali wrote:
Hi All,
I have a 4-node (4 Xeon 3085 CPUs, 8 cores total) Beowulf cluster on ROCKS-5
with Gigabit Ethernet. I tested runs of a 1D CFD code, both serial and
parallel, on it.
Please reply to the following:
1) When I run my serial code on the dual-core head node (or parallel
Hi All,
I have a 4-node (4 Xeon 3085 CPUs, 8 cores total) Beowulf cluster on ROCKS-5
with Gigabit Ethernet. I tested runs of a 1D CFD code, both serial and
parallel, on it.
Please reply to the following:
1) When I run my serial code on the dual-core head node (or parallel code
with -np 1), it gives results in a
Bill Broadley wrote:
The lid screws down to apply pressure to a piece of foam. The foam presses
down on 45 drives. 5 drives (1.4 lb each) sit on each port multiplier. Six
nylon mounts support each multiplier, carrying 7 pounds of drives. Seems like
the damping from the nylon mounts would be minimal.
On Wednesday 02 September 2009 11:10:26 Bill Broadley wrote:
>
> Anyone familiar with what the Sun Thumper does to minimize vibration?
Each disk is contained in a cage, and each cage is secured in its slot. Pretty
standard layout, but then I've never really checked whether there were
vibrational "hot" spots.
On Wednesday 02 September 2009 11:23:46 Eugen Leitl wrote:
> On Tue, Sep 01, 2009 at 04:28:10PM -0700, Bill Broadley wrote:
> > I'm very curious to hear how they are in production. I've had vibration of
>
> My thoughts exactly.
>
> > large sets of drives basically render the consumer drives useless.
Eugen Leitl wrote:
> On Tue, Sep 01, 2009 at 04:28:10PM -0700, Bill Broadley wrote:
>
>> I'm very curious to hear how they are in production. I've had vibration of
>
> My thoughts exactly.
The lid screws down to apply pressure to a piece of foam. The foam presses
down on 45 drives. 5 drives (1.4 lb each) sit on each port multiplier.
On Tue, Sep 01, 2009 at 04:28:10PM -0700, Bill Broadley wrote:
> I'm very curious to hear how they are in production. I've had vibration of
My thoughts exactly.
> large sets of drives basically render the consumer drives useless. Timeouts,
> highly variable performance, drives constantly dropping