On Wed, 7 Mar 2007, Juan Camilo Hernandez wrote:
> I would like to know which server has the best performance for HPC systems
> between the Dell PowerEdge 1950 (Xeon) and the 1435SC (Opteron)
If you are fortunate enough to have only a couple of applications you care
about, then get one of each on loan
On Fri, 9 Mar 2007, Kozin, I (Igor) wrote:
> How many NFS daemons people are using on a dedicated
> NFS server?
We use 128 and I've since found out that, coincidentally, that is the same
number which SGI use on their NAS head units (of which we have none).
YMMV. :-)
cheers!
Chris
--
Christop
As Robert Brown (and others) so eloquently said: nothing is better than your
actual application with your actual input files in an actual production run.
Results vary widely, and any kind of general statement could easily be proven
significantly wrong in your specific case.
Additional things
Andrew Robbie (GMail) wrote:
Hi,
I am building a small (~16) node cluster with an IB interconnect. I need
to decide whether I will buy a cheaper, dumb switch and run OpenSM, or
get a more expensive switch with a built in subnet manager. The largest
this system would ever grow is 32 nodes (tw
Today, we get really good results setting the threads
to 64.
Mark Hahn wrote:
basically, 13 GB/s for a 2x2 opteron/2.8 system (peak flops would be
2*2*2*2.8 = 22.4), so you need about 1.7 flops per byte to be happy.
Mmmm ... to my eye the Triad needs 3 x 8 bytes = 24 bytes per 2 FLOP or
12 bytes per 1 FLOP ... FLOPs per byte se
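(A minimal sketch of that arithmetic in Python; the 13 GB/s STREAM figure and
the 2-socket dual-core 2.8 GHz Opteron are the numbers quoted above, the rest
follows from them.)

# Machine balance vs. STREAM Triad demand, using the thread's figures.
sockets, cores_per_socket, flops_per_cycle, ghz = 2, 2, 2, 2.8
peak_gflops = sockets * cores_per_socket * flops_per_cycle * ghz   # 22.4 GFLOPS
stream_gb_s = 13.0                                                  # measured Triad bandwidth

machine_balance = peak_gflops / stream_gb_s                         # ~1.7 flops per byte
# Triad: a[i] = b[i] + q*c[i] moves 3 doubles (24 bytes) per 2 flops
triad_bytes_per_flop = (3 * 8) / 2                                  # 12 bytes per flop

print(f"machine balance: {machine_balance:.2f} flops/byte")
print(f"Triad wants {triad_bytes_per_flop:.0f} bytes/flop, so it runs at "
      f"~{100 / (machine_balance * triad_bytes_per_flop):.0f}% of peak")

In other words, a purely Triad-like kernel on that box sees only a few percent
of peak flops, which is exactly what the balance column on the STREAM page is
getting at.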
Mark,
Thanks, that led me (with a bit of wandering) to e.g.
http://www.cs.virginia.edu/stream/top20/Balance.html.
My immediate concern is for an app that is worse than embarrassingly
parallel; it can't (currently) trade memory for time, and can't really use
any memory or network effectively, by the
Great, thanks. That was clear and the takeaway is that I should pay attention
to the number of memory channels per core (which may be less than 1.0)
I think the takeaway is a bit more acute: if your code is cache-friendly,
simply pay attention to cores * clock * flops/cycle.
otherwise (ie, when
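(To make the cache-friendly case concrete, a back-of-the-envelope comparison of
the two Dell boxes under discussion; the flops/cycle figures, 4 for Core 2 with
packed SSE and 2 for the K8 Opteron, are my assumption, not something stated in
the thread.)

# Peak double-precision flops = cores * clock * flops/cycle.
# Assumed (not from the thread): Core 2 (Xeon 5130) does 4 DP flops/cycle,
# K8 (Opteron 2216) does 2.
configs = {
    "PowerEdge 1950, 2x dual-core Xeon 5130 @ 2.0 GHz":      (4, 2.0, 4),
    "PowerEdge 1435SC, 2x dual-core Opteron 2216 @ 2.4 GHz":  (4, 2.4, 2),
}
for name, (cores, ghz, fpc) in configs.items():
    print(f"{name}: ~{cores * ghz * fpc:.1f} peak GFLOPS")

Peak favours the Core 2 box; whether real codes see that depends on the
memory-channels-per-core point above.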
Has anyone tried using the LinkSys NSLU2 (aka, the "slug") as a
server in a small demo cluster?
(http://en.wikipedia.org/wiki/NSLU2 for more info)
Seems that people get 5 MB/sec sorts of speeds with NFS or FTP. While
no ball of fire speed-wise, it is an inexpensive widget that might be
handy fo
I've used this as a baseline for how many daemons to start with:
(num of mounts * num of nodes) / 20
I use nnodes/2 <= ndaemons <= nnodes. I don't see that extra
NFS kernel threads are terribly expensive, though I also haven't
tried to measure the effect.
regards, mark hahn.
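(The two rules of thumb in this sub-thread give rather different answers as the
cluster grows; a tiny illustrative sketch, with the node and mount counts made
up.)

# Compare the two nfsd-count heuristics quoted above (hypothetical cluster).
def mounts_rule(num_mounts, num_nodes):
    """Bill's baseline: (number of mounts * number of nodes) / 20."""
    return (num_mounts * num_nodes) // 20

def nodes_rule(num_nodes):
    """Mark's range: nnodes/2 <= ndaemons <= nnodes."""
    return num_nodes // 2, num_nodes

nodes, mounts_per_node = 128, 2      # made-up figures
print("mounts rule:", mounts_rule(mounts_per_node, nodes), "daemons")      # 12
print("nodes rule :", nodes_rule(nodes), "daemons (lower, upper bound)")   # (64, 128)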
Poweredge 1435SC
Dual Core AMD Opteron 2216 2.4GHz
3GB RAM 667MHz, 2x512MB and 2x1GB Single Ranked DIMMs
Poweredge 1950
Dual Core Intel Xeon 5130 2.0Ghz
2GB 533MHz (4x512MB), Single Ranked DIMMs
in general, the opteron will probably have an advantage for memory-intensive
codes; the core2 start
> So, yes, clock-for-clock (and for my usage) Xeon 51xxs are
> faster than
> Opterons. But, if your code hits memory *really hard* (which
> that heart
> model does), then the multiple paths to memory available to
> the Opterons
> allow them to scale better.
Yes, and this is consistent with
Joshua,
Great, thanks. That was clear and the takeaway is that I should pay attention
to the number of memory channels per core (which may be less than 1.0)
besides the number of cores and the RAM/core.
What is the "ncpu" column in Table 1 (for example)? Does the 4 refer to 4
cores, and the 1 and
On Thu, 8 Mar 2007 at 11:33am, Peter St. John wrote
Those benchmarks are quite interesting and I wonder if I interpret them at
all correctly.
It would seem that the Intel's performance edge outstrips its advantage in clock
speed (1/6th faster, but ballpark 1/3 better performance?), so the question would be
perfor
Joshua,
Those benchmarks are quite interesting and I wonder if I interpret them at
all correctly.
It would seem that the Intel's performance edge outstrips its advantage in clock
speed (1/6th faster, but ballpark 1/3 better performance?), so the question would be
performance gain per dollar cost (which is fine); ho
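(Peter's point can be restated with a little arithmetic; the 1/6 and 1/3 ratios
below are just his rough figures, so treat the result as illustrative.)

# If the Intel's clock is ~1/6 higher but observed performance is ~1/3 higher,
# the rest of the gain has to come from per-clock efficiency (IPC, cache, ...).
clock_ratio = 1 + 1/6     # rough clock-speed advantage, per the message above
perf_ratio  = 1 + 1/3     # rough observed performance advantage, ditto
per_clock   = perf_ratio / clock_ratio
print(f"per-clock advantage: ~{100 * (per_clock - 1):.0f}%")   # about 14%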
On Wed, 7 Mar 2007, Olli-Pekka Lehto wrote:
I'm currently evaluating the possibility of building an ad-hoc cluster (aka.
flash mob) at a large computer hobbyist event using Linux live CDs. The
"cluster" would potentially feature well over a thousand personal computers
connected by a good GigE -
Igor,
Once upon a time there was a hardcoded limit on the number of threads an
nfsd could support. Twenty is the number I seem to recall but I haven't
researched this in a while.
I've used this as a baseline for how many daemons to start with:
(num of mounts * num of nodes) / 20
Bill
Kozin,
On Tue, 6 Mar 2007, Juan Camilo Hernandez wrote:
Hello..
I would like to know which server has the best performance for HPC systems
between the Dell PowerEdge 1950 (Xeon) and the 1435SC (Opteron). Please send me
suggestions...
Here are the complete specifications for both servers:
Poweredge 1435SC
On Mar 7, 2007, at 11:12 AM, Olli-Pekka Lehto wrote:
...
So, do you think this is a pipe dream or a feasible project?
Which path would you take to implement this?
Consider something embarrassingly parallel with a work-pool model.
Your assignment servers could be on stable machines, c
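(A minimal work-pool sketch in Python, purely illustrative: the port, the task
list and the function names are assumptions, not anything from the thread. One
stable assignment server hands out independent work units over XML-RPC; any
live-CD node that can reach it over GigE pulls a task, computes, and posts the
result back.)

import queue
from xmlrpc.server import SimpleXMLRPCServer

tasks = queue.Queue()
for chunk in range(1000):            # hypothetical independent work units
    tasks.put(chunk)
results = {}

def get_task():
    """Hand out a work unit, or -1 when the pool is drained."""
    try:
        return tasks.get_nowait()
    except queue.Empty:
        return -1

def put_result(chunk, value):
    """Collect a finished work unit from a worker."""
    results[chunk] = value
    return True

server = SimpleXMLRPCServer(("0.0.0.0", 8000), logRequests=False)
server.register_function(get_task)
server.register_function(put_result)
server.serve_forever()

A worker is then just a loop around xmlrpc.client.ServerProxy("http://server:8000")
calling get_task() until it returns -1.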
Hello!
I was looking at our NFS server performance recently
and was puzzled by the number of daemons it was
running - 33. It might be the default for Suse 10.1
but I am not sure. It's usually recommended to set the
number to a multiple of 8 with 32 being perhaps the
most popular. I've read tha
On Tue, 6 Mar 2007 at 12:20pm, Juan Camilo Hernandez wrote
I would like to know which server has the best performance for HPC systems
between the Dell PowerEdge 1950 (Xeon) and the 1435SC (Opteron). Please send me
suggestions...
Here are the complete specifications for both servers:
Poweredge 1435S
I'm currently evaluating the possibility of building an ad-hoc cluster
(aka. flash mob) at a large computer hobbyist event using Linux live
CDs. The "cluster" would potentially feature well over a thousand
personal computers connected by a good GigE network.
While thinking up ideas for potenti
Hi,
Well, it seems that this is a hot topic. I'm impressed by
the quality of the answers!
I think that since the disk server machine is not yet installed
it will be useful to do a few tests in advance.
My idea was to use this filesystem as /home, so there is going
to be a lot of traffic of sma
On Tue, Mar 06, 2007 at 12:06:07AM +1100, Andrew Robbie (GMail) wrote:
> Date: Tue, 6 Mar 2007 00:06:07 +1100
> From: "Andrew Robbie (GMail)" <[EMAIL PROTECTED]>
> To: beowulf@beowulf.org
> Subject: [Beowulf] IB switches: managed or not?
>
>
>Hi,
>I am building a small (~16) node cluster
Hi,
> I have a small (16 dual xeon machines) cluster. We are going to add
> an additional machine which is only going to serve a big filesystem via
> a gigabit interface.
>
> Does anybody know what is better for a cluster of this size, exporting the
> filesystem via NFS or use another alternativ
Hello..
I would like to know which server has the best performance for HPC systems
between the Dell PowerEdge 1950 (Xeon) and the 1435SC (Opteron). Please send me
suggestions...
Here are the complete specifications for both servers:
Poweredge 1435SC
Dual Core AMD Opteron 2216 2.4GHz
3GB RAM 667MHz,
Hi,
Andrew Robbie (GMail) schrieb:
> I am building a small (~16) node cluster with an IB interconnect. I need to
> decide whether I will buy a cheaper, dumb switch and run OpenSM, or get a
> more expensive switch with a built in subnet manager. The largest this
> system would ever grow is 32 nod
Andrew Robbie (GMail) wrote:
> Various vendors (integrators, not switch OEMs) have stated to me that
> managed switches are the go, and that OpenSM is (a) buggy, and (b)
> very time consuming to set up. But, a managed name brand switch seems
> to cost a lot more than a non-managed one using the Mel