On Thu, Feb 12, 2009 at 05:43:53AM +0100, Vincent Diepeveen wrote:
> Will it first handle all the megabyte sized packets, or give the quick
> short packet already 'in between' to our "logical core 42"?
Vincent,
It would help if you had paid attention earlier, when this question
was answered.
--
Hi Patrick,
Interesting to learn that you market Ethernet cards nowadays. You also
seem to possess some knowledge of other companies' switches. Congrats.
My faith in the switch and its crossbars is actually quite high; not so
much in the MPI cards, however.
Let's assume for now that I was spe
Hi Igor,
Igor Kozin wrote:
- Switch latency (btw, the data sheet says x86 inside);
AFAIK, it uses the 24-port Fulcrum chip, which has a latency of
~300 ns. The 48-port models use multiple crossbars in a Clos topology,
partially (S) or fully (SX) connected. I have never benchmarked the
48-port version.
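As a back-of-the-envelope check (the internal topology here is my
assumption, not vendor data), a two-tier Clos of 24-port crossbars at
~300 ns each puts a worst-case path across three crossbars:

    # Rough Clos latency estimate for a 48-port switch assumed to be
    # built from 24-port crossbars at ~300 ns each.
    CROSSBAR_NS = 300

    def clos_latency_ns(hops: int) -> int:
        """One-way switching latency for a path crossing `hops` crossbars."""
        return hops * CROSSBAR_NS

    print(clos_latency_ns(1))  # both ports on one leaf crossbar: 300 ns
    print(clos_latency_ns(3))  # leaf -> spine -> leaf worst case: 900 ns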
On Wed, 11 Feb 2009, Greg Lindahl wrote:
On Wed, Feb 11, 2009 at 05:19:11PM +0000, John Hearns wrote:
Or use the little metal tool which comes in a bag of cage nuts. Which
themselves have nice sharp edges ready to slice your fingers, of
course. But any big cluster demands blood sacrifice (*)
On Wed, Feb 11, 2009 at 12:57:01PM +, Igor Kozin wrote:
> - Switch latency (btw, the data sheet says x86 inside);
Since almost all of the "latency" is in the endpoints, the best way to
measure this is with 0, 1, 2 switches between 2 nodes. If your
measurements are accurate enough (look at th
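A minimal sketch of that differencing approach (the numbers below are
placeholders; `lat_us` would come from your own pingpong runs with 0,
1 and 2 switches in the path):

    # Hypothetical one-way pingpong latencies (microseconds) measured
    # with 0, 1 and 2 switches between the two nodes.
    lat_us = {0: 8.20, 1: 8.55, 2: 8.90}

    per_switch = lat_us[1] - lat_us[0]
    print(f"per-switch latency: {per_switch:.2f} us")

    # With 2 switches the added delay should be ~2x one switch; a large
    # residual means measurement noise swamps the switch contribution.
    residual = (lat_us[2] - lat_us[0]) - 2 * per_switch
    print(f"linearity residual: {residual:.3f} us")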
Vincent Diepeveen wrote:
All such switch latencies are at least a factor of 50-100 worse than
their one-way pingpong latency.
I think you are a bit confused about switch latencies.
There is the crossbar latency, which is the time it takes for a packet
to be decoded and routed to the right output port.
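To put the crossbar number next to Vincent's big-packet worry at the
top of the thread, serialization delay scales with frame size; this is
straight arithmetic, not vendor data:

    # Serialization delay: time to clock a frame onto a 10 Gb/s link.
    LINK_BPS = 10e9

    def serialization_us(frame_bytes: int) -> float:
        return frame_bytes * 8 / LINK_BPS * 1e6

    print(f"{serialization_us(64):.3f} us")    # minimum frame: ~0.05 us
    print(f"{serialization_us(9000):.3f} us")  # jumbo frame:   ~7.2 us
    # Next to a ~0.3 us crossbar, a large frame's serialization time
    # (plus any store-and-forward stage) is what delays packets queued
    # behind it, not the crossbar itself.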
On Wed, Feb 11, 2009 at 05:19:11PM +0000, John Hearns wrote:
> Or use the little metal tool which comes in a bag of cage nuts. Which
> themselves have nice sharp edges ready to slice your fingers, of
> course. But any big cluster demands blood sacrifice (*)
The guys I learned from use only screwdrivers.
On Wed, Feb 11, 2009 at 05:39:23PM +0100, Kilian CAVALOTTI wrote:
> On Monday 09 February 2009 21:37:23 David Mathog wrote:
> > The uber-pile is a bit of a straw man. I'm pretty sure that a 40U stack
> > of (typical) 1U or 2U servers would squish the one(s) on the bottom,
>
> Absolutely. At Stanford, I took part in decommissioning a cluster
On Feb 11, 2009, at 11:56 AM, Skylar Thompson wrote:
dan.kid...@quadrics.com wrote:
Kilian,
Well you shouldn't be using your bare fingers.
Everyone has their own preferred trick. I put a small straight
blade screwdriver in the hole, and then pop in the cage nut by
hand using the screwdriver as a 'shoehorn'
On Feb 11, 2009, at 7:57 AM, Igor Kozin wrote:
Hello everyone,
we are embarking on an evaluation of 10 GbE for HPC and I was wondering
if someone has already had experience with the Arista 7148SX 48-port
switch and/or NetXen cards. General pros and cons would be greatly
appreciated, and in particular
2009/2/11 :
> Kilian,
>
> Well you shouldn't be using your bare fingers.
> Everyone has their own preferred trick. I put a small straight blade
> screwdriver in the hole, and then pop in the cage nut by hand using the
> screwdriver as a 'shoehorn'
Or use the little metal tool which comes in a bag of cage nuts.
Kilian,
Well you shouldn't be using your bare fingers.
Everyone has their own preferred trick. I put a small straight blade
screwdriver in the hole, and then pop in the cage nut by hand using the
screwdriver as a 'shoehorn'
Daniel
dan.kid...@quadrics.com wrote:
> Kilian,
>
> Well you shouldn't be using your bare fingers.
> Everyone has their own preferred trick. I put a small straight blade
> screwdriver in the hole, and then pop in the cage nut by hand using the
> screwdriver as a 'shoehorn'
>
>
Our rack kit actually
An answer to my own question on Arista 7148SX latency, or rather an
upper estimate: there is a Mellanox white paper,
http://www.mellanox.com/pdf/whitepapers/wp_mellanox_en_Arista.pdf
which reports the "TCP latency using standard test suite" (netperf?)
as 7.36 us (Mellanox ConnectX EN -> Arista 7124S
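For reference, a sketch of how a TCP_RR-style number like that could be
reproduced; the command line and output parsing follow common netperf
conventions, but check them against your installed version, and
"peer-node" is a placeholder hostname:

    import subprocess

    # netperf TCP_RR reports request/response transactions per second;
    # one transaction is a full round trip, so one-way latency is
    # roughly 1 / (2 * rate). The parsing assumes the final column of
    # the last output line is the transaction rate.
    out = subprocess.run(
        ["netperf", "-H", "peer-node", "-t", "TCP_RR"],
        capture_output=True, text=True, check=True,
    ).stdout

    rate = float(out.strip().splitlines()[-1].split()[-1])
    print(f"one-way latency: {1e6 / (2 * rate):.2f} us")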
On Monday 09 February 2009 21:37:23 David Mathog wrote:
> The uber-pile is a bit of a straw man. I'm pretty sure that a 40U stack
> of (typical) 1U or 2U servers would squish the one(s) on the bottom,
Absolutely. At Stanford, I took part in decommissioning a cluster that
the one I administered replaced.
On Monday 09 February 2009 22:33:48 Greg Lindahl wrote:
> (After the first 100 cage nuts,
and about 3 boxes of Band-Aid...
Those cage nuts have such a tendency to slice through your finger pulp
that I always thought their use should be restricted by international
treaties. How cool is it to apply
Tom,
Thanks for your reply. As I explained in my original email, a 48-port
IB switch would be ideal because the jobs on these 36 nodes will mostly
be run locally within the 36-node complex. However, a 48-port IB switch
is too expensive, which is why I am considering alternative,
cost-effective solutions.
Hi Igor,
Igor Kozin wrote:
> we are embarking on an evaluation of 10 GbE for HPC and I was wondering
> if someone has already had experience with the Arista 7148SX 48-port
> switch and/or NetXen cards. General pros and cons would be greatly
> appreciated, and in particular
> - Switch latency (btw, the data sheet says x86 inside);
Hello everyone,
we are embarking on an evaluation of 10 GbE for HPC and I was wondering
if someone has already had experience with the Arista 7148SX 48-port
switch and/or NetXen cards. General pros and cons would be greatly
appreciated, and in particular
- Switch latency (btw, the data sheet says x86 inside);
Peter Kjellstrom wrote:
On Wednesday 11 February 2009, Eric Thibodeau wrote:
Tom Elken wrote:
Which profilers can
benefit from all this info?
We have found OProfile to be a useful text-oriented tool:
http://oprofile.sourceforge.net/about/
From the Overview on this page:
"OProf
On Wednesday 11 February 2009, Eric Thibodeau wrote:
> Tom Elken wrote:
> >> Which profilers can
> >> benefit from all this info?
> >
> > We have found OProfile to be a useful text-oriented tool:
> > http://oprofile.sourceforge.net/about/
> > From the Overview on this page:
> > "OProfile is a syste