Re: [Beowulf] interconnect and compiler ?

2009-02-11 Thread Greg Lindahl
On Thu, Feb 12, 2009 at 05:43:53AM +0100, Vincent Diepeveen wrote: > Will it first handle all the megabyte sized packets, or give the quick > short packet already 'in between' to our "logical core 42"? Vincent, It would help if you paid attention previously when this question was answered. --

Re: [Beowulf] interconnect and compiler ?

2009-02-11 Thread Vincent Diepeveen
Hi Patrick, Interesting to know that you nowadays market ethernet cards. Still, some knowledge of other companies' switches you also seem to possess. Congrats. My faith in the switch and crossbars is actually quite high. Not so much in the MPI cards, however. Let's assume for now that I was spe

Re: [Beowulf] 10 GbE

2009-02-11 Thread Patrick Geoffray
Hi Igor, Igor Kozin wrote: - Switch latency (btw, the data sheet says x86 inside); AFAIK, it is using the 24-port Fulcrum chip, which has a latency of ~300ns. The 48-port models use multiple crossbars in a Clos, partially (S) or fully (SX) connected. I have never benchmarked the 48-port ver

Re: [Beowulf] What is the right lubricant for computer rack sliding rails?

2009-02-11 Thread Robert G. Brown
On Wed, 11 Feb 2009, Greg Lindahl wrote: On Wed, Feb 11, 2009 at 05:19:11PM +, John Hearns wrote: Or use the little metal tool which comes in a bag of cage nuts. Which themselves have nice sharp edges ready to slice your fingers of course. But any big cluster demands blood sacrifice (*)

Re: [Beowulf] 10 GbE

2009-02-11 Thread Greg Lindahl
On Wed, Feb 11, 2009 at 12:57:01PM +, Igor Kozin wrote: > - Switch latency (btw, the data sheet says x86 inside); Since almost all of the "latency" is in the endpoints, the best way to measure this is with 0, 1, 2 switches between 2 nodes. If your measurements are accurate enough (look at th
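The measurement approach Greg describes (ping-pong through 0, 1, and 2 switches) can be sketched as a small linear fit: the slope of latency versus hop count is the per-switch latency, and the intercept is the endpoint contribution. The latency numbers below are placeholders for illustration, not real measurements.

```python
# Hedged sketch: estimate per-switch latency from one-way ping-pong
# latencies measured with 0, 1, and 2 switches in the path.
# The latency values are hypothetical, chosen only to show the method.

switches = [0, 1, 2]
latency_us = [5.00, 5.30, 5.60]  # assumed one-way latencies in microseconds

# Least-squares slope: latency added per additional switch hop.
n = len(switches)
mean_x = sum(switches) / n
mean_y = sum(latency_us) / n
slope_us = sum((x - mean_x) * (y - mean_y)
               for x, y in zip(switches, latency_us)) \
           / sum((x - mean_x) ** 2 for x in switches)

endpoint_us = mean_y - slope_us * mean_x  # intercept: endpoints alone
print(f"per-switch latency: {slope_us * 1000:.0f} ns")
print(f"endpoint latency:   {endpoint_us:.2f} us")
```

As the post notes, the endpoint term dominates, so the measurements must be accurate enough for the small per-switch slope to stand out from run-to-run noise.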

Re: [Beowulf] interconnect and compiler ?

2009-02-11 Thread Patrick Geoffray
Vincent Diepeveen wrote: All such sorts of switch latencies are at least factor 50-100 worse than their one-way pingpong latency. I think you are a bit confused about switch latencies. There is the crossbar latency that is the time it takes for a packet to be decoded and routed to the right o
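One plausible source of the confusion Patrick points at is mixing up cut-through crossbar delay with store-and-forward delay, where the switch must receive the whole packet before forwarding it. A back-of-the-envelope sketch, with an assumed 10 GbE link rate and an assumed ~300 ns crossbar delay (neither is a vendor figure):

```python
# Hedged sketch: store-and-forward latency vs. cut-through crossbar
# latency. Link rate and crossbar delay are illustrative assumptions.

LINK_GBPS = 10.0     # assumed 10 GbE link rate
CROSSBAR_NS = 300.0  # assumed cut-through crossbar delay

def store_and_forward_ns(packet_bytes: int) -> float:
    # Serialization time (receiving the whole packet) is paid per hop
    # on top of the routing decision itself.
    serialization_ns = packet_bytes * 8 / LINK_GBPS  # bits / (Gbit/s) = ns
    return serialization_ns + CROSSBAR_NS

print(store_and_forward_ns(64))    # small packet: crossbar delay dominates
print(store_and_forward_ns(9000))  # jumbo frame: serialization dominates
```

For a jumbo frame the serialization term alone is several microseconds, an order of magnitude or two above the crossbar delay, which is consistent with the "factor 50-100" impression if one conflates the two quantities.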

Re: [Beowulf] What is the right lubricant for computer rack sliding rails?

2009-02-11 Thread Greg Lindahl
On Wed, Feb 11, 2009 at 05:19:11PM +, John Hearns wrote: > Or use the little metal tool which comes in a bag of cage nuts. Which > themselves have nice sharp edges ready to slice your fingers of > course. But any big cluster demands blood sacrifice (*) The guys I learned from use only screwdr

Re: [Beowulf] Re: What is the right lubricant for computer rack sliding rails?

2009-02-11 Thread Andrew M.A. Cater
On Wed, Feb 11, 2009 at 05:39:23PM +0100, Kilian CAVALOTTI wrote: > On Monday 09 February 2009 21:37:23 David Mathog wrote: > > The uber-pile is a bit of a straw man. I'm pretty sure that a 40U stack > > of (typical) 1U or 2U servers would squish the one(s) on the bottom, > > Absolutely. At Stanf

Re: [Beowulf] What is the right lubricant for computer rack sliding rails?

2009-02-11 Thread Charlie Peck
On Feb 11, 2009, at 11:56 AM, Skylar Thompson wrote: dan.kid...@quadrics.com wrote: Kilian, Well you shouldn't be using your bare fingers. Everyone has their own preferred trick. I put a small straight blade screwdriver in the hole, and then pop in the cage nut by hand using the screwdrive

Re: [Beowulf] 10 GbE

2009-02-11 Thread Scott Atchley
On Feb 11, 2009, at 7:57 AM, Igor Kozin wrote: Hello everyone, we are embarking on evaluation of 10 GbE for HPC and I was wondering if someone has already had experience with Arista 7148SX 48 port switch or/and Netxen cards. General pros and cons would be greatly appreciated and in particu

Re: [Beowulf] What is the right lubricant for computer rack sliding rails?

2009-02-11 Thread John Hearns
2009/2/11 : > Kilian, > > Well you shouldn't be using your bare fingers. > Everyone has their own preferred trick. I put a small straight blade > screwdriver in the hole, and then pop in the cage nut by hand using the > screwdriver as a 'shoehorn' Or use the little metal tool which comes in a

RE: [Beowulf] What is the right lubricant for computer rack sliding rails?

2009-02-11 Thread Dan.Kidger
Kilian, Well you shouldn't be using your bare fingers. Everyone has their own preferred trick. I put a small straight blade screwdriver in the hole, and then pop in the cage nut by hand using the screwdriver as a 'shoehorn' Daniel -Original Message- From: beowulf-boun...@beowulf.org [

Re: [Beowulf] What is the right lubricant for computer rack sliding rails?

2009-02-11 Thread Skylar Thompson
dan.kid...@quadrics.com wrote: > Kilian, > > Well you shouldn't be using your bare fingers. > Everyone has their own preferred trick. I put a small straight blade > screwdriver in the hole, and then pop in the cage nut by hand using the > screwdriver as a 'shoehorn' > > Our rack kit actually

[Beowulf] Re: 10 GbE

2009-02-11 Thread Igor Kozin
An answer to my own question on Arista 7148SX latency, or rather an upper estimate. There is a Mellanox white paper http://www.mellanox.com/pdf/whitepapers/wp_mellanox_en_Arista.pdf which reports the "TCP latency using standard test suite" (netperf?) as 7.36 us (Mellanox ConnectX EN -> Arista 7124S
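The upper-estimate reasoning can be made explicit: the switch contribution is bounded by the through-switch latency minus a back-to-back (no-switch) measurement. The 7.36 us figure is from the white paper cited above; the back-to-back value below is a placeholder assumption, since it is not given in the snippet.

```python
# Hedged back-of-the-envelope bound on switch latency.
# through_switch_us comes from the Mellanox white paper cited above;
# back_to_back_us is a hypothetical NIC-to-NIC (no switch) value.

through_switch_us = 7.36  # TCP latency via the switch (white paper)
back_to_back_us = 7.00    # assumed latency with the switch removed

upper_bound_ns = (through_switch_us - back_to_back_us) * 1000
print(f"switch latency upper bound: ~{upper_bound_ns:.0f} ns")
```

Without the back-to-back number this only yields a loose bound; the whole 7.36 us is itself an upper estimate on the switch alone, since most of it sits in the TCP endpoints.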

Re: [Beowulf] Re: What is the right lubricant for computer rack sliding rails?

2009-02-11 Thread Kilian CAVALOTTI
On Monday 09 February 2009 21:37:23 David Mathog wrote: > The uber-pile is a bit of a straw man. I'm pretty sure that a 40U stack > of (typical) 1U or 2U servers would squish the one(s) on the bottom, Absolutely. At Stanford, I took part in decommissioning a cluster, the one I administered replac

Re: [Beowulf] What is the right lubricant for computer rack sliding rails?

2009-02-11 Thread Kilian CAVALOTTI
On Monday 09 February 2009 22:33:48 Greg Lindahl wrote: > (After the first 100 cage nuts, and about 3 boxes of Band-Aid... Those cage nuts have such a tendency to slice through your finger pulp, I always thought their use should be restricted by international treaties. How cool is it to apply

Re: [Beowulf] Connecting two 24-port IB edge switches to core switch:extra switch hop overhead

2009-02-11 Thread Ivan Oleynik
Tom, Thanks for your reply. As I explained in my original email, a 48-port IB switch would be ideal because the jobs on these 36 nodes will mostly be run locally within the 36-node complex. However, a 48-port IB switch is too expensive, which is why I am considering cost-effective alternatives.

Re: [Beowulf] 10 GbE

2009-02-11 Thread Carsten Aulbert
Hi Igor, Igor Kozin schrieb: > we are embarking on evaluation of 10 GbE for HPC and I was wondering if > someone has already had experience with Arista 7148SX 48 port switch > or/and Netxen cards. General pros and cons would be greatly appreciated > and in particular > - Switch latency (btw, the

[Beowulf] 10 GbE

2009-02-11 Thread Igor Kozin
Hello everyone, we are embarking on evaluation of 10 GbE for HPC and I was wondering if someone has already had experience with Arista 7148SX 48 port switch or/and Netxen cards. General pros and cons would be greatly appreciated and in particular - Switch latency (btw, the data sheet says x86 insi

Re: [Beowulf] itanium vs. x86-64

2009-02-11 Thread Eric Thibodeau
Peter Kjellstrom wrote: On Wednesday 11 February 2009, Eric Thibodeau wrote: Tom Elken wrote: Which profilers can benefit from all this info? We have found Oprofile to be a useful text-oriented tool: http://oprofile.sourceforge.net/about/ From the Overview on this page: "OProf

Re: [Beowulf] itanium vs. x86-64

2009-02-11 Thread Peter Kjellstrom
On Wednesday 11 February 2009, Eric Thibodeau wrote: > Tom Elken wrote: > >> Which profilers can > >> benefit from all this info? > > > > We have found Oprofile to be a useful text-oriented tool: > > http://oprofile.sourceforge.net/about/ > > From the Overview on this page: > > "OProfile is a syste