Re: [Beowulf] confidential data on public HPC cluster

2010-03-15 Thread Nifty Tom Mitchell
On Mon, Mar 01, 2010 at 11:29:49AM -0500, Jonathan Dursi wrote: > > Hi; > > We're a fairly typical academic HPC centre, and we're starting to > have users talk to us about using our new clusters for projects that > have various requirements for keeping data confidential. "Various requirements"

Re: [Beowulf] Q: IB message rate & large core counts (per node)?

2010-03-15 Thread Greg Lindahl
On Mon, Mar 15, 2010 at 05:03:14PM -0700, Tom Elken wrote: > QLogic MPI does not have a message coalescing feature, and that is > what we use to measure MPI message rate on our IB adapters. Thank you for making that clear, Tom. -- greg ___ Beowulf ma

RE: [Beowulf] Q: IB message rate & large core counts (per node)?

2010-03-15 Thread Mark Hahn
I don't appreciate those kinds of responses, and they are not appropriate for this mailing list. Please fix in future emails. You assert some numbers, perhaps correct, but Patrick provides useful explanations; I prefer the latter. system can provide 3300MB/s uni or >6500MB bi dir. Of cou

Re: [Beowulf] Q: IB message rate & large core counts (per node)?

2010-03-15 Thread Greg Lindahl
On Mon, Mar 15, 2010 at 07:30:15PM -0400, Patrick Geoffray wrote: > However, things are different for tiny packets. The minimum packet size > on Ethernet is 60 Bytes. The maximum packet rate (not coalesced !) is > 14.88 Mpps on a 10GE link, counting everything (inter-packet gap, CRC, > etc).
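For reference, that 14.88 Mpps figure falls straight out of the framing overhead (a back-of-the-envelope check, assuming standard Ethernet framing: 64-byte minimum frame including the 4-byte CRC, plus 8 bytes of preamble/SFD and a 12-byte inter-packet gap; the 60 bytes quoted above is the same minimum frame counted without the CRC):

  per-packet cost on the wire = 64 + 8 + 12 = 84 bytes = 672 bits
  max rate at 10 Gb/s         = 10e9 / 672 ≈ 14.88 Mpps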

RE: [Beowulf] Q: IB message rate & large core counts (per node)?

2010-03-15 Thread Tom Elken
> On Behalf Of Gilad Shainer > > ... OSU has different benchmarks > so you can measure message coalescing or real message rate. [As a refresher for the wider audience, as Gilad defined earlier: "Message coalescing is when you incorporate multiple MPI messages in a single network packet." A
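For anyone who wants to see concretely what a "real" message-rate measurement looks like versus one that a coalescing MPI could merge into fewer packets, here is a minimal sketch in the spirit of those OSU tests (this is not the osu_mbw_mr source; the window and message sizes are illustrative):

  /* msgrate.c -- minimal 2-rank message-rate sketch (illustrative, not the OSU code) */
  #include <mpi.h>
  #include <stdio.h>
  #include <string.h>

  #define WINDOW   64      /* small messages posted back-to-back per iteration    */
  #define ITERS    1000
  #define MSG_SIZE 8       /* bytes; small enough that rate, not bandwidth, rules */

  int main(int argc, char **argv)
  {
      int rank, nprocs, i, j;
      char sbuf[MSG_SIZE], rbuf[WINDOW][MSG_SIZE], ack = 0;
      MPI_Request req[WINDOW];
      double t0, t1;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
      if (nprocs != 2) MPI_Abort(MPI_COMM_WORLD, 1);
      memset(sbuf, 0, sizeof(sbuf));

      MPI_Barrier(MPI_COMM_WORLD);
      t0 = MPI_Wtime();
      for (i = 0; i < ITERS; i++) {
          if (rank == 0) {
              /* back-to-back sends: an MPI that coalesces may pack several of
                 these into one network packet, inflating the apparent rate */
              for (j = 0; j < WINDOW; j++)
                  MPI_Isend(sbuf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &req[j]);
              MPI_Waitall(WINDOW, req, MPI_STATUSES_IGNORE);
              MPI_Recv(&ack, 1, MPI_CHAR, 1, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
          } else {
              for (j = 0; j < WINDOW; j++)
                  MPI_Irecv(rbuf[j], MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &req[j]);
              MPI_Waitall(WINDOW, req, MPI_STATUSES_IGNORE);
              MPI_Send(&ack, 1, MPI_CHAR, 0, 1, MPI_COMM_WORLD);
          }
      }
      t1 = MPI_Wtime();

      if (rank == 0)
          printf("~%.2f million messages/sec\n",
                 (double)ITERS * WINDOW / (t1 - t0) / 1e6);
      MPI_Finalize();
      return 0;
  }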

Re: [Beowulf] Q: IB message rate & large core counts (per node)?

2010-03-15 Thread Gilad Shainer
That's correct - Original Message - From: beowulf-boun...@beowulf.org To: Patrick Geoffray Cc: beowulf@beowulf.org Sent: Mon Mar 15 16:45:44 2010 Subject: Re: [Beowulf] Q: IB message rate & large core counts (per node)? If I understand correctly, 40GbE is 64/66 encoded. Tom On 3/15

Re: [Beowulf] Q: IB message rate & large core counts (per node)?

2010-03-15 Thread Tom Ammon
If I understand correctly, 40GbE is 64/66 encoded. Tom On 3/15/2010 5:30 PM, Patrick Geoffray wrote: On 3/15/2010 5:24 PM, richard.wa...@comcast.net wrote: to best and worst case). It would be good to add Ethernet to the mix (1Gb, 10Gb, and 40Gb) as well. 10 Gb Ethernet uses 8b/10b

Re: [Beowulf] Q: IB message rate & large core counts (per node)?

2010-03-15 Thread Patrick Geoffray
On 3/15/2010 5:24 PM, richard.wa...@comcast.net wrote: to best and worst case). It would be good to add Ethernet to the mix (1Gb, 10Gb, and 40Gb) as well. 10 Gb Ethernet uses 8b/10b with a signal rate of 12.5 Gb/s, for a raw bandwidth of 10 Gb/s. I don't know how 1Gb is encoded and 40 Gb/s is
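For the encodings being discussed, signal rate times coding efficiency gives the data rate (10GbE uses 8b/10b only on the XAUI/CX4-style 4-lane PHYs; the serial 10GBASE-R PHYs use 64b/66b):

  8b/10b  (80% efficient):  12.5  Gb/s signalling -> 10 Gb/s data
  64b/66b (~97% efficient): 41.25 Gb/s signalling -> 40 Gb/s data (4 x 10.3125 Gb/s lanes for 40GbE)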

RE: [Beowulf] Q: IB message rate & large core counts (per node)?

2010-03-15 Thread Gilad Shainer
I don’t appreciate those kinds of responses, and they are not appropriate for this mailing list. Please fix in future emails. I stand behind any info I put out, and definitely don’t give low-ball estimates as you do. It was nice to see that you fixed your 20+20 numbers to 24+23 (that was marketing

Re: [Beowulf] Q: IB message rate & large core counts (per node)?

2010-03-15 Thread Patrick Geoffray
On 3/15/2010 5:33 PM, Gilad Shainer wrote: To make it more accurate, most PCIe chipsets support 256B reads, and the data bandwidth is 26Gb/s, which makes it 26+26, not 20+20. I know marketers live in their own universe, but here are a few nuts for you to crack: * If most PCIe chipsets woul
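Whichever side one takes, the arithmetic has the same shape (a rough sketch with approximate per-TLP overheads, not either poster's exact model):

  PCIe Gen2 x8 signalling:  8 lanes x 5 GT/s = 40 Gb/s
  after 8b/10b encoding:    32 Gb/s of raw data per direction
  per-TLP overhead:         roughly 20-24 bytes of framing, sequence number,
                            header and LCRC wrapped around each payload
  effective rate            ≈ 32 Gb/s x payload / (payload + overhead)
      64 B completions:     64/(64+20)   ≈ 76%  -> ~24 Gb/s
      256 B completions:    256/(256+20) ≈ 93%  -> ~30 Gb/s

DLLP traffic (acks and flow-control credits) and the chipset's actual read-completion size take a further bite, which is part of why the quoted per-direction numbers range from 20 to 26 Gb/s.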

Re: [Beowulf] error while using mpirun

2010-03-15 Thread Erik Andresen
Date: Fri, 12 Mar 2010 23:38:56 +0530 From: akshar bhosale Subject: [Beowulf] error while using mpirun To: beowulf@beowulf.org, torqueus...@supercluster.org when I do /usr/local/mpich-1.2.6/bin/mpicc -o test test.c, I get test; but when I do /usr/local/mpich-1.2.6/bin/mpirun -np 4 test, I get p

Re: [Beowulf] error while using mpirun

2010-03-15 Thread rigved sharma
hi, thanks for your solutions. I tried all the solutions given by you, but still get the same error. Can you please suggest any other solution? regards, akshar On Sat, Mar 13, 2010 at 9:41 AM, wrote: > > Akshar bhosale wrote: > > >When I do: > > > >/usr/local/mpich-1.2.6/bin/mpicc -o test test.c, I get test; b

Re: [Beowulf] error while using mpirun

2010-03-15 Thread Andrew Fitz Gibbon
On Mar 12, 2010, at 12:08 PM, akshar bhosale wrote: when I do /usr/local/mpich-1.2.6/bin/mpicc -o test test.c, I get test; but when I do /usr/local/mpich-1.2.6/bin/mpirun -np 4 test, I get p0_31341: p4_error: Path to program is invalid while starting /home/npsf/last with rsh on dragon: -1
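For what it's worth, with MPICH-1's p4 device that p4_error usually means the remote startup (rsh to "dragon" here) could not execute the program at the path it was handed; the usual fixes are to launch with an explicit ./ or absolute path, to make sure the binary sits at the same (ideally NFS-shared) path on every node, and to avoid the name "test", which collides with the shell built-in. A minimal sketch with hypothetical file names:

  /* hello.c */
  #include <mpi.h>
  #include <stdio.h>
  int main(int argc, char **argv)
  {
      int rank, nprocs;
      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
      printf("hello from rank %d of %d\n", rank, nprocs);
      MPI_Finalize();
      return 0;
  }

  $ /usr/local/mpich-1.2.6/bin/mpicc -o mpi_hello hello.c
  $ /usr/local/mpich-1.2.6/bin/mpirun -np 4 ./mpi_hello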

RE: [Beowulf] Q: IB message rate & large core counts (per node)?

2010-03-15 Thread Gilad Shainer
To make it more accurate, most PCIe chipsets support 256B reads, and the data bandwidth is 26Gb/s, which makes it 26+26, not 20+20. Gilad From: beowulf-boun...@beowulf.org [mailto:beowulf-boun...@beowulf.org] On Behalf Of richard.wa...@comcast.net Sent: Monday, March 15, 2010 2:25 PM

Re: [Beowulf] Q: IB message rate & large core counts (per node)?

2010-03-15 Thread richard . walsh
On Monday, March 15, 2010 1:27:23 PM GMT Patrick Geoffray wrote: >I meant to respond to this, but got busy. You don't consider the protocol >efficiency, and this is a major issue on PCIe. Yes, I forgot that there is more to the protocol than the 8B/10B encoding, but I am glad to get your

Re: [Beowulf] 1000baseT NIC and PXE?

2010-03-15 Thread Michael Di Domenico
Surprisingly enough, there are still cards that don't come with PXE built into the embedded ROM. You'll have to check the specs on the card you're interested in on the mfg website. One thing that bit me in the past: even though the card had PXE, the BIOS of the machine I was working on had no me

[Beowulf] 1000baseT NIC and PXE?

2010-03-15 Thread David Mathog
Sorry if this is a silly question, but do any of the inexpensive 1000baseT NICs support PXE boot? I just finished looking through the offerings on newegg and while a couple of the really really cheap ones had an empty socket for a boot rom, none of the ones without such a socket said definitively

Re: [Beowulf] Q: IB message rate & large core counts (per node)?

2010-03-15 Thread Patrick Geoffray
Hi Richard, I meant to reply earlier but got busy. On 2/27/2010 11:17 PM, richard.wa...@comcast.net wrote: If anyone finds errors in it please let me know so that I can fix them. You don't consider the protocol efficiency, and this is a major issue on PCIe. First of all, I would change the

Re: [Beowulf] Q: IB message rate & large core counts (per node)?

2010-03-15 Thread Douglas Eadline
I have placed a copy of Richard's table on ClusterMonkey in case you want an HTML view. http://www.clustermonkey.net//content/view/275/33/ -- Doug > > All, > > > In case anyone else has trouble keeping the numbers > straight between IB (SDR, DDR, QDR, EDR) and PCI-Express > (1.0, 2.0, 3.0) he
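As a cross-check against that table, the headline 4x-link numbers are just signal rate times coding efficiency, per direction (EDR and PCIe 3.0 were still roadmap items at the time and are left out here):

  IB SDR 4x:   10 Gb/s signalling, 8b/10b -> 8  Gb/s data
  IB DDR 4x:   20 Gb/s signalling, 8b/10b -> 16 Gb/s data
  IB QDR 4x:   40 Gb/s signalling, 8b/10b -> 32 Gb/s data
  PCIe 1.0 x8: 20 Gb/s signalling, 8b/10b -> 16 Gb/s data
  PCIe 2.0 x8: 40 Gb/s signalling, 8b/10b -> 32 Gb/s data

These are link-level figures, before the transport/protocol overheads argued over up-thread.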