Re: [Beowulf] Any recommendations for a good JBOD?

2010-02-19 Thread Joe Landman
Patrick Geoffray wrote: I'll second this recommendation. The Coraid servers are fairly +1. The AoE spec is very simple, I wish it would have more traction outside Coraid. On the other hand, iSCSI is an utter mess with all the bad -1 on the AoE initiator implementation. Seriously. We have c…

Re: [Beowulf] Any recommendations for a good JBOD?

2010-02-19 Thread Joe Landman
Alex Chekholko wrote: Thanks for the pointers! I had never heard of AoE before! This is all well and good until you compare the prices of the respective solutions. E.g. what's the cheapest 5TB (usable) AoE box you can buy? I believe somewhat more than a relatively fast iSCSI/SRP/NFS/CIFS b…

RE: [Beowulf] case (de)construction question

2010-02-19 Thread Lux, Jim (337C)
> -Original Message- > From: beowulf-boun...@beowulf.org [mailto:beowulf-boun...@beowulf.org] On > Behalf Of David Mathog > Sent: Friday, February 19, 2010 3:44 PM > To: beowulf@beowulf.org > Subject: [Beowulf] case (de)construction question > > Many rack cases have threaded standoffs…

Re: [Beowulf] Any recommendations for a good JBOD?

2010-02-19 Thread Alex Chekholko
On Fri, 19 Feb 2010 17:05:01 -0600 Rahul Nabar wrote: > On Fri, Feb 19, 2010 at 4:56 PM, Patrick Geoffray wrote: > > On 2/18/2010 2:26 PM, Jesse Becker wrote: > >> > >> On Thu, Feb 18, 2010 at 01:12:05PM -0500, Gerald Creager wrote: > >>> > >>> For what you're describing, I'd consider CoRAID's A…

Re: [Beowulf] Any recommendations for a good JBOD?

2010-02-19 Thread Mark Hahn
I'm curious, what's the selling point for iSCSI then? The prices are quite ramped up and the performance not stellar. Do any of you in the HPC world buy iSCSI at all? ease, I suppose. ethernet is omnipresent, so anything which uses ethernet has a big advantage. the sticking point is really t…

[Beowulf] case (de)construction question

2010-02-19 Thread David Mathog
Many rack cases have threaded standoffs directly attached to the case metal. On the outside of the case one sees a hexagonal nut, and on the inside the cylindrical standoff - with no sign of the hexagonal nut. We even have one type of case with a removable motherboard tray, which is quite thin,…

Re: [Beowulf] Any recommendations for a good JBOD?

2010-02-19 Thread Rahul Nabar
On Fri, Feb 19, 2010 at 11:29 AM, Mark Hahn wrote: Thanks Mark! > right - 10 years ago, the cost overhead of the system was larger. > nowadays, integration and Moore's law has made small systems very cheap. > this is good, since disks are incredibly cheap as well. (bad if you're > in the storag…

Re: [Beowulf] Any recommendations for a good JBOD?

2010-02-19 Thread Rahul Nabar
On Fri, Feb 19, 2010 at 4:56 PM, Patrick Geoffray wrote: > On 2/18/2010 2:26 PM, Jesse Becker wrote: >> >> On Thu, Feb 18, 2010 at 01:12:05PM -0500, Gerald Creager wrote: >>> >>> For what you're describing, I'd consider CoRAID's AoE technology and > >> I'll second this recommendation. The Coraid s…

Re: [Beowulf] Any recommendations for a good JBOD?

2010-02-19 Thread Patrick Geoffray
On 2/18/2010 2:26 PM, Jesse Becker wrote: On Thu, Feb 18, 2010 at 01:12:05PM -0500, Gerald Creager wrote: For what you're describing, I'd consider CoRAID's AoE technology and I'll second this recommendation. The Coraid servers are fairly +1. The AoE spec is very simple, I wish it would have…
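To give a sense of how small the AoE spec is, here is a rough sketch of its common header as a C struct. This is my own reconstruction from memory of the spec document (AoEr11, if memory serves), so treat the exact field layout as approximate and check the published spec before relying on it:

/* Sketch of the AoE common header that follows the Ethernet header
 * (ethertype 0x88A2); layout reconstructed from memory of the AoE
 * spec, so verify against the published document. */
#include <stdint.h>

struct aoe_hdr {
    uint8_t  ver_flags;  /* high 4 bits: protocol version, low 4: flags (response, error, ...) */
    uint8_t  error;      /* error code, meaningful when the error flag is set */
    uint16_t major;      /* shelf address (big-endian on the wire) */
    uint8_t  minor;      /* slot address */
    uint8_t  command;    /* 0 = issue ATA command, 1 = query config information */
    uint32_t tag;        /* caller-chosen tag, echoed back in the response */
    /* command-specific fields (ATA register images or config string) follow */
} __attribute__((packed));

Ten bytes of common header plus raw ATA register images is essentially the whole protocol; iSCSI, by contrast, layers SCSI command blocks over a multi-phase login and session protocol on top of TCP, which is the contrast Patrick is drawing.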

RE: [Beowulf] Q: IB message rate & large core counts (per node)?

2010-02-19 Thread Gilad Shainer
Nice to hear from you Greg, hope all is well. I don't forget anything, at least for now. OSU has different benchmarks so you can measure message coalescing or real message rate. Funny to read that Q hated coalescing when they created the first benchmark for that ...:-) but let's not argue about that.

Re: [Beowulf] Q: IB message rate & large core counts (per node)?

2010-02-19 Thread Greg Lindahl
On Fri, Feb 19, 2010 at 01:47:07PM -0500, Joe Landman wrote: > The big issue will be contention for the resource. Joe, What "the resource" is depends on implementation. All network cards have the limit of the line rate of the network. As far as I can tell, the Mellanox IB cards have a limited…
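As a back-of-the-envelope bound on what the wire alone allows (every number here is my own assumption, not from the thread: QDR 4x carries roughly 32 Gbit/s of data after 8b/10b encoding, and I am guessing about 30 bytes of per-packet overhead around a tiny payload):

    max msg rate = data_rate / (8 * (payload_bytes + overhead_bytes))
                 ~ 32e9 / (8 * (8 + 30))
                 ~ 1e8 messages/s

The small-message rates being argued about in this thread sit well below that wire ceiling, which supports the point that the card and host-side processing, not the link itself, are the contended resource for tiny messages.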

Re: [Beowulf] Q: IB message rate & large core counts (per node)?

2010-02-19 Thread Greg Lindahl
> Mellanox latest message rate numbers with ConnectX-2 more than doubled versus the old cards, and are for real message rate - separate messages on the wire. The competitor numbers are with using message coalescing, so it is not real separate messages on the wire, or not really message rate…

Re: [Beowulf] Q: IB message rate & large core counts (per node)?

2010-02-19 Thread Greg Lindahl
On Fri, Feb 19, 2010 at 01:25:07PM -0500, Brian Dobbins wrote: > I know Qlogic has made a big deal about the InfiniPath adapter's extremely good message rate in the past... is this still an important issue? Yes, for many codes. If I recall stuff I published a while ago, WRF sent a surprising…

RE: [Beowulf] Q: IB message rate & large core counts (per node)?

2010-02-19 Thread Gilad Shainer
When you look at low-level, marketing-driven benchmarks, you should be careful. Mellanox latest message rate numbers with ConnectX-2 more than doubled versus the old cards, and are for real message rate - separate messages on the wire. The competitor numbers are with using message coalescing, so…
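For readers who have not met the term, here is a toy sketch of what coalescing means at the application level (my own illustration, not any vendor's benchmark code): the same bytes can go out as N separate small sends or packed into one larger send, and a transport that does the packing transparently will report N "messages" for what is a single transfer on the wire.

/* Toy illustration of message coalescing; not any vendor's benchmark.
 * Rank 0 sends NMSG small payloads to rank 1, first as separate
 * messages, then packed into one buffer and sent once. */
#include <mpi.h>
#include <string.h>

#define NMSG    64
#define PAYLOAD 8            /* bytes per logical message */

int main(int argc, char **argv)
{
    int rank;
    char msg[PAYLOAD] = {0}, packed[NMSG * PAYLOAD], rbuf[NMSG * PAYLOAD];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* (a) N separate messages: N sends, N transfers on the wire */
        for (int i = 0; i < NMSG; i++)
            MPI_Send(msg, PAYLOAD, MPI_BYTE, 1, 0, MPI_COMM_WORLD);

        /* (b) coalesced: the same bytes packed into a single, larger send */
        for (int i = 0; i < NMSG; i++)
            memcpy(packed + i * PAYLOAD, msg, PAYLOAD);
        MPI_Send(packed, NMSG * PAYLOAD, MPI_BYTE, 1, 1, MPI_COMM_WORLD);
    } else if (rank == 1) {
        for (int i = 0; i < NMSG; i++)
            MPI_Recv(rbuf, PAYLOAD, MPI_BYTE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Recv(rbuf, NMSG * PAYLOAD, MPI_BYTE, 0, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}

A messages-per-second figure measured while an MPI or firmware layer silently turns case (a) into case (b) counts NMSG logical messages per wire transfer, which is exactly the distinction being argued over in this thread.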

Re: [Beowulf] Q: IB message rate & large core counts (per node)?

2010-02-19 Thread Michael Di Domenico
the folks on the linux-rdma mailing list can probably share some slides with you about app load over different cards. if you don't get a response, i can drop a few names of people who definitely have the info, but i don't want to do it at large on the list. The last set of slides i can (thinking way…

Re: [Beowulf] Q: IB message rate & large core counts (per node)?

2010-02-19 Thread Brian Dobbins
Hi Joe, I'm beginning to look into configurations for a new cluster and with the AMD 12-core and Intel 8-core chips 'here' (or coming soonish), I'm curious if anyone has any data on the effects of the messaging rate of the IB cards. With a 4-socket node having between 32 and 48 cores,…

Re: [Beowulf] Q: IB message rate & large core counts (per node)?

2010-02-19 Thread Joe Landman
Brian Dobbins wrote: Hi guys, I'm beginning to look into configurations for a new cluster and with the AMD 12-core and Intel 8-core chips 'here' (or coming soonish), I'm curious if anyone has any data on the effects of the messaging rate of the IB cards. With a 4-socket node having between…

[Beowulf] Q: IB message rate & large core counts (per node)?

2010-02-19 Thread Brian Dobbins
Hi guys, I'm beginning to look into configurations for a new cluster and with the AMD 12-core and Intel 8-core chips 'here' (or coming soonish), I'm curious if anyone has any data on the effects of the messaging rate of the IB cards. With a 4-socket node having between 32 and 48 cores, lots of…
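A minimal sketch of the kind of measurement that would answer this, in the spirit of OSU's osu_mbw_mr test but written from scratch here (so a starting point, not a validated benchmark): put one rank per core on node A streaming small non-blocking messages to partner ranks on node B, and watch whether the aggregate messages/s keeps scaling as sender ranks are added or flattens when the shared HCA saturates.

/* Sketch of an aggregate small-message-rate test across the cores of a
 * node (inspired by, but not the same as, OSU's osu_mbw_mr).
 * Launch an even number of ranks, the first half on node A and the
 * second half on node B; sender i pairs with receiver i + size/2. */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

#define WINDOW  64           /* outstanding operations per iteration */
#define ITERS   10000
#define MSGSIZE 8            /* small messages stress message rate, not bandwidth */

int main(int argc, char **argv)
{
    int rank, size;
    char buf[WINDOW][MSGSIZE];
    MPI_Request req[WINDOW];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    memset(buf, 0, sizeof buf);

    int half   = size / 2;
    int sender = rank < half;
    int peer   = sender ? rank + half : rank - half;

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();

    for (int it = 0; it < ITERS; it++) {
        for (int w = 0; w < WINDOW; w++) {
            if (sender)
                MPI_Isend(buf[w], MSGSIZE, MPI_BYTE, peer, 0, MPI_COMM_WORLD, &req[w]);
            else
                MPI_Irecv(buf[w], MSGSIZE, MPI_BYTE, peer, 0, MPI_COMM_WORLD, &req[w]);
        }
        MPI_Waitall(WINDOW, req, MPI_STATUSES_IGNORE);
    }

    double elapsed    = MPI_Wtime() - t0;
    double local_rate = sender ? (double)ITERS * WINDOW / elapsed : 0.0;
    double total_rate = 0.0;
    MPI_Reduce(&local_rate, &total_rate, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("%d sender ranks: %.1f million messages/s aggregate\n",
               half, total_rate / 1e6);

    MPI_Finalize();
    return 0;
}

Rerunning with 1, 2, 4, ... sender ranks per node would show whether the per-HCA message rate, rather than per-core latency, becomes the bottleneck at 32-48 cores per node.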

Re: [Beowulf] Any recommendations for a good JBOD?

2010-02-19 Thread Mark Hahn
I was thinking SAS / SCSI / iSCSI is probably easiest and cheapest. the concept of scsi/sas being cheap is rather amusing. Do you already have a suitable SAS or SCSI controller in the host machine? If not, then you have to factor in the cost of the controller. No. true. I have to factor in…

Re: [Beowulf] Any recommendations for a good JBOD?

2010-02-19 Thread Rahul Nabar
On Thu, Feb 18, 2010 at 1:55 PM, Alex Chekholko wrote: > On Thu, 18 Feb 2010 12:53:22 -0600 > Rahul Nabar wrote: > >> >> I was thinking SAS / SCSI / iSCSI is probably easiest and cheapest. > Do you already have a suitable SAS or SCSI controller in the host > machine? If not, then you have to f…

Re: [Beowulf] Any recommendations for a good JBOD?

2010-02-19 Thread Eugen Leitl
On Thu, Feb 18, 2010 at 12:53:22PM -0600, Rahul Nabar wrote: > On Thu, Feb 18, 2010 at 12:14 PM, Alex Chekholko wrote: > > On Thu, 18 Feb 2010 11:37:36 -0600 > > Does it need to be rack-mount? What kind of interface? > > Preferably rack-mount. But cost is a compelling argument. I could be con…