Re: [Beowulf] programming multicore clusters

2007-06-13 Thread Greg Lindahl
On Wed, Jun 13, 2007 at 07:29:29AM -0700, Joseph Mack NA3T wrote:

> "Most of the folks interested in hybrid models a few years
> ago have now given it up".
>
> I assume this was from the era of 2-way SMP nodes.

No, the main place you saw that style was on IBM SPs with 8+ cores/node.

> I expect ...

RE: [Beowulf] Two problems related to slowness and TASK_UNINTERRUPTABLE process

2007-06-13 Thread Tahir Malas
> -----Original Message-----
> From: Mark Hahn [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, June 12, 2007 6:15 PM
> To: Tahir Malas
> Cc: [EMAIL PROTECTED]; beowulf@beowulf.org; [EMAIL PROTECTED]; 'Ozgur Ergul'
> Subject: Re: [Beowulf] Two problems related to slowness and
> TASK_UNINTERRUPTABLE process ...

[Beowulf] network raid filesystem

2007-06-13 Thread Farkas Levente
Hi, we have a few (10-20) servers on a LAN, each with 4 HDDs. We'd like to create one big filesystem on these servers' hard disks, and we'd like to create it in a redundant way, i.e.:
- if one (or more) of the HDDs or servers fails, the whole filesystem is still usable and consistent
- any server in this farm can see the ...
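One possible sketch of such a setup from this era, assuming nbd (network block device) and Linux software RAID are available on the nodes; the hostnames, port, and device names below are made up for illustration, and note that the node assembling the array remains a single point of failure:

    # On each storage server: export a local disk over the network
    nbd-server 2000 /dev/sdb

    # On the node assembling the filesystem: import the remote disks
    nbd-client server1 2000 /dev/nbd0
    nbd-client server2 2000 /dev/nbd1
    nbd-client server3 2000 /dev/nbd2
    nbd-client server4 2000 /dev/nbd3

    # Build a redundant array across the imported devices, then put an
    # ordinary filesystem on top
    mdadm --create /dev/md0 --level=10 --raid-devices=4 \
          /dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3
    mkfs.ext3 /dev/md0
    mount /dev/md0 /bigfs

A distributed filesystem (PVFS2, Lustre, GFS) avoids that single point of failure at the cost of more setup.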

[Beowulf] programming multicore clusters

2007-06-13 Thread Joseph Mack NA3T
I've googled the internet and searched the Beowulf archives for "hybrid" || "multicore", and the only definitive statement I've found is by Greg Lindahl, 17 Dec 2004: "Most of the folks interested in hybrid models a few years ago have now given it up". I assume this was from the era of 2-way SMP nodes ...
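For context, the hybrid model under discussion usually means one MPI rank per node (or per socket) with OpenMP threads filling the remaining cores, instead of one MPI rank per core. A minimal launch sketch, assuming an OpenMP-enabled MPI binary called ./solver (the binary name and core counts are examples, not from the thread):

    # Flat MPI: one rank per core on 8 dual-core nodes
    mpirun -np 16 ./solver

    # Hybrid: one rank per node, two OpenMP threads per rank
    export OMP_NUM_THREADS=2
    mpirun -np 8 ./solver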

Re: [Beowulf] MPI performance gain with jumbo frames

2007-06-13 Thread Paulo Afonso Lopes
I can report a decrease of circa 10% CPU use per GbE link on an IBM x335 (dual Xeon 2.6 GHz) with on-board Broadcom NICs and an SMC switch, when going from standard 1500-byte to 9K frames in the netperf benchmark, at full bandwidth (circa 80 MB/s).

Best regards,
paulo

> Doug and Jeff have good points (and ...
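For anyone wanting to reproduce this kind of measurement, a sketch assuming root access and a NIC/switch pair that both support 9000-byte frames (the interface name and hostname are examples):

    # On both endpoints: raise the MTU from the default 1500 to 9000
    ifconfig eth0 mtu 9000

    # Bulk TCP throughput from the sender; -c and -C also report
    # local and remote CPU utilization
    netperf -H node2 -t TCP_STREAM -c -C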

Re: [Beowulf] MPI performance gain with jumbo frames

2007-06-13 Thread Greg Lindahl
On Wed, Jun 13, 2007 at 04:30:16PM -0700, [EMAIL PROTECTED] wrote:

> In a multi-core situation, do the interrupts affect all of the cores
> or just one core?

One core gets each interrupt. cat /proc/interrupts to see how this works in your system.

> I personally like the concept that Level 5 Networks ...
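A short sketch of how to check and steer this on Linux, assuming the NIC happens to be on IRQ 16 (the IRQ number is an example):

    # Per-CPU interrupt counts; the NIC's row shows which core services it
    cat /proc/interrupts

    # Pin IRQ 16 to CPU1 only (bitmask 0x2), keeping NIC interrupts off
    # the cores running compute processes
    echo 2 > /proc/irq/16/smp_affinity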

Re: [Beowulf] MPI performance gain with jumbo frames

2007-06-13 Thread Greg Lindahl
On Wed, Jun 13, 2007 at 07:02:10PM -0400, Douglas Eadline wrote:

> So this begs the question: if we are "core rich and packet small",
> do we care about packet size and overhead?

That's not quite the question. In many programs, there is no possible overlap between communication and computation, so ...

Re: [Beowulf] MPI performance gain with jumbo frames

2007-06-13 Thread laytonjb
More questions: one of the purposes of interrupt coalescence is to reduce the load on the CPU by ganging interrupt requests together (sorry for all of the technical jargon there). In a multi-core situation, do the interrupts affect all of the cores or just one core? If the interrupts affect all of ...

Re: [Beowulf] MPI performance gain with jumbo frames

2007-06-13 Thread Douglas Eadline
So this begs the question: if we are "core rich and packet small", do we care about packet size and overhead? In other words, if we have plenty of cores, when do we not care about communication overhead? Most GigE drivers have various interrupt coalescence strategies and, of course, Jumbo Frames to le...
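A sketch of the knobs being referred to, assuming a driver that exposes coalescence settings through ethtool (the values are illustrative, not recommendations):

    # Show the current interrupt-coalescence settings
    ethtool -c eth0

    # Wait up to 100 us (or 32 frames) before raising a receive
    # interrupt, trading a little latency for far fewer interrupts
    ethtool -C eth0 rx-usecs 100 rx-frames 32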

Re: [Beowulf] MPI performance gain with jumbo frames

2007-06-13 Thread Bill Rankin
Doug and Jeff have good points (and some good links). One thing to also pay attention to is the CPU utilization during the bandwidth and application testing. We found that on our cluster (various Dells with built-in GigE NICs), while we did not see huge differences in effective bandwidth, t...
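A simple way to watch per-core utilization during such tests, assuming the sysstat tools are installed on the nodes (interval and duration are examples):

    # Per-CPU breakdown once a second while the benchmark runs; the
    # %irq and %soft columns show time spent in interrupt handling
    mpstat -P ALL 1

    # Record overall utilization for later comparison across MTU settings
    sar -u 1 60 > cpu-mtu9000.log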