On Fri, Aug 04, 2006 at 08:52:06PM +0100, Vincent Diepeveen wrote:
> Read the subject. You posted to this list answering a private email of
> mine, which I had sent to the person who started this thread.
Vincent, please go into the archive and find where *I* posted a
private email. *I* did not.
- Original Message -
From: "Greg Lindahl" <[EMAIL PROTECTED]>
To: "Vincent Diepeveen" <[EMAIL PROTECTED]>
Cc:
Sent: Friday, August 04, 2006 4:43 PM
Subject: Re: Fw: [Beowulf] Correct networking solution for 16-core nodes
p.s. Don't ever post someone's personal email to a mailing list
without asking.
Gilad,
>
> There was a nice debate on message rate: how important this factor is
> when you want to make a decision, what the real application needs are,
> and whether it is just marketing propaganda. For sure, the message rate
> numbers that are listed on Greg's web site regarding other int
p.s. Don't ever post someone's personal email to a mailing list
without asking. It's considered unethical by many on the Internet.
On Fri, Aug 04, 2006 at 11:35:56AM +0200, Joachim Worringen wrote:
> I think Vincent meant another latency, not the per-hop latency in the
> switches: the time to switch between different processes communicating
> with the NIC. I have never heard of this latency being specified, nor
> being substantia
On Fri, Aug 04, 2006 at 04:36:08PM +0100, Vincent Diepeveen wrote:
> So, for Greg's replacement personnel:
> I understand that your cards can't interrupt at all.
> Users just have to wait until other messages have passed over the wire
> before receiving a very short message (that for example aborts
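(In practice, MPI codes on such cards notice short control messages by
polling between slices of work rather than by interrupt. A minimal
sketch of that pattern, MPI-only, with a made-up ABORT_TAG; run with
two ranks:)

/* Minimal sketch: noticing a short "abort" control message by polling,
 * with no interrupt at all. ABORT_TAG is invented for illustration.
 * Run with 2 ranks: rank 1 sends the abort, rank 0 polls for it. */
#include <mpi.h>
#include <stdio.h>

#define ABORT_TAG 999   /* hypothetical tag reserved for control traffic */

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 1) {
        int code = 1;
        MPI_Send(&code, 1, MPI_INT, 0, ABORT_TAG, MPI_COMM_WORLD);
    } else if (rank == 0) {
        int flag = 0, code;
        while (!flag) {
            /* ...do a slice of real work here, then poll cheaply... */
            MPI_Iprobe(1, ABORT_TAG, MPI_COMM_WORLD, &flag,
                       MPI_STATUS_IGNORE);
        }
        MPI_Recv(&code, 1, MPI_INT, 1, ABORT_TAG, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("abort noticed by polling, code %d\n", code);
    }

    MPI_Finalize();
    return 0;
}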
From: "Greg Lindahl" <[EMAIL PROTECTED]>
To: "Joachim Worringen" <[EMAIL PROTECTED]>;
Sent: Thursday, August 03, 2006 10:07 PM
Subject: Re: [Beowulf] Correct networking solution for 16-core nodes
On Thu, Aug 03, 2006 at 12:53:40PM -0700, Greg Lindahl wrote:
[...] everything.
Now, as I said earlier, never email me personally.
-- greg
- Original Message -
From: "Vincent Diepeveen" <[EMAIL PROTECTED]>
To: "Greg Lindahl" <[EMAIL PROTECTED]>
Sent: Friday, August 04, 2006 11:19 AM
Subject: Re: [Beowulf] Correct ne
at clusters you don't have the problems that a single system image
has.
Vincent
- Original Message -
From: "Joachim Worringen" <[EMAIL PROTECTED]>
To:
Sent: Friday, August 04, 2006 10:35 AM
Subject: Re: [Beowulf] Correct networking solution for 16-core nodes
Greg Lindahl wrote:
Vincent wrote:
Only Quadrics is clear about its switch latency (its competitors
probably have worse). It's 50 us for one card.
We have clearly stated that the Mellanox switch is around 200 usec per
hop. Myricom's number is also well known.
I think Vincent meant another
On Thu, Aug 03, 2006 at 02:02:16PM -0700, Gilad Shainer wrote:
> There was a nice debate on message rate: how important this
> factor is when you want to make a decision, what the real
> application needs are, and whether it is just marketing propaganda.
I would be happy to compare real applicatio
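For anyone who wants to put a number on their own code's sensitivity to
message rate, a minimal MPI sketch along these lines is roughly what
such benchmarks do (window size, message size, and counts here are
illustrative, not any vendor's published benchmark):

/* Minimal small-message rate sketch (illustrative, not a vendor benchmark).
 * Rank 0 streams a window of short nonblocking sends to rank 1, which
 * posts matching receives; run with exactly 2 ranks.
 * Build: mpicc -O2 msgrate.c -o msgrate */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

#define WINDOW  1024   /* illustrative window of outstanding messages */
#define ITERS   100    /* repetitions of the window */
#define MSGSIZE 8      /* bytes per message: deliberately short */

int main(int argc, char **argv)
{
    int rank;
    static char buf[WINDOW][MSGSIZE];
    MPI_Request req[WINDOW];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    memset(buf, 0, sizeof(buf));

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < ITERS; i++) {
        if (rank == 0) {
            for (int j = 0; j < WINDOW; j++)
                MPI_Isend(buf[j], MSGSIZE, MPI_CHAR, 1, 0,
                          MPI_COMM_WORLD, &req[j]);
            MPI_Waitall(WINDOW, req, MPI_STATUSES_IGNORE);
        } else if (rank == 1) {
            for (int j = 0; j < WINDOW; j++)
                MPI_Irecv(buf[j], MSGSIZE, MPI_CHAR, 0, 0,
                          MPI_COMM_WORLD, &req[j]);
            MPI_Waitall(WINDOW, req, MPI_STATUSES_IGNORE);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("~%.0f messages/sec\n", (double)ITERS * WINDOW / (t1 - t0));

    MPI_Finalize();
    return 0;
}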
Gilad Shainer wrote:
There was a nice debate on message rate: how important this factor is
when you want to make a decision, what the real application needs are,
and whether it is just marketing propaganda. For sure, the message rate
numbers that are listed on Greg's web site regarding other inte
>> From the numbers published by Pathscale, it seems that the simple MPI
>> latency of Infinipath is about the same whether you go via PCIe or HTX.
>> The application performance might be different, though.
>
> No, our published number is 1.29 usec for HTX and 1.6-2.0 usec for PCI
> Express. It's
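Numbers like these conventionally mean half the round-trip time of a
small ping-pong between two ranks; a minimal sketch of that
measurement (repetition count is illustrative):

/* Minimal 0-byte ping-pong latency sketch (illustrative).
 * The quoted one-way latency is, by convention, half the measured
 * round-trip time of a small message; run with exactly 2 ranks. */
#include <mpi.h>
#include <stdio.h>

#define REPS 10000

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < REPS; i++) {
        if (rank == 0) {
            MPI_Send(NULL, 0, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(NULL, 0, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(NULL, 0, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(NULL, 0, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("one-way latency: %.2f usec\n",
               (t1 - t0) / REPS / 2.0 * 1e6);

    MPI_Finalize();
    return 0;
}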
Only Quadrics is clear about its switch latency (its competitors
probably have worse). It's 50 us for one card.
We have clearly stated that the Mellanox switch is around 200 usec per
hop. Myricom's number is also well known.
Greg,
Don't you mean 200 nanosec? Which is about the same for the M
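However that plays out, per-hop switch latency only matters in
proportion to the rest of the path. A back-of-envelope model, with
illustrative numbers rather than anyone's spec sheet:

/* Back-of-envelope one-way latency model (illustrative numbers only).
 * total = adapter (NIC-to-NIC) latency + hops * per-hop switch latency.
 * At ~200 ns per hop, even a 3-hop fat-tree path adds only ~0.6 usec. */
#include <stdio.h>

int main(void)
{
    double nic_usec = 1.3;  /* illustrative zero-hop adapter latency */
    double hop_usec = 0.2;  /* 200 nanoseconds per switch hop */
    int    hops     = 3;    /* e.g. leaf -> spine -> leaf in a fat tree */

    printf("estimated one-way latency: %.2f usec\n",
           nic_usec + hops * hop_usec);
    return 0;
}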
On Thu, Aug 03, 2006 at 06:02:50PM -0400, Robert G. Brown wrote:
>
See, just like Kibo, you mention him, he appears!
-- g
On Thu, Aug 03, 2006 at 12:53:40PM -0700, Greg Lindahl wrote:
> Poor scaling as nodes get faster
Er, "fatter". I made the mistake of teasing one of my co-workers for
having not spotted the usec/nsec mistake, so he has to point this one
out.
Apparently I'm replacing rgb while he's on vacation. Al
On Thu, Aug 03, 2006 at 12:53:40PM -0700, Greg Lindahl wrote:
> We have clearly stated that the Mellanox switch is around 200 usec per
> hop. Myricom's number is also well known.
Er, 200 nanoseconds. Y'all know what I meant, right? :-)
-- greg
On Thu, Aug 03, 2006 at 11:19:44AM +0200, Joachim Worringen wrote:
> From the numbers published by Pathscale, it seems that the simple MPI
> latency of Infinipath is about the same whether you go via PCIe or HTX.
> The application performance might be different, though.
No, our published number
Tahir Malas wrote:
-Original Message-
From: Vincent Diepeveen [mailto:[EMAIL PROTECTED]
[...]
Quadrics can work, for example, as direct shared memory among all
nodes when you program for its shmem, which means that for short
messages you can simply share from the 64MB of RAM on the card somethin
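(For readers unfamiliar with that model: Quadrics shipped a Cray-style
shmem library. A minimal one-sided put of a short message, written here
in today's OpenSHMEM spelling; the old Quadrics API used names like
start_pes(), but the idea is identical:)

/* One-sided put of a short message, OpenSHMEM-style (illustrative;
 * Quadrics' original shmem used older Cray-style names, but the model
 * is the same: the sender writes straight into a symmetric variable
 * on the remote node, and no receive call is ever posted). */
#include <shmem.h>
#include <stdio.h>

long mailbox = 0;  /* symmetric: same address on every processing element */

int main(void)
{
    shmem_init();
    int me   = shmem_my_pe();
    int npes = shmem_n_pes();

    if (me == 0 && npes > 1)
        shmem_long_p(&mailbox, 42, 1);  /* write 42 into PE 1's mailbox */

    shmem_barrier_all();                /* make all puts globally visible */

    if (me == 1)
        printf("PE 1 sees %ld\n", mailbox);

    shmem_finalize();
    return 0;
}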
> -Original Message-
> From: Vincent Diepeveen [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, August 02, 2006 12:42 PM
> To: Tahir Malas
> Subject: Re: [Beowulf] Correct networking solution for 16-core nodes
>
> Hi Tahir,
>
> Perhaps you can describe to the
With your previous suggestions 8 months ago we bought a Tyan S4881 server
with 8 dual-core Opteron CPUs and 64GB RAM. Now we will buy new ones (2
more for the time being), and we are eventually planning to form a cluster
from these servers, which will have at most 8 boxes. Now, as you guess, the
Hi All,
With your previous suggestions 8 months ago we bought a Tyan S4881 server
with 8 dual-core Opteron CPUs and 64GB RAM. Now we will buy new ones (2
more for the time being), and we are eventually planning to form a cluster
from these servers, which will have at most 8 boxes. Now, as you gues