Hi Mark,
Thanks for helping keep the FNN meme alive while I've been "away". :-)
On Fri, Jul 25, 2008 at 1:38 AM, Mark Hahn <[EMAIL PROTECTED]> wrote:
>> to generate a Universal FNN. FNNs don't really shine until you have 3 or 4
>> NICs/HCAs per compute node.
>
> depends on costs. for instance
Optical links may bring things back toward larger switches.
optical increases costs, though. we just put in a couple long DDR
runs using Intel Connects cables, which work nicely, but are
noticeably more expensive than copper ;)
although I give DDR and QDR due respect, I don't find many users
On Thu, Jul 24, 2008 at 06:39:00PM -0400, Mark Hahn wrote:
>
>> If you need 144 ports, a single switch will be more cost effective
>
> you'd think so - larger switches let you factor out lots of separate
> little power supplies, etc. not to mention transforming lots of cables
> into compact, reliable, cheap backplanes.
I imagine a hybrid topology of certain sized subclusters connected
internally with a right topology for their size, and the subclusters
connected to each other with some other topology, etc. The way cores on a
chip are connected is different obviously from the way chips on a board, or
boards on a b
My plan (like an ant contemplating eating an elephant but...) is for the
self-adapting application to optimize not just itself (which it does
already) but its platform (which is imaginable in a complex network). So
yes, I want intelligence at the node; I want a node to decide rationally
that certa
Mark Hahn wrote:
>> It is my sacred duty to rescue hypercube topology. Cool Precedes Coolant :-)
>
> I agree HC's are cool, but I think they fit only a narrow ecosystem:
> where you don't mind lots of potentially long wires, since higher dimensional
> fabrics are kind of messy in our low-dimensional universe.
It is my sacred duty to rescue hypercube topology. Cool Precedes Coolant :-)
I agree HC's are cool, but I think they fit only a narrow ecosystem:
where you don't mind lots of potentially long wires, since higher dimensional
fabrics are kind of messy in our low-dimensional universe. also, HC's
a
On Thu, Jul 24, 2008 at 10:18:40PM -0700, Mark Hahn wrote:
> > This reminds me to ask about all the Xen questions. Virtual machines
> > (sans dynamic migration) seem to address the inverse of the problem that
> > MPI and other computational clustering solutions address. Virtual machines
> > as
virtualization is a throughput thing.
Mark, please can you clarify what you mean by 'throughput'?
sorry, I don't know whether the use of that term is widespread or not.
what I mean is that with some patterns of use, the goal is just
to jam through as many serial jobs per day, or to transfer
as man
On 7/24/08, Mark Hahn <[EMAIL PROTECTED]> wrote:
>
>
> that makes it sound like inter-node networks in general are doomed ;)
> while cores-per-node is increasing, users love to increase cores-per-job.
>
It is my sacred duty to rescue hypercube topology. Cool Precedes Coolant :-)
Peter
> virtualization is a throughput thing.
Mark, please can you clarify what you mean by 'throughput'?
ta
to generate a Universal FNN. FNNs don't really shine until you have 3 or 4
NICs/HCAs per compute node.
depends on costs. for instance, the marginal cost of a second IB port
on a nic seems to usually be fairly small. for instance, if you have
36 nodes, 3x24pt switches is pretty neat for 1 hop
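To make the one-hop idea concrete, here is a minimal Python sketch (not the aggregate.org generator) that just checks the Flat Neighborhood property for the layout assumed above: 36 dual-port nodes spread over the three possible pairs of 24-port switches, so every pair of nodes shares at least one switch.

# A minimal sketch, assuming dual-port HCAs and the 36-node / 3 x 24-port
# layout mentioned above; it only verifies the FNN property (every pair of
# nodes shares a switch, i.e. all node pairs are one switch hop apart).
from itertools import combinations

SWITCH_PORTS = 24
switch_pairs = list(combinations(range(3), 2))      # (0,1), (0,2), (1,2)

# 12 nodes per pair of switches; node i plugs one HCA port into each switch
node_switches = [set(switch_pairs[i // 12]) for i in range(36)]

# Port budget: each switch ends up hosting exactly 24 node links
ports_used = [sum(s in conn for conn in node_switches) for s in range(3)]
assert all(p <= SWITCH_PORTS for p in ports_used)   # [24, 24, 24]

# FNN property: any two nodes share at least one switch
assert all(a & b for a, b in combinations(node_switches, 2))
print("FNN holds; node ports per switch:", ports_used)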
This reminds me to ask about all the Xen questions. Virtual machines
(sans dynamic migration) seem to address the inverse of the problem that
MPI and other computational clustering solutions address. Virtual machines
assume that the hardware is vastly more worthy than the OS and application
w
Cool, FNN's are still being mentioned on the Beowulf mailing list...
For those not familiar with the Flat Neighborhood Network (FNN) idea,
check out this URL: http://aggregate.org/FNN/
For those who haven't played with our FNN generator cgi script, do try
it out. Hank (my Ph.D. advisor) enhanced
On Thu, Jul 24, 2008 at 06:39:00PM -0400, Mark Hahn wrote:
...
>
>> Your point about "most people don't need" is important! With large
>> multi-core, multiple-socket systems, external and internal bandwidth
>> can be interesting to ponder.
>
> that makes it sound like inter-node networks in general are doomed ;)
On Thu, Jul 24, 2008 at 08:14:43PM +0200, Jan Heichler wrote:
> 1) most applications are latency driven - not bandwidth driven.
As a guy who's a big fan of low latency, I had to say that this is not
a good generalization. Some apps become latency or message-rate
sensitive if you scale to enough nodes
Hi Mark,
Mark Hahn wrote:
With any network you need to avoid like the plague any kind of loop,
they can cause weird problems and are pretty much unnecessary. for
well, I don't think that's true - the most I'd say is that given
It is kind of true for wormhole switches, you can deadlock if you
Hi Jan,
Jan Heichler wrote:
1) most applications are latency driven - not bandwidth driven. That
means that half bisectional bandwidth is not cutting your application
performance down to 50%. For most applications the impact should be less
than 5% - for some it is really 0%.
If the app is pu
If you need 144 ports, a single switch will be more cost effective
you'd think so - larger switches let you factor out lots of separate
little power supplies, etc. not to mention transforming lots of cables
into compact, reliable, cheap backplanes. but I haven't seen chassis
switches actual
On Thu, Jul 24, 2008 at 07:27:56PM +0100, andrew holway wrote:
> Sender: [EMAIL PROTECTED]
>
> > Your most cost effective solution will be a large port count switch.
> > Most are not 'ideal' but they are close to ideal and cost effective.
>
> That is not really the case in practice;
>
> You can
> Your most cost effective solution will be a large port count switch.
> Most are not 'ideal' but they are close to ideal and cost effective.
That is not really the case in practice;
You can buy a Mellanox 144-Port Modular InfiniBand DDR Switch
(60-Ports enabled) for around 22k EUR or so
the 24
:) Jan and I work together at ClusterVision.
On Thu, Jul 24, 2008 at 7:14 PM, Jan Heichler <[EMAIL PROTECTED]> wrote:
> Hello Daniel,
>
> On Thursday, 24 July 2008, you wrote:
>
> [network configurations]
>
> I have to say I am not sure that all the configs you sketched really work. I
> never saw somebody creating loops in an IB fabric.
Hello Daniel,
On Thursday, 24 July 2008, you wrote:
[network configurations]
I have to say I am not sure that all the configs you sketched really work. I
never saw somebody creating loops in an IB fabric.
DP> Since I am not a network expert I would be glad if somebody explains
DP> why the fir
Well, the top configuration (and the one that I suggested) is the one
that we have tested and know works. We have implemented it in
hundreds of clusters. It also provides redundancy for the core
switches.
With any network you need to avoid like the plague any kind of loop,
they can cause weird problems and are pretty much unnecessary.
On Thu, Jul 24, 2008 at 09:42:57AM -0700, Kilian CAVALOTTI wrote:
> On Thursday 24 July 2008 05:42:22 am andrew holway wrote:
> > To give a half bisectional bandwidth the best approach is to set up
> > two as core switches and the other 4 as edge switches.
> >
> > Each edge switch will have four connections to each core switch
Andrew,
Attached are some possible topologies I was contemplating, with some
remarks about them. Many other topologies are possible.
The first one is the one you mention. If 12 nodes linked to one switch
communicate with 12 nodes on another switch, the bandwidth is reduced to
8/12 = 2/3.
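A quick back-of-the-envelope check of that 2/3 figure (the 8 inter-switch links are inferred from the 8/12 ratio above, since the attached sketches aren't reproduced here):

# Worst-case fraction of full bandwidth when the sending nodes on one switch
# share the inter-switch links, all links running at the same speed.
def worst_case_fraction(nodes_per_switch, uplinks):
    return min(1.0, uplinks / nodes_per_switch)

print(worst_case_fraction(12, 8))   # 0.666... = 2/3 of full bandwidth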
On Thursday 24 July 2008 05:42:22 am andrew holway wrote:
> To give a half bisectional bandwidth the best approach is to set up
> two as core switches and the other 4 as edge switches.
>
> Each edge switch will have four connections to each core switch
> leaving 16 node connections on each edge switch.
Daniel
To give a half bisectional bandwidth the best approach is to set up
two as core switches and the other 4 as edge switches.
Each edge switch will have four connections to each core switch
leaving 16 node connections on each edge switch.
Should provide a 64-port network.
Make sense?
Ta
A
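For what it's worth, a quick port-accounting sketch of that layout (assuming 24-port switches and the 2-core / 4-edge arrangement described above):

# Port accounting for the suggested layout: 2 core switches, 4 edge switches,
# 4 links from every edge switch to every core switch, 24 ports per switch.
PORTS = 24
CORES, EDGES, LINKS_PER_CORE = 2, 4, 4

uplinks_per_edge = CORES * LINKS_PER_CORE           # 8 uplink ports per edge
node_ports_per_edge = PORTS - uplinks_per_edge      # 16 node ports per edge
core_ports_used = EDGES * LINKS_PER_CORE            # 16 of 24 core ports used

total_node_ports = EDGES * node_ports_per_edge      # 4 * 16 = 64-port network
bisection = uplinks_per_edge / node_ports_per_edge  # 8/16 = half bisection

assert core_ports_used <= PORTS
print(total_node_ports, "node ports;",
      f"{bisection:.0%} of full bisectional bandwidth at each edge switch")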
Hi,
I have the problem of connecting 50 1-HCA nodes with 6 24-port InfiniBand
switches. Several configurations may be imagined, but which one is the best?
What is the general method to solve such a problem?
Thanks,
Dan