Hi Mark,
Thanks for helping keep the FNN meme alive while I've been "away". :-)
On Fri, Jul 25, 2008 at 1:38 AM, Mark Hahn <[EMAIL PROTECTED]> wrote:
>> to generate a Universal FNN. FNNs don't really shine until you have 3 or
>> 4 NICs/HCAs per compute node.
>
> depends on costs. for instance ...
Optical links may bring things back toward larger switches.
optical increases costs, though. we just put in a couple of long DDR
runs using Intel Connects cables, which work nicely, but are
noticeably more expensive than copper ;)
although I give DDR and QDR due respect, I don't find many users ...
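For the archives, here's a back-of-the-envelope Python sketch of the cost
argument. The N <= k*(s-1) + 1 limit is just the counting bound for a
universal FNN (each of a node's k switches can share it with at most s-1
peers, so N-1 <= k*(s-1)); all prices below are made up, not quotes:

import math

def fnn_max_nodes(nics_per_node, switch_ports):
    # Counting bound for a universal FNN: N - 1 <= k * (s - 1).
    return nics_per_node * (switch_ports - 1) + 1

def fnn_switch_count(nodes, nics_per_node, switch_ports):
    # Every NIC needs its own switch port.
    return math.ceil(nodes * nics_per_node / switch_ports)

def cost_per_node(nodes, switches, switch_price, nic_price, nics_per_node):
    # Fabric cost per node; cables and labor ignored.
    return (switches * switch_price + nodes * nics_per_node * nic_price) / nodes

nodes = fnn_max_nodes(3, 24)                     # 70 nodes from 24-port switches
print(nodes, fnn_switch_count(nodes, 3, 24))     # 70 nodes, 9 small switches
print(cost_per_node(nodes, 9, 1500.0, 40.0, 3))  # FNN of small switches
print(cost_per_node(144, 1, 40000.0, 80.0, 1))   # one 144-port chassis, 1 HCA/node

Plug in your own port counts and street prices; the crossover point moves
around a lot, which is the "depends on costs" point above.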
On Thu, Jul 24, 2008 at 06:39:00PM -0400, Mark Hahn wrote:
>
>> If you need 144 ports, a single switch will be more cost-effective
>
> you'd think so - larger switches let you factor out lots of separate
> little power supplies, etc. not to mention transforming lots of cables
> into compact, ...
Hello everybody,
I observed the following problem:
Usually, on the nodes we have 4 rpciod processes running: [rpciod/0]
through [rpciod/3].
I assume the square brackets mean that these are kernel threads.
From time to time one of them changes into the
'D' (uninterruptible sleep) state.
Once it ...
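In case it helps with the debugging: a small Linux-only Python sketch that
scans /proc and lists every process currently in 'D' (field layout per
proc(5)); run it whenever an rpciod gets stuck:

import os

def d_state_tasks():
    # Yield (pid, comm) for every process whose state field is 'D'.
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open("/proc/%s/stat" % pid) as f:
                stat = f.read()
        except OSError:
            continue  # process exited while we were scanning
        # comm can contain spaces, so parse around the parentheses.
        state = stat[stat.rindex(")") + 2]
        if state == "D":
            yield pid, stat[stat.index("(") + 1:stat.rindex(")")]

for pid, comm in d_state_tasks():
    print(pid, comm)

If rpciod shows up, /proc/<pid>/wchan usually tells you which kernel
function it is sleeping in.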
I imagine a hybrid topology of certain-sized subclusters connected
internally with the right topology for their size, and the subclusters
connected to each other with some other topology, etc. The way cores on a
chip are connected is obviously different from the way chips on a board, or
boards on a ...
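To make that concrete, a toy Python generator for such a two-level fabric.
The hypercube-inside / ring-between-gateways choice is purely illustrative,
not a recommendation:

def hypercube_edges(dim, offset=0):
    # Edges of a dim-cube on node ids offset .. offset + 2**dim - 1;
    # neighbors differ in exactly one address bit.
    return [(offset + i, offset + (i ^ (1 << d)))
            for i in range(1 << dim)
            for d in range(dim) if i < (i ^ (1 << d))]

def hybrid(dim, n_sub):
    # n_sub hypercube subclusters; node 0 of each acts as a gateway,
    # and the gateways are joined in a ring.
    size = 1 << dim
    edges = []
    for s in range(n_sub):
        edges += hypercube_edges(dim, s * size)
    edges += [(s * size, ((s + 1) % n_sub) * size) for s in range(n_sub)]
    return edges

print(hybrid(2, 3))  # three 4-node subclusters plus a 3-gateway ring

Swapping in a different edge generator per level is the whole point: each
scale gets the topology that fits it.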
My plan (like an ant contemplating eating an elephant but...) is for the
self-adapting application to optimize not just itself (which it does
already) but its platform (which is imaginable in a complex network). So
yes, I want intelligence at the node; I want a node to decide rationally
that certain ...
Mark Hahn wrote:
>> It is my sacred duty to rescue hypercube topology. Cool Precedes
>> Coolant :-)
>
> I agree HC's are cool, but I think they fit only a narrow ecosystem:
> where you don't mind lots of potentially long wires, since higher
> dimensional fabrics are kind of messy in our low-dimensional universe.
> also, HC's ...
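For anyone who hasn't wired one: in a d-cube, node i's neighbors are
i XOR 2**k, so links per node and diameter both equal d -- which is exactly
why higher dimensions mean many (and often physically long) wires. A tiny
Python sketch of the addressing:

def neighbors(i, dim):
    # Flip each address bit in turn to get the d neighbors of node i.
    return [i ^ (1 << k) for k in range(dim)]

def hops(a, b):
    # Minimal route length = Hamming distance between node addresses.
    return bin(a ^ b).count("1")

print(neighbors(5, 4))  # [4, 7, 1, 13]
print(hops(0, 15))      # 4, the diameter of a 4-cube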
On Thu, Jul 24, 2008 at 10:18:40PM -0700, Mark Hahn wrote:
> > This reminds me to ask about all the Xen questions. Virtual machines
> > (sans dynamic migration) seem to address the inverse of the problem that
> > MPI and other computational clustering solutions address. Virtual machines
> > as ...
virtualization is a throughput thing.
Mark, please can you clarify what you mean by 'throughput'?
sorry, I don't know whether the use of that term is widespread or not.
what I mean is that with some patterns of use, the goal is just
to jam through as many serial jobs per day, or to transfer
as many ...
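a made-up numeric example of that definition, in Python (the 2-hour job
length and the 5% virtualization tax are assumptions, not measurements):

def jobs_per_day(cores, job_hours, overhead=0.0):
    # Serial-job throughput; overhead is the fractional slowdown,
    # e.g. 0.05 for a hypothetical 5% virtualization tax.
    return cores * 24.0 / (job_hours * (1.0 + overhead))

print(jobs_per_day(256, 2.0))        # bare metal: 3072 jobs/day
print(jobs_per_day(256, 2.0, 0.05))  # virtualized: ~2926 jobs/day

so for throughput workloads, a few percent of overhead is often a fair
trade for the management convenience.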
On 7/24/08, Mark Hahn <[EMAIL PROTECTED]> wrote:
>
>
> that makes it sound like inter-node networks in general are doomed ;)
> while cores-per-node is increasing, users love to increase cores-per-job.
>
It is my sacred duty to rescue hypercube topology. Cool Precedes Coolant :-)
Peter
On Thu, Jul 24, 2008 at 03:55:51PM -0700, Greg Lindahl wrote:
> Fiber is a commodity. Perhaps you were looking for pricing close
> enough to twisted pair copper? In any case, it's not just the cost per
> length of cable; the endpoints for fiber are also more expensive.
Right now you need GBICs, ...