Sabuj Pattanayek wrote:
> But this is not what I'm seeing in the test above. Shouldn't I be
> able to get >> 1 Gbps with balance-alb mode from multiple TCP streams?
> It looks like all the connections are bunching up on one interface. If
> for whatever reason what I'm trying to do isn't possible
Yes, Intel has provided seed units to system integrators and software
developers so they can start system and code testing.
However, those samples are covered by an NDA that does not allow
performance measurements to be published, including comparisons with
competing products.
Only Intel can provide
All,
In early descriptions of QPI, this capability (the ability of a remote QPI
agent, say on an FPGA or GPU accelerator, to stimulate a QPI processor to
prefetch data into its cache, avoiding a trip to memory) was listed as a
possible feature of QPI. This has obvious potential benefits in globalizing
Hi,
I had posted this to the gentoo-cluster list and was told to look in the
archives here. After searching for bonding and "balance-alb" in the
beowulf archives I found no really clear answers regarding balance-alb
and multiple TCP connections. Furthermore, bonding.txt is really
confusing, using terms
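As far as I can tell from bonding.txt, balance-alb does its receive load
balancing per peer (each remote host is bound to one slave's MAC via ARP),
so multiple TCP streams between the same pair of hosts can still land on a
single interface. One quick way to see what is actually happening is to
sample the per-slave byte counters while the streams run; a minimal Python
sketch, assuming a bond named bond0 and the standard sysfs counter paths:

#!/usr/bin/env python
# Sample per-slave byte counters of a bonded interface to see how traffic
# is actually distributed across the slaves. Assumes a bond named "bond0"
# and the usual /sys/class/net layout; adjust to your setup.
import time

BOND = "bond0"
INTERVAL = 10  # seconds; run your parallel TCP streams during this window

def slaves(bond):
    # /sys/class/net/<bond>/bonding/slaves lists the enslaved interfaces
    with open("/sys/class/net/%s/bonding/slaves" % bond) as f:
        return f.read().split()

def counters(iface):
    vals = {}
    for name in ("rx_bytes", "tx_bytes"):
        with open("/sys/class/net/%s/statistics/%s" % (iface, name)) as f:
            vals[name] = int(f.read())
    return vals

before = dict((s, counters(s)) for s in slaves(BOND))
time.sleep(INTERVAL)
after = dict((s, counters(s)) for s in slaves(BOND))

for s in sorted(before):
    rx = after[s]["rx_bytes"] - before[s]["rx_bytes"]
    tx = after[s]["tx_bytes"] - before[s]["tx_bytes"]
    print("%s: rx %.1f MB/s, tx %.1f MB/s" % (s, rx / float(INTERVAL) / 1e6,
                                              tx / float(INTERVAL) / 1e6))

If one slave carries nearly all of the rx bytes while the streams run, the
connections really are bunching up on that interface.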
Chris,
Thank you for your comments. I just want to clarify that the SOLiD
system from Applied Biosystems ships with Scyld ClusterWare, not Rocks.
Regards
Arend
-Original Message-
From: Chris Dagdigian [mailto:d...@sonsorol.org]
Sent: Tuesday, February 17, 2009 11:52 AM
To: Michael Will
> Most blade servers that I've looked at cost more than 1U servers.
> That's not very attractive. Then again, I buy 2Us mostly because I
> don't need extreme power density anyway.
The catch is that blade servers are less expensive if you compare 1Us
and blades from the same vendor, and as long as you
On Fri, Feb 20, 2009 at 10:22 AM, Reuti wrote:
> OTOH: Once I looked into Torque and found that with "nodes=2:ppn=2" I got
> just one node with 4 slots in total, of course. I don't know whether it can
> still happen to get such a distribution.
>
This confused me for a while. Torque use implies a
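One quick way to check the distribution you actually got is to count slots
per host from inside the job; a small Python sketch, assuming a Torque/PBS
environment where $PBS_NODEFILE is set:

#!/usr/bin/env python
# Count how many slots each host contributes to a Torque/PBS job by reading
# the node file Torque writes for the job. With "nodes=2:ppn=2" you would
# expect two hosts with two slots each; a packed job shows one host with four.
import os

slots = {}
with open(os.environ["PBS_NODEFILE"]) as f:
    for line in f:
        host = line.strip()
        if host:
            slots[host] = slots.get(host, 0) + 1

for host in sorted(slots):
    print("%s: %d slot(s)" % (host, slots[host]))
print("%d host(s), %d slot(s) in total" % (len(slots), sum(slots.values())))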
Lux, James P wrote:
Depends on what your local climate is. If you are in Cleveland
(where it is -3C (27F) right now), it might be nice to have. OTOH, in
Perth, where 36C is forecast, perhaps not.
Mark is on (roughly) the north shore of the lake that Cleveland is on
the south shore of ...
On Fri, Feb 20, 2009 at 03:59:44PM -0500, Mark Hahn wrote:
>> New. Shiny. http://www.supermicro.com/products/nfo/2UTwin2.cfm
>
> I'm a bit puzzled. cpu/mem density is the same as for their older
> dual-node-1U line. I guess it's handy to have 3 normal disks per node.
Did you notice the recent
> -Original Message-
> From: beowulf-boun...@beowulf.org
> [mailto:beowulf-boun...@beowulf.org] On Behalf Of Mark Hahn
> Sent: Friday, February 20, 2009 1:00 PM
> To: John Hearns
> Cc: Beowulf Mailing List
> Subject: Re: [Beowulf] Supermicro 2U
>
> > New. Shiny. http://www.supermicro.com
On 20.02.2009, at 15:37, Bogdan Costescu wrote:
On Fri, 20 Feb 2009, Glen Beane wrote:
I looked into SGE a long time ago, but I found the MPI support
terrible when compared to TORQUE/PBS Pro
Indeed, and AFAIK it is still in a similar state today. There was talk
for a long time on the SGE devel list
New. Shiny. http://www.supermicro.com/products/nfo/2UTwin2.cfm
I'm a bit puzzled. cpu/mem density is the same as for their older
dual-node-1U line. I guess it's handy to have 3 normal disks per node.
The nice thing is that the motherboards are hot-swap from the rear -
so these things
hmm
On 20.02.2009, at 16:23, Prentice Bisbal wrote:
Bogdan Costescu wrote:
On Fri, 20 Feb 2009, Glen Beane wrote:
I looked into SGE a long time ago, but I found the MPI support
terrible when compared to TORQUE/PBS Pro
Indeed, and AFAIK it is still in a similar state today. There was talk
for a long time on the SGE devel list
On 20.02.2009, at 16:57, Bogdan Costescu wrote:
On Fri, 20 Feb 2009, Reuti wrote:
OTOH: Once I looked into Torque and found that with
"nodes=2:ppn=2" I got just one node with 4 slots in total, of
course. I don't know whether it can still happen to get such a
distribution.
Most likely you were using Maui or Moab
Bogdan Costescu wrote:
On Fri, 20 Feb 2009, Prentice Bisbal wrote:
You need to take a fresh look at SGE and Open MPI.
Well, I'm subscribed to the devel lists of both these projects so I
don't really think that I need to take a fresh look at them :-)
Open MPI seems to be the new de facto standard MPI library
On Fri, 20 Feb 2009, Prentice Bisbal wrote:
You need to take a fresh look at SGE and Open MPI.
Well, I'm subscribed to the devel lists of both these projects so I
don't really think that I need to take a fresh look at them :-)
Open MPI seems to be the new de facto standard MPI library
Wh
It is a tad lame to repeat articles from HPCwire here, but I can't help it.
New. Shiny. http://www.supermicro.com/products/nfo/2UTwin2.cfm
The nice thing is that the motherboards are hot-swap from the rear -
so these things
offer the best of both worlds of blade servers and discrete rackable servers
On 2/20/09 10:22 AM, "Reuti" wrote:
On 20.02.2009, at 14:36, Prentice Bisbal wrote:
> Bill Broadley wrote:
>> SGE does seem to be the default for
>> many small/medium clusters these days (and is the Rocks default)
>> but does make
>> some things strangely hard, usually with a workaround though. In particular
On Fri, 20 Feb 2009, Reuti wrote:
OTOH: Once I looked into Torque and found that with "nodes=2:ppn=2"
I got just one node with 4 slots in total, of course. I don't know
whether it can still happen to get such a distribution.
Most likely you were using Maui or Moab and something other than:
JOB
On 2/20/09 10:23 AM, "Prentice Bisbal" wrote:
Bogdan Costescu wrote:
> On Fri, 20 Feb 2009, Glen Beane wrote:
>
>> I looked into SGE a long time ago, but I found the MPI support
>> terrible when compared to TORQUE/PBS Pro
>
> Indeed, and AFAIK it is still in a similar state today. There was talk for
> a long time on the SGE devel list
Does anyone know whether Intel has already started shipping Nehalem Xeon
55** processors? Are there system integrators that have Nehalem servers
already available to customers?
How do these Xeon 55** chips compare to the latest AMD Shanghais (clock for
clock, performance/price)?
Ivan
Reuti wrote:
> On 20.02.2009, at 14:36, Prentice Bisbal wrote:
>
>> Bill Broadley wrote:
>>> SGE does seem to be the default for
>>> many small/medium clusters these days (and is the Rocks default) but
>>> does make
>>> some things strangely hard, usually with a workaround though. In
>>> particular
>>
Bogdan Costescu wrote:
> On Fri, 20 Feb 2009, Glen Beane wrote:
>
>> I looked into SGE a long time ago, but I found the MPI support
>> terrible when compared to TORQUE/PBS Pro
>
> Indeed, and AFAIK it is still in a similar state today. There was talk for a
> long time on the SGE devel list for a TM API to be added
On 20.02.2009, at 14:36, Prentice Bisbal wrote:
Bill Broadley wrote:
SGE does seem to be the default for
many small/medium clusters these days (and is the Rocks default)
but does make
some things strangely hard, usually with a workaround though. In
particular
I find the lack of a straightforward way to handle requesting nodes and
processors
On Fri, 20 Feb 2009, Glen Beane wrote:
I looked into SGE a long time ago, but I found the MPI support
terrible when compared to TORQUE/PBS Pro
Indeed, and AFAIK it is still in a similar state today. There was talk for
a long time on the SGE devel list for a TM API to be added, but it
seems like
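One thing that is easy to check is which batch-system components a given
Open MPI build was compiled with; a rough Python sketch, assuming ompi_info
is on the PATH (its component list includes entries such as tm for Torque
and gridengine for SGE when that support is built in):

#!/usr/bin/env python
# List the MCA components reported by ompi_info that mention Torque (tm) or
# SGE (gridengine), as a quick check of what batch-system support the local
# Open MPI build includes. Assumes ompi_info is on the PATH.
import subprocess

out = subprocess.check_output(["ompi_info"]).decode("utf-8", "replace")

for line in out.splitlines():
    line = line.strip()
    if line.startswith("MCA") and ("gridengine" in line or "tm" in line.split()):
        print(line)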
Bill Broadley wrote:
> SGE does seem to be the default for
> many small/medium clusters these days (and is the Rocks default) but does make
> some things strangely hard, usually with a workaround though. In particular
> I find the lack of a straightforward way to handle requesting nodes and
> processors
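To make the "requesting nodes and processors" point concrete: Torque lets
the user state the shape of the allocation directly, while SGE only takes a
total slot count and leaves the per-host layout to the parallel
environment's allocation_rule, which the admin configures. A small Python
sketch of the two submit lines; the PE name "mpi" here is just an example,
not a standard name:

#!/usr/bin/env python
# The same request (2 nodes x 4 processors per node) expressed for Torque/PBS
# and for SGE. The SGE parallel environment name "mpi" is a site-specific
# example; its allocation_rule (e.g. a fixed 4 slots per host) is what pins
# the layout, and it lives in the PE definition rather than the job request.

def torque_submit(nodes, ppn, script):
    return "qsub -l nodes=%d:ppn=%d %s" % (nodes, ppn, script)

def sge_submit(pe, nodes, ppn, script):
    # SGE sees only the total slot count: nodes * ppn here.
    return "qsub -pe %s %d %s" % (pe, nodes * ppn, script)

print(torque_submit(2, 4, "job.sh"))      # qsub -l nodes=2:ppn=4 job.sh
print(sge_submit("mpi", 2, 4, "job.sh"))  # qsub -pe mpi 8 job.sh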
On 2/19/09 4:28 PM, "Bill Broadley" wrote:
Granted this was 5+ years ago, I assume things are better now, and have heard
quite a few good things about torque lately. SGE does seem the default for
many small/medium clusters these days (and is the Rocks default) but does make
some things strang