Hello Gilad,
On Thursday, 12 June 2008, you wrote:
>
What is the chipset that you have?
MCP55 by Nvidia.
Tested with OFED 1.3 and MVAPICH2 1.0.3 and 1.0.2.
Regards,
Jan
>
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Jan Heichler
Sent: Thursday, June 12, 2008
Hello Tom,
On Friday, 13 June 2008, you wrote:
>
So you're concerned with the gap between the 2.63 us that OSU measured and the 3.07 us you measured. I wouldn't be too concerned.
First: I get a value of 2.96 us with MVAPICH 1.0.0 - this is exactly the value that I find on the MVAPICH
Dear all:
Thanks for all the responses. I was at the Roadrunner booth at SC07.
They had a handout explaining the Roadrunner architecture which also
has a picture of racks of blades (maybe not of Roadrunner, but blades
nevertheless). If I remember correctly they even have the blades on
display.
You can check out the following:
http://linux-mm.org/LinuxMM
Guilherme Menegon Arantes wrote:
On Tue, Jun 10, 2008 at 06:13:00AM -0700, [EMAIL PROTECTED] wrote:
Date: Tue, 10 Jun 2008 00:58:12 -0400 (EDT)
From: Mark Hahn <[EMAIL PROTECTED]>
Subject: Re: [Beowulf] size of swap partition
To:
Unfortunately the kernel implementation of mmap() doesn't check
the maximum memory size (RLIMIT_RSS) or maximum data size (RLIMIT_DATA)
limits which were being set, but only the maximum virtual RAM size
(RLIMIT_AS) - this is documented in the setrlimit(2) man page.
:-(
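A quick way to see this behaviour (a minimal sketch of my own, assuming Linux/glibc; not taken from the man page or the posts above) is to cap RLIMIT_AS and watch a larger anonymous mmap() fail with ENOMEM:

  #define _GNU_SOURCE
  #include <stdio.h>
  #include <string.h>
  #include <errno.h>
  #include <sys/mman.h>
  #include <sys/resource.h>

  int main(void)
  {
      /* Cap the address space at 256 MB (illustrative value). */
      struct rlimit rl = { 256UL << 20, 256UL << 20 };
      void *p;

      if (setrlimit(RLIMIT_AS, &rl) != 0) {
          perror("setrlimit(RLIMIT_AS)");
          return 1;
      }

      /* A 512 MB anonymous mapping now exceeds RLIMIT_AS and is refused;
         per the description above, setting RLIMIT_RSS or RLIMIT_DATA
         instead would not stop it (on kernels of that era). */
      p = mmap(NULL, 512UL << 20, PROT_READ | PROT_WRITE,
               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      if (p == MAP_FAILED)
          printf("mmap failed as expected: %s\n", strerror(errno));
      else
          printf("mmap unexpectedly succeeded\n");
      return 0;
  }

(The RLIMIT_DATA remark mirrors the behaviour described above; later kernels changed what RLIMIT_DATA accounts for.)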
I think it's a perfectly
So you're concerned with the gap between the 2.63 us that OSU measured
and the 3.07 us you measured. I wouldn't be too concerned.
MPI latency can be quite dependent on the systems you use. OSU used
dual-processor 2.8 GHz systems. Such a system has ~60 ns latency to
local memory. On your
Dear all!
I found this
http://mvapich.cse.ohio-state.edu/performance/mvapich2/opteron/MVAPICH2-opteron-gen2-DDR.shtml
as a reference value for the MPI latency of InfiniBand. I am trying to reproduce those
numbers at the moment but I'm stuck with
# OSU MPI Latency Test v3.0
# Size          Latency (us)
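For a quick sanity check outside the OSU suite, here is a minimal ping-pong sketch (my own illustration; the 1-byte message and the warm-up/iteration counts are arbitrary choices, not the OSU defaults) that reports half the round-trip time, much as osu_latency does:

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      const int warmup = 100, iters = 10000;
      char buf[1] = { 0 };          /* 1-byte message, like the small-message latency figure */
      int rank, size, i;
      double t0 = 0.0;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);
      if (size != 2) {
          if (rank == 0)
              fprintf(stderr, "run with exactly 2 ranks\n");
          MPI_Finalize();
          return 1;
      }

      for (i = 0; i < warmup + iters; i++) {
          if (i == warmup)
              t0 = MPI_Wtime();     /* start timing only after the warm-up phase */
          if (rank == 0) {
              MPI_Send(buf, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
              MPI_Recv(buf, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
          } else {
              MPI_Recv(buf, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
              MPI_Send(buf, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
          }
      }

      if (rank == 0) {
          /* one-way latency = total time / (2 * number of round trips) */
          double latency_us = (MPI_Wtime() - t0) * 1e6 / (2.0 * iters);
          printf("one-way latency: %.2f us\n", latency_us);
      }

      MPI_Finalize();
      return 0;
  }

Run it with the two ranks placed on different nodes, so the measurement goes over InfiniBand rather than shared memory.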
All,
Not an expert, but I know a thing or two. The triblade is two CB2 blades,
each of which holds two PowerXCell processors in a cc-NUMA arrangement.
They sandwich an LS21 blade that is connected to each through a 16x PCIe-to-HT
bridge. These three are uni-body constructed. The CB2s resemble the QS
Bernard Li wrote:
> Hi all:
>
> I am sure most people have seen the following picture for Roadrunner
> circulating the Net:
>
> http://www.cnn.com/2008/TECH/06/09/fastest.computer.ap/index.html?iref=newssearch
>
> However, they don't look like blades to me, more like 2U IBM x series
> servers.
Bernard,
I'm looking forward to hearing from our resident experts, but
meanwhile: http://en.wikipedia.org/wiki/IBM_Roadrunner explains the
architecture some. The buzzword is "triblade", which is 3 blades (with an
extension) employing two types of processors (AMD Opteron and IBM Cell) in
a
Also at ComputerWorld:
http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9085021&intsrc=news_ts_head
On Thu, 2008-06-12 at 12:45 -0700, Bernard Li wrote:
> Hi all:
>
> I am sure most people have seen the following picture for Roadrunner
> circulating the Net:
>
Bernard Li wrote:
> Hi all:
>
> I am sure most people have seen the following picture for Roadrunner
> circulating the Net:
>
> http://www.cnn.com/2008/TECH/06/09/fastest.computer.ap/index.html?iref=newssearch
>
> However, they don't look like blades to me, more like 2U IBM x series
> servers.
Chris Samuel wrote:
>
> Unfortunately the kernel implementation of mmap() doesn't check
> the maximum memory size (RLIMIT_RSS) or maximum data size (RLIMIT_DATA)
> limits which were being set, but only the maximum virtual RAM size
> (RLIMIT_AS) - this is documented in the setrlimit(2) man page.
>
Hi all:
I am sure most people have seen the following picture for Roadrunner
circulating the Net:
http://www.cnn.com/2008/TECH/06/09/fastest.computer.ap/index.html?iref=newssearch
However, they don't look like blades to me, more like 2U IBM x series
servers. Perhaps those are the I/O nodes?
C
Yes, what AMD is currently shipping are B3-revision processors. The
TLB (translation look-aside buffer) erratum is fixed.
There are other less-critical problems with B3, however. Specifically,
power-related compatibility issues with various motherboards due to
(according to the motherboard manufacturers) AMD
> Mellanox announced the availability of the switch ASIC this week, and
> can provide switch evaluation kits (36-port box and adapters with IB QDR
> capability) now. My estimate is that the production switches will be
> out in Q3.
Which vendor?
> +1 for the 24-port Flextronics switches. They are very cost effective
> for half-bisection networks up to 32 ports. It starts to get
> messy after that.
>
> I wonder how long we will be waiting for switches based on
> the 36-port ASIC?
>
Mellanox announced the availability of the switch ASIC
All,
I have not been able to get an exact answer to this question. The older
chip, while much slower in double precision, was fully IEEE compliant,
I am fairly sure.
I believe that IBM has improved the single-precision compliance
in the PowerXCell (although it is still not fully compliant), but
+1 for the 24-port Flextronics switches. They are very cost effective
for half-bisection networks up to 32 ports. It starts to get messy
after that.
I wonder how long we will be waiting for switches based on the 36-port ASIC?
On Thu, Jun 12, 2008 at 4:08 PM, Don Holmgren <[EMAIL PROTECTED]> wrote:
>
Ramiro -
You might also want to consider buying just a single 24-port switch for your 22
nodes, and then, when you expand, either replace it with a larger switch or build a
distributed switch fabric with a number of leaf switches connecting into a
central spine switch (or switches). By the time
Ramiro Alba Queipo wrote:
Hello everybody:
We are about to build an HPC cluster with an InfiniBand network, starting
from 22 dual-socket nodes with AMD quad-core processors, and in a year or
so we will have about 120 nodes. We will be using InfiniBand both
for calculation and for storage.
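As a rough illustration of the leaf/spine arithmetic for the ~120-node target (a sketch under assumed conditions: 24-port leaf switches and full bisection; not a sizing taken from the original posts):

  #include <stdio.h>

  int main(void)
  {
      const int nodes = 120;            /* target cluster size mentioned above    */
      const int leaf_ports = 24;        /* assumed 24-port leaf switches          */
      const int down = leaf_ports / 2;  /* node-facing ports (full bisection)     */
      const int up = leaf_ports - down; /* uplink ports towards the spine         */

      int leaves = (nodes + down - 1) / down;  /* ceil(nodes / down)              */
      int spine_ports = leaves * up;           /* uplinks the spine must terminate */

      printf("%d nodes -> %d leaf switches, %d spine ports\n",
             nodes, leaves, spine_ports);
      return 0;
  }

With these assumptions it works out to 10 leaf switches and 120 spine ports, which is why the spine usually ends up being a larger modular switch or several smaller ones.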
Hi Ra
On Tue, Jun 10, 2008 at 06:13:00AM -0700, [EMAIL PROTECTED] wrote:
>
> Date: Tue, 10 Jun 2008 00:58:12 -0400 (EDT)
> From: Mark Hahn <[EMAIL PROTECTED]>
> Subject: Re: [Beowulf] size of swap partition
> To: Gerry Creager <[EMAIL PROTECTED]>
> Cc: Mikhail Kuzminsky <[EMAIL PROTECTED]>, beowulf@beowu
Hello everybody:
We are about to build an HPC cluster with an InfiniBand network, starting
from 22 dual-socket nodes with AMD quad-core processors, and in a year or
so we will have about 120 nodes. We will be using InfiniBand both
for calculation and for storage.
The question is that we need a modu
Hi All,
I have an issue with a new cluster setup where the nodes are RHEL 5.1 (with
the latest 5.2 kernel). When I try to write NFS data, the nodes scale
linearly until they reach the 10th node; that is, the bandwidth and
throughput seen from the NFS server on the other side of the nodes shows a
lin
Chris Samuel wrote:
>- [EMAIL PROTECTED] wrote:
>> All head nodes should have the BIOS set to localboot first.
>
>We set the interface on the internal cluster network to
>PXE and the external to not.
I agree.
But note that if you use ROCKS, it insists on the other way round:
it wants to alw