Hi Chris,
I'm not an expert on the Mellanox IB implementation or MVAPICH, and I
won't try to be. Gilad or someone else from Mellanox might be able to
give you more specific information, or maybe the OSU guys if you email
mvapich-discuss.
I have two guesses, based on things I've seen on a rang
> University of North Carolina at Greensboro
> 435 New Science Bldg.
> Greensboro, NC 27402-6170
> [EMAIL PROTECTED]
> [EMAIL PROTECTED]
> 336-334-4766 lab
> 336-334-5122 office
> 336-334-5402 fax
>
> -Jim Phillips <[EMAIL PROTECTED]> wrote: -
>
>
Hi Dow,
The QLE7240 DDR HCA is not available yet, but we do not expect that it
would have any substantial advantage on NAMD as compared to the QLE7140
(SDR), because we don't believe that NAMD requires substantial
point-to-point bandwidth from the interconnect.
The TACC cluster is not using QLogic
>
> Gilad.
>
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> On Behalf Of Kevin Ball
> Sent: Thursday, July 19, 2007 11:52 AM
> To: Brian Dobbins
> Cc: beowulf@beowulf.org
> Subject: Re: [Beowulf] MPI2007 out - strange pop2 resu
Hi Brian,
The benchmark 121.pop2 is based on a code that was already important
to QLogic customers before the SPEC MPI2007 suite was released (POP,
Parallel Ocean Program), and we have done a fair amount of analysis
trying to understand its performance characteristics. There are three
things t
Wow... they require, on every node:
Java Runtime Environment
Perl
Python
Tcl
Kitchen Sink*
*(Okay, only figuratively)
But I guess we already knew 'lean and mean' is not something Intel
thinks about very often.
-Kevin
On Wed, 2007-06-27 at 06:30, Douglas Eadline wrote:
> Intel has announced th
On Tue, 2007-04-24 at 08:55, Ashley Pittman wrote:
> On Sat, 2007-04-21 at 13:16 +0200, Håkon Bugge wrote:
> > PIO is a term with two different
> > interpretations. For a shared address space NIC,
> > such as Dolphin's SCI adapters, PIO implies the
> > sender CPU writing data directly into the
Hi Mark,
>
> > On IB - nfs works only with IPoIB, whereas glusterfs does SDP (and ib-verbs,
> > from the source repository) and is clearly way faster than NFS.
>
> "clearly"s like that make me nervous. to an IB enthusiast, SDP may be
> more aesthetically pleasing, but why do you think IPoIB sh
On Thu, 2007-01-18 at 21:31, Jim Lux wrote:
> At 04:41 PM 1/18/2007, Robert G. Brown wrote:
> >On Thu, 18 Jan 2007, Jim Lux wrote:
> >
> >>And likewise, WinXP on the desktop. A company with 20,000 WinXP
> >>desktops cannot tolerate BSODs and mystery hangs on a significant
> >>fraction of those d
Hi Jeff,
On Fri, 2006-10-06 at 12:50, Jeffrey B. Layton wrote:
> Afternoon cluster fans,
>
> I'm working with a CFD code using the PGI 6.1 compilers and
> MPICH-1.2.7. The code runs fine for a while but I get an error
> message that I've never seen before:
>
>
> [2] MPI Internal Aborting prog
Hi Lai,
On Thu, 2006-09-21 at 00:03, Lai Dragonfly wrote:
> Dear all,
>
> I'm doing the HPL benchmark on several nodes of an AMD Opteron platform,
> but may expand to hundreds of nodes later.
> I hope to get around 80% efficiency.
> Does anyone have good suggestions for a combination of compiler (appreciate i
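For reference, HPL "efficiency" here is just Rmax (the measured HPL
result) divided by Rpeak (the theoretical peak of the machine). A quick
back-of-the-envelope example, using made-up numbers since the post above
doesn't give a configuration (node count, clock, and flops/cycle are all
assumptions):

    # hypothetical cluster: 16 dual-core Opterons at 2.4 GHz,
    # 2 double-precision flops per clock per core
    # Rpeak = 16 * 2 * 2.4 GHz * 2 = 153.6 GFLOPS
    # an 80% efficiency target then means an Rmax of roughly:
    echo "16 * 2 * 2.4 * 2 * 0.8" | bc    # => 122.8 GFLOPS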
Gilad,
>
> There was a nice debate on message rate: how important this factor is
> when you want to make a decision, what the real application needs are,
> and whether this is just marketing propaganda. For sure, the message
> rate numbers that are listed on Greg's web site regarding other int
On Wed, 2006-06-28 at 13:41, Erik Paulson wrote:
> On Wed, Jun 28, 2006 at 04:25:40PM -0400, Patrick Geoffray wrote:
> >
> > I just hope this will be picked up by an academic that can convince
> > vendors to donate. Tax break is usually a good incentive for that :-)
> >
>
> How much care should
Patrick,
Thank you for the rapid and thoughtful response.
On Wed, 2006-06-28 at 11:23, Patrick Geoffray wrote:
> Hi Kevin,
>
> Kevin Ball wrote:
> > Patrick,
> >
> >>
> >> From your flawed white papers, you compared your own results against
> &
> - *If* you feel you need to use such a new metric for whatever reason, you
> should at least publish the benchmark that is used to gather these numbers to
> allow others to do comparative measurements. This goes to Greg.
This has been done. You can find the benchmark used for message rate
me
Patrick,
>
> From your flawed white papers, you compared your own results against
> numbers picked from the web, using older interconnects with unknown
> software versions.
I have spent many hours searching to try to find application results
with newer Myrinet and Mellanox interconnects. I
Mathieu,
On Fri, 2006-06-23 at 04:38, mg wrote:
> Hello,
>
> Traditionally, a parallel application is run like the following:
> >>> export FOO=foo
> >>> mpirun -np 2 -machinefile mymachinefile my_parallel_app [app options]
> (To be known by all the nodes of my cluster, the environment variable
> FOO
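One common way to get an environment variable set for every MPI process
(the reply is cut off above, so this is just a sketch, and run_app.sh is a
hypothetical name) is to launch a small wrapper script instead of the
application itself; some MPI implementations also have launcher flags for
this, e.g. Open MPI's "mpirun -x FOO":

    #!/bin/sh
    # run_app.sh -- set the variable, then exec the real binary
    export FOO=foo
    exec ./my_parallel_app "$@"

    # launch the wrapper in place of the application:
    # mpirun -np 2 -machinefile mymachinefile ./run_app.sh [app options]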
On Sat, 2006-06-17 at 11:34, Mark Hahn wrote:
> > >> desktop (32 bit PCI) cards. I managed to get 14.6 HPL GFLOPS
> > >> and 4.35 GROMACS GFLOPS out of 8 nodes consisting of hardware
> > > ...
> > >> As a point of reference, a quad opteron 270 (2GHz) reported
> > >> 4.31 GROMACS GFLOPS.
> > >
> >
Hi Igor,
On Wed, 2006-05-03 at 12:19, Kozin, I (Igor) wrote:
> Hello Kevin,
> interesting that you said that.
> We are in the process of developing a database for
> application benchmark results because we produce quite a bit of data
> and only a small fraction goes into our reports.
> The databa
Hi Patrick,
On Wed, 2006-05-03 at 01:54, Patrick Geoffray wrote:
> Vincent,
>
> Vincent Diepeveen wrote:
> > Just measure the random ring latency of that 1024-node Myri system and
> > compare.
> > There are several tables around with the random ring latency.
> >
> > http://icl.cs.utk.edu/hpcc/h
One thing to note here is that the pay scales in IT are at least
somewhat merit-based. If you do very well, you will climb the pay
scales. In teaching, they are largely seniority-based. You are placed at
a certain initial starting point based on how much schooling you have,
and then your pay rises
computation, though there were other later (optional)
courses that used C or Fortran, but those languages were picked up
somewhat as needed rather than formally taught.
Kevin Ball
PathScale, Inc.
>
> I went through an exercise to try and hire a good young engineer a few
> years ago. I wanted
>
> I've seen architectures with two network switches: one is used for I/O
> (writing, reading, and so on) and the other for message passing (MPI). How is
> this achieved? I get the idea, from one place, that the applications
> running must be aware of this, but I was thinking that for this to work
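The original reply is cut off, but for an MPI that runs over TCP the usual
trick is to give each node one hostname per network and point the
machinefile at the names on the MPI fabric; the hostnames and addresses
below are made up for illustration:

    # /etc/hosts on every node (hypothetical addresses):
    #   10.0.1.1   node001        # GigE switch: NFS and other I/O
    #   10.0.2.1   node001-mpi    # second switch: MPI traffic only
    #
    # the machinefile then lists only the -mpi names, so MPI connections
    # go over the second switch while I/O stays on the first:
    cat mymachinefile
    # node001-mpi
    # node002-mpi

With interconnects like InfiniBand or Myrinet the separation happens
more or less automatically, since MPI talks to the HCA/NIC directly rather
than through the IP stack used for I/O.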
I've been bitten by this a little bit before too. Some additional
things to try besides what rgb said:
Add your public key to the file ~/.ssh/authorized_keys,
e.g.
> cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
Make sure that both authorized_keys and your private key are not
readable by others b
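The advice above is cut off, but the permission tightening it refers to is
typically done like this (a sketch assuming OpenSSH defaults and an RSA key;
adjust the key filenames to whatever you actually use):

    chmod 700 ~/.ssh
    chmod 600 ~/.ssh/authorized_keys ~/.ssh/id_rsa
    chmod 644 ~/.ssh/id_rsa.pub

With StrictModes enabled (the default), sshd will silently ignore
authorized_keys if that file, ~/.ssh, or your home directory is writable
by group or others.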