At 16:07 28.11.2007, "Michael H. Frese" <[EMAIL PROTECTED]> wrote:
Oops, sorry. Early morning typing-while-sleeping.
The latencies claimed by Argonne for core-to-core
on-board communication with MPICH2 compiled using the ch3:nemesis
device are 0.3-0.5 microseconds, not 0.06. There's also no
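For anyone who wants to reproduce those numbers, a minimal sketch of building MPICH2 with the nemesis channel so that intra-node messages go through shared memory (the install prefix and program names are only illustrative):

    ./configure --with-device=ch3:nemesis --prefix=/opt/mpich2-nemesis
    make
    make install
    /opt/mpich2-nemesis/bin/mpicc -O2 -o app app.c
    /opt/mpich2-nemesis/bin/mpiexec -n 2 ./app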
Jeffrey B. Layton wrote:
amjad ali,
If you are going to use GigE (TCP), I would recommend Scali MPI
(www.scali.com). It's commercial, but it's the best MPI I've ever tested
(the fastest). Plus it works with TCP, Myrinet, and IB without having
to recompile.
If you don't want to pay money for an
Dear Jeff
> If you are going to use GigE (TCP), I would recommend Scali MPI
> (www.scali.com). It's commercial, but it's the best MPI I've ever tested
> (the fastest). Plus it works with TCP, Myrinet, and IB without having
> to recompile.
>
> If you don't want to pay money for an MPI, then go wit
Bernd Schubert wrote:
I don't know about other hardware RAID cards, but for 3ware cards you should
always use a BBU, even if your system DOES have a UPS.
We had the unlucky setup of using 9500S cards only for software RAID. Now,
after about 3 years in production use, the hard disks grew o
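In case it is useful, a quick way to confirm a 3ware controller actually has a healthy BBU is 3ware's tw_cli tool; the controller ID /c0 below is an assumption, yours may differ:

    tw_cli /c0/bbu show all    # BBU presence, status and charge
    tw_cli /c0 show            # units on the controller and their status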
Charlie Peck wrote:
Unless you are using something other than gigabit ethernet, Open-MPI is noticeably less
efficient than LAM-MPI over that fabric.
I suspect at some point in the future gige will catch up, but for now my
(limited) understanding is that the Open-MPI folks are focusing their
time on higher band
Chris Samuel wrote:
> PGI 7.1-1 : pgcc -mp -fastsse -tp barcelona-64,k8-64 -Mipa=fast -o
> stream ./stream.c
I have found that PGI does great on dynamic arrays but poorly on
static. Alas, the inverse is true for PathScale: great on static
arrays but poor on dynamic.
A 2.0 barcelona compiled w
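To make the static-versus-dynamic distinction concrete, here is a minimal sketch (not the actual stream.c) of the two allocation styles the compilers treat so differently; 'restrict' is what usually lets a compiler vectorize the dynamic case as aggressively as the static one:

    #include <stdlib.h>

    #define N 2000000

    /* static arrays: sizes known at compile time, no aliasing between them */
    static double a[N], b[N];

    int main(void)
    {
        /* dynamic arrays: run-time allocation; without 'restrict' the compiler
           must assume c and d might alias, which can inhibit vectorization */
        double * restrict c = malloc(N * sizeof *c);
        double * restrict d = malloc(N * sizeof *d);
        if (!c || !d) return 1;

        for (long i = 0; i < N; i++) { a[i] = 1.0; b[i] = 2.0; c[i] = 1.0; d[i] = 2.0; }
        for (long i = 0; i < N; i++) { b[i] = 3.0 * a[i]; d[i] = 3.0 * c[i]; }  /* STREAM "scale" */

        free(c); free(d);
        return (int)(b[0] + d[0]);  /* keep the loops from being optimized away */
    }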
On Nov 28, 2007, at 4:25 PM, Andrew Piskorski wrote:
On Wed, Nov 28, 2007 at 11:10:21AM -0500, Scott Atchley wrote:
At SC07 MPICH2 BoF, I gave a brief talk about MPICH2-MX. In addition
to showing results of it running over MX-10G, I had a few slides
showing performance using MPICH2-MX over Ope
On Wed, Nov 28, 2007 at 11:10:21AM -0500, Scott Atchley wrote:
> At SC07 MPICH2 BoF, I gave a brief talk about MPICH2-MX. In addition
> to showing results of it running over MX-10G, I had a few slides
> showing performance using MPICH2-MX over Open-MX on Intel e1000
> drivers (80003ES2LAN NI
Joe Landman <[EMAIL PROTECTED]> wrote:
> We have been using some GAMESS runs for about 3 years now. They cause
> systems to generate MCEs at prodigious rates if the memory system is
> flaky.
I've started a thread in the memtest86+ forum here:
http://forum.x86-secret.com/showthread.php?t=7739
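For anyone trying to reproduce that, two generic ways to watch for machine-check events on Linux while such a run is going (log locations vary by distribution):

    dmesg | grep -i "machine check"
    grep -i "machine check" /var/log/messages   # /var/log/syslog on Debian-style systems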
I've not tried their respective MPI libraries, but as a general rule, the
people who manufacture the chips have the best idea of how to optimize a
given library. (There are obvious counter-examples: GotoBLAS and FFTW, for
instance.)
That said, have you tried for Intel:
http://www.intel.com/cd/softw
On Nov 28, 2007, at 8:49 AM, Charlie Peck wrote:
On Nov 28, 2007, at 8:04 AM, Jeffrey B. Layton wrote:
Unless you are using something other than gigabit ethernet, Open-MPI is noticeably
less efficient than LAM-MPI over that fabric.
I suspect at some point in the future gige will catch up, but for
now my (lim
But the main point with MPI implementations, even more so with
shared memory, is to run your own application.
For two different MPI shared-memory implementations that show equal
performance on point-to-point microbenchmarks, you can measure very
different performance in applications (mostly at the ba
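For readers who have not seen one, a point-to-point microbenchmark is essentially a bare ping-pong like the sketch below (message size and iteration count are arbitrary); application traffic is far richer, which is why the two can disagree:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank;
        const int iters = 1000, nbytes = 8;
        char buf[8] = {0};

        MPI_Init(&argc, &argv);                 /* run with at least 2 ranks */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        double t0 = MPI_Wtime();
        for (int i = 0; i < iters; i++) {
            if (rank == 0) {
                MPI_Send(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();

        if (rank == 0)   /* half a round trip = one-way latency */
            printf("avg one-way latency: %g us\n", (t1 - t0) / (2.0 * iters) * 1e6);

        MPI_Finalize();
        return 0;
    }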
For the sake of others as easily confused as myself, I note (now, thanks!)
that OpenMP and OpenMPI are two different things:
OpenMP (an alternative to the MPI method) is
http://en.wikipedia.org/wiki/OpenMP
OpenMPI (an implementation of MPI) is http://en.wikipedia.org/wiki/OpenMPI
Cool.
Peter
On No
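To make the distinction concrete: OpenMP splits loops across threads inside one shared-memory process, while MPI (of which Open MPI is one implementation) passes messages between separate processes. Two minimal sketches in C:

    /* OpenMP: one process, many threads sharing memory.
       Compile with e.g. "gcc -fopenmp" or "pgcc -mp". */
    #include <stdio.h>
    int main(void) {
        double sum = 0.0;
        #pragma omp parallel for reduction(+:sum)
        for (int i = 1; i <= 1000; i++)
            sum += 1.0 / i;
        printf("sum = %f\n", sum);
        return 0;
    }

    /* MPI: separate processes that exchange messages.
       Compile with mpicc, launch with "mpirun -np 4 ./a.out". */
    #include <mpi.h>
    #include <stdio.h>
    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        printf("hello from rank %d of %d\n", rank, size);
        MPI_Finalize();
        return 0;
    }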
At 10:31 PM 11/27/2007, you wrote:
Hello,
Today clusters with multicore nodes are quite common,
and the cores within a node share memory.
Which implementations of MPI (commercial or free) make
automatic and efficient use of shared memory for message passing
within a
At 10:31 PM 11/27/2007, you wrote:
Hello,
Today clusters with multicore nodes are quite common and
the cores within a node share memory.
Which implementations of MPI (commercial or free) make
automatic and efficient use of shared memory for message passing
within a no
On Nov 28, 2007, at 8:04 AM, Jeffrey B. Layton wrote:
If you don't want to pay money for an MPI, then go with Open-MPI.
It too can run on various networks without recompiling. Plus it's
open-source.
Unless you are using something other than gigabit ethernet, Open-MPI is noticeably less
efficient than LAM-MPI o
Because my target application is easy to distribute, and also tries to
optimize its own operating environment (by fiddling with its own
parameters), I'm thinking about using MPI for the case where a node wants to
specify a remote node to do a job (e.g., an underutilized node, or one that
has comm
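As a sketch of that idea (the job payload and the choice of target rank here are purely illustrative), directing work at a particular node in MPI just means sending to that node's rank:

    #include <mpi.h>
    #include <stdio.h>

    typedef struct { int job_id; double params[4]; } job_t;  /* illustrative payload */

    int main(int argc, char **argv)
    {
        int rank;
        MPI_Init(&argc, &argv);                 /* run with at least 2 ranks */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        int target = 1;  /* in the real code this would be the underutilized node's rank */
        if (rank == 0) {
            job_t job = { 42, { 0.1, 0.2, 0.3, 0.4 } };
            /* raw-byte send is fine between identical nodes in a homogeneous cluster */
            MPI_Send(&job, (int)sizeof job, MPI_BYTE, target, 0, MPI_COMM_WORLD);
        } else if (rank == target) {
            job_t job;
            MPI_Recv(&job, (int)sizeof job, MPI_BYTE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank %d received job %d\n", rank, job.job_id);
        }

        MPI_Finalize();
        return 0;
    }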
amjad ali,
If you are going to use GigE (TCP), I would recommend Scali MPI
(www.scali.com). It's commercial, but it's the best MPI I've ever tested
(the fastest). Plus it works with TCP, Myrinet, and IB without having
to recompile.
If you don't want to pay money for an MPI, then go with Open-MPI
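For what it's worth, Open MPI picks its transports at launch time through MCA parameters rather than at compile time, and the 'sm' BTL covers the intra-node shared-memory case. The component names below are the usual ones from the Open MPI 1.2 era:

    # shared memory inside a node, TCP between nodes
    mpirun --mca btl sm,self,tcp -np 8 ./app

    # same binary over Myrinet MX or InfiniBand, no recompile
    mpirun --mca btl mx,sm,self -np 8 ./app
    mpirun --mca btl openib,sm,self -np 8 ./app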
On Nov 28, 2007, at 12:31 AM, amjad ali wrote:
Hello,
Today clusters with multicore nodes are quite common
and the cores within a node share memory.
Which implementations of MPI (commercial or free) make
automatic and efficient use of shared memory for message passi