Hi Tim
Tim Wilcox wrote:
> I have been coding on the Cell for just over a month now. Nothing serious,
> just getting subroutines to run on a single SPU. This turns out to be very
> easy. Now I am looking at parallel programming between the SPUs and this
> seems much more difficult. The API, as far
All the real application performance results that I have seen show that IB
10 and 20 Gb/s provide much higher performance compared to
your 2G, and clearly better price/performance. This is a good
_all_? that's patently absurd. I don't think there's anything wrong
with presenting one's own products
On Fri, 23 Mar 2007, Gilad Shainer wrote:
> Are you selling Myricom HW or Qlogic HW?
Based on what I know, I think it's perfectly reasonable for Patrick
to expect that a messaging technology can outdo the other for reasons
other than higher signaling rates.
> In general, application performance
On Tue, Mar 20, 2007 at 01:03:58PM +, Mattijs Janssens wrote:
> A non-intrusive test you could try is to replace your MPI (mpich) with a
> lower-latency one. Scali or MPI/Gamma, to name just two. These can lower
> your latency down to 15 µs or so.
Anyone been able to make MPI/Gamma work
On Mar 23, 2007, at 2:53 PM, Gilad Shainer wrote:
What is also interesting to know is that when one uses InfiniBand 20 Gb/s,
he/she can fully utilize the PCIe x8 link, while in your case the
Myricom I/O interface is the bottleneck.
Last I checked, we sell "10 Gb/s dual protocol NICs" which can spea
A non-intrusive test you could try is to replace your MPI (mpich) with a
lower-latency one. Scali or MPI/Gamma, to name just two. These can lower
your latency down to 15 µs or so.
gamma is highly hardware dependent. does scali really provide a latency
improvement independent of hardware?
Jim Lux wrote:
>(with the icky aspect of some UPSes
> requiring active voltage to shut them down.. WHAT were they thinking?)
No kidding. Not only that, but when dealing with these UPS interfaces
a degree in mind reading and reverse engineering is required, since
the companies never publish full
I can't believe the GSL people invented a new interface for the one
numerical interface which is now universal. Not to mention that they
ignored lots of faster, free libraries like ATLAS and FFTW... what's
the point of reinventing the wheel badly?
The different interface was a little unconventio
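For concreteness, here is a purely illustrative sketch (made-up array names and sizes) of the difference being complained about: the same C = A*B multiply written against the de facto standard CBLAS interface that ATLAS and most vendor BLAS libraries implement, and against GSL's own gsl_blas wrapper. The cblas_dgemm prototype is taken from GSL's gsl_cblas.h here, so it links against whichever CBLAS you prefer.

/* Illustrative sketch only: the same matrix multiply C = A*B through the
 * standard CBLAS interface and through GSL's own wrapper. Names and sizes
 * are made up for the example. */
#include <gsl/gsl_blas.h>
#include <gsl/gsl_cblas.h>
#include <gsl/gsl_matrix.h>

void multiply_both_ways(void)
{
    enum { N = 4 };
    double a[N * N], b[N * N], c[N * N];

    /* ... fill a and b ... */

    /* De facto standard CBLAS call (ATLAS, vendor BLAS, reference BLAS). */
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                N, N, N, 1.0, a, N, b, N, 0.0, c, N);

    /* GSL's wrapper around the same operation, via gsl_matrix views. */
    gsl_matrix_view A = gsl_matrix_view_array(a, N, N);
    gsl_matrix_view B = gsl_matrix_view_array(b, N, N);
    gsl_matrix_view C = gsl_matrix_view_array(c, N, N);
    gsl_blas_dgemm(CblasNoTrans, CblasNoTrans, 1.0,
                   &A.matrix, &B.matrix, 0.0, &C.matrix);
}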
***
Call for Papers
2007 IEEE International Conference on Cluster Computing
(Cluster2007)
17 - 21 September 2007
Patrick,
> -----Original Message-----
> From: Patrick Geoffray [mailto:[EMAIL PROTECTED]
> Sent: Thursday, March 22, 2007 11:28 PM
> To: Gilad Shainer
> Cc: beowulf@beowulf.org
> Subject: Re: [Beowulf] Performance characterising a HPC application
>
> Gilad,
>
> Gilad Shainer wrote:
> >> -O
Mark wrote:
> but I also can't find this code
I re-introduced myself to some search engines other than google, and
found the following links relating to nxnlatbw.
On www.jux2.com, I found:
=
cse.ucdavis.edu/~bill/mpi_nxnlatbw.c
Redistribution and use in source
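For anyone who cannot track down the original, the idea behind an NxN latency test is easy to sketch. The following is NOT the actual mpi_nxnlatbw.c, just a minimal illustration of the latency half: every ordered pair of ranks runs a small-message ping-pong and reports half the average round-trip time. The bandwidth half is the same loop with large messages, dividing bytes by time.

/* Minimal all-pairs latency sketch (not mpi_nxnlatbw.c): half the average
 * round-trip time of a 1-byte ping-pong for every ordered rank pair. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, reps = 1000;
    char byte = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    for (int src = 0; src < size; src++) {
        for (int dst = 0; dst < size; dst++) {
            if (src == dst) continue;
            MPI_Barrier(MPI_COMM_WORLD);   /* keep the other ranks quiet */
            if (rank == src) {
                double t0 = MPI_Wtime();
                for (int i = 0; i < reps; i++) {
                    MPI_Send(&byte, 1, MPI_CHAR, dst, 0, MPI_COMM_WORLD);
                    MPI_Recv(&byte, 1, MPI_CHAR, dst, 0, MPI_COMM_WORLD,
                             MPI_STATUS_IGNORE);
                }
                printf("%d -> %d: %.2f us\n", src, dst,
                       (MPI_Wtime() - t0) / reps / 2.0 * 1e6);
            } else if (rank == dst) {
                for (int i = 0; i < reps; i++) {
                    MPI_Recv(&byte, 1, MPI_CHAR, src, 0, MPI_COMM_WORLD,
                             MPI_STATUS_IGNORE);
                    MPI_Send(&byte, 1, MPI_CHAR, src, 0, MPI_COMM_WORLD);
                }
            }
        }
    }
    MPI_Finalize();
    return 0;
}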
A non-intrusive test you could try is to replace your MPI (mpich) with a
lower-latency one. Scali or MPI/Gamma, to name just two. These can lower
your latency down to 15 µs or so.
If this drastically ups your efficiency, you know where your bottleneck is.
More intrusive is to change your MPI
Mark Hahn wrote:
1. processor bound.
2. memory bound.
oprofile is the only thing I know of that will give you this distinction.
In practice, I don't think it is, given the usage characteristics I
mentioned in my previous mail.
3. interconnect bound.
with ethernet, this is obvious, since
"Peter St. John" <[EMAIL PROTECTED]> writes:
> I wish I knew more about the SAGE (machine) that hosts the SAGE (software)
> that was used for this,
From what I understand, the SAGE software wasn't used, just the SAGE
machine.
> but apparently washington.edu's web server can't
> handle the CNN
> -----Original Message-----
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Patrick Geoffray
> Sent: Thursday, March 22, 2007 4:43 AM
> To: beowulf@beowulf.org
> Subject: Re: [Beowulf] Performance characterising a HPC application
>
> Greg Lindahl wrote:
> > On Wed, Mar 21, 200
I have been coding on the Cell for just over a month now. Nothing serious,
just getting subroutines to run on a single SPU. This turns out to be very
easy. Now I am looking at parallel programming between the SPUs and this
seems much more difficult. The API, as far as I have read it, does no
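For what it's worth, the pattern most examples follow (assuming the libspe2 API from the IBM Cell SDK, and a hypothetical embedded SPU program called spu_kernel) is to give each SPE context its own PPU thread, since spe_context_run() blocks; a rough sketch of running the same SPU program on four SPUs in parallel:

/* Sketch only, assuming libspe2: one PPU pthread per SPE context, each
 * loading and running the same (hypothetical) embedded SPU program. */
#include <libspe2.h>
#include <pthread.h>
#include <stdio.h>

extern spe_program_handle_t spu_kernel;   /* placeholder embedded SPU binary */
#define NUM_SPES 4

static void *run_spe(void *arg)
{
    unsigned int entry = SPE_DEFAULT_ENTRY;
    spe_context_ptr_t ctx = spe_context_create(0, NULL);

    /* arg is handed to the SPU program as its argp parameter */
    if (spe_program_load(ctx, &spu_kernel) != 0 ||
        spe_context_run(ctx, &entry, 0, arg, NULL, NULL) < 0)
        perror("spe run failed");

    spe_context_destroy(ctx);
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_SPES];

    for (long i = 0; i < NUM_SPES; i++)
        pthread_create(&threads[i], NULL, run_spe, (void *)i);
    for (int i = 0; i < NUM_SPES; i++)
        pthread_join(threads[i], NULL);
    return 0;
}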
Hi,
Thanks for your reply, apologies for the delay in responding - St
Patrick's day celebrations temporarily got in the way :)
Michael Will wrote:
This is a very interesting topic.
First off, it's interesting how different the head and compute nodes are, and
that CPU utilisation is relatively lo
Deadline March 31st is approaching.
Apologies if you received multiple copies of this posting.
Please feel free to distribute it to those who might be interested.
-
Mark Hahn wrote:
> I don't follow why that indicts latency - multiple smaller packets don't
> each require a round trip, for instance. with TCP, I've only
> ever seen jumbo packets resulting in modestly higher bandwidth and often
> noticeably lower CPU overhead. TCP with 1500B packets will _certai
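As an aside, before comparing jumbo versus 1500B numbers it is worth confirming what the interface is actually running; ifconfig or ip link will tell you, or, from code, something like the sketch below (assuming Linux's SIOCGIFMTU ioctl; the interface name is just an example).

/* Sketch: query an interface's MTU via SIOCGIFMTU and flag jumbo frames. */
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    const char *dev = argc > 1 ? argv[1] : "eth0";   /* example name */
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct ifreq ifr;

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, dev, IFNAMSIZ - 1);
    if (ioctl(fd, SIOCGIFMTU, &ifr) == 0)
        printf("%s MTU = %d (%s)\n", dev, ifr.ifr_mtu,
               ifr.ifr_mtu > 1500 ? "jumbo frames" : "standard frames");
    else
        perror("SIOCGIFMTU");
    close(fd);
    return 0;
}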
Hi,
Mark Hahn wrote:
> well, if the node is compute-bound, nearly all time will be user-time.
> if interconnect-bound, much time will be system or idle. if system time
> dominates, then cpu or memory is too slow. if there is idle time, your
> bottleneck is probably latency (perhaps network, but p
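That rule of thumb is easy to check while a job is running: vmstat or top will show the split, or a trivial sampler like the rough sketch below, which reads the aggregate "cpu" line of Linux's /proc/stat and ignores the iowait/irq fields.

/* Rough sketch: sample /proc/stat twice and print the user/system/idle
 * split over a 10 second window, as a first cut at compute-bound versus
 * interconnect-bound. Ignores iowait and later fields. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static void read_cpu(unsigned long long v[4])   /* user, nice, system, idle */
{
    FILE *f = fopen("/proc/stat", "r");
    if (!f || fscanf(f, "cpu %llu %llu %llu %llu",
                     &v[0], &v[1], &v[2], &v[3]) != 4) {
        perror("/proc/stat");
        exit(1);
    }
    fclose(f);
}

int main(void)
{
    unsigned long long a[4], b[4];
    double d[4], total = 0.0;

    read_cpu(a);
    sleep(10);                  /* sample over a 10 second window */
    read_cpu(b);

    for (int i = 0; i < 4; i++) {
        d[i] = (double)(b[i] - a[i]);
        total += d[i];
    }
    printf("user %.1f%%  system %.1f%%  idle %.1f%%\n",
           100.0 * (d[0] + d[1]) / total,
           100.0 * d[2] / total,
           100.0 * d[3] / total);
    return 0;
}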
On Fri, 23 Mar 2007, David Mathog wrote:
experiment: consider a data center with 1000 separate 750VA
UPS systems, none of which have an EPO. Not very safe, is it?)
Especially if you rate it in terms of hours -- 750 kVA-hours (call it
600 kW-hours) is a fair bit of energy no matter how it is re
Jim Lux wrote:
> Relays are your friend.
> EPO switch has Normally Closed contacts (i.e. they open when you hit
> the switch)
> Power flows through contacts to small relay with NO and NC contacts
> (you can use DC or AC supplies)
> relay contacts go to UPSes
> Multiple relays can be paralleled.
>
On Fri, 2007-03-23 at 03:23 -0400, Patrick Geoffray wrote:
> It is unbelievable that so few people denounce it. It is clearly
> implemented only to cheat on a micro-benchmark. What's next? Checking
> that the buffer to send is identical to the previous one to avoid
> sending "redundant" message
I've been looking for an off-the-shelf EPO solution, but so far the only
things I've found are for huge data centers. Anybody know of a product
in my size range, to control roughly 10 EPO shut offs? There's a point
where adding more switch blocks becomes a bit awkward.
Relays are your frie
On Tue, 2007-03-20 at 10:34 +1100, Chris Samuel wrote:
> On Thu, 15 Mar 2007, Joe Landman wrote:
>
> > I seem to remember after my joyous year with Pascal in the early 80s that
> > they quickly caught the Modula fad (Niklaus Wirth could do no wrong),
> > dabbled a bit in other things, and came out
Tom Mitchell wrote:
> While thinking about grounding, look at it with care
> from the UPS event point of view too.
Yes, I've been reviewing this, and the more I look the more complicated
it gets. The one bright point is that it turns out that most of the
UPS units in the room can be turned off
> real codes that compute a minimum. However, an alltoall on many
> cores/nodes would exercise the same metric (many sends/recvs on the same
> NIC at the same time), but would be harder to cheat and be much more
> meaningful IMHO.
Could not agree more. We are certainly seeing that Alltoall
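For reference, the sort of measurement being talked about is only a few lines of MPI; the sketch below times MPI_Alltoall with an arbitrary 1 KB block per destination, which keeps many sends and receives in flight on every NIC at once.

/* Sketch: average time per MPI_Alltoall with a small block per destination. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
    int rank, size, reps = 100, block = 1024;   /* 1 KB per destination */
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    char *sendbuf = malloc((size_t)size * block);
    char *recvbuf = malloc((size_t)size * block);
    memset(sendbuf, rank, (size_t)size * block);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < reps; i++)
        MPI_Alltoall(sendbuf, block, MPI_CHAR,
                     recvbuf, block, MPI_CHAR, MPI_COMM_WORLD);
    double t = (MPI_Wtime() - t0) / reps;

    if (rank == 0)
        printf("%d ranks, %d B blocks: %.1f us per alltoall\n",
               size, block, t * 1e6);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}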
At 05:13 AM 3/23/2007, Ashley Pittman wrote:
On Tue, 2007-03-20 at 10:50 +1100, Chris Samuel wrote:
> On Sat, 10 Mar 2007, Geoff Jacobs wrote:
>
> > Looks like people are seeing speeds of roughly 5MB/s up and 3MB/s down
> > with NFS. FTP is faster.
>
> I wonder if that can be improved if you repl
On Tue, 2007-03-20 at 10:50 +1100, Chris Samuel wrote:
> On Sat, 10 Mar 2007, Geoff Jacobs wrote:
>
> > Looks like people are seeing speeds of roughly 5MB/s up and 3MB/s down
> > with NFS. FTP is faster.
>
> I wonder if that can be improved if you replace the firmware with another
> Linux distro