> I call this Pretty High Performance Computing (PHPC).
Or high productivity computing?
2%? Come on.
How do you plan to lose only 2% if you make heavy use of MPI?
Let's be realistic: for matrix calculations HPC can be relatively
efficient. But as soon as we discuss algorithms that tend to be
sequential, they are rather hard to parallelize on an HPC box.
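(Purely as an illustration of why even a small serial or communication
fraction bites at scale, and not anyone's measurement from this thread:
Amdahl's law gives the speedup on N processors as

    S(N) = 1 / (s + (1 - s)/N)

where s is the non-parallelizable fraction. With s = 0.02 the speedup can
never exceed 1/0.02 = 50 no matter how many nodes you add, and already at
N = 64 you only get S(64) = 1/(0.02 + 0.98/64) ~= 28.)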
On Tue, 23 Sep 2008, Jon Forrest wrote:
Given the recent discussion of whether running
multiple services and other such things affects
the running of a cluster, I'd like to propose
a new classification of computing.
I call this Pretty High Performance Computing (PHPC).
This is a style of computing where you sacrifice ...
that, perhaps serendipitously, these service level delays due to nodes
not being completely optimized for cluster use don't result in a
significant reduction of computation speed until the size of the
cluster is about at the point where one would want a full-time admin
just to run the cluster.
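A quick way to see those service-level delays on a single node is a
fixed-work-quantum probe: time the same small busy loop over and over and
look at the spread. The sketch below is only an illustration (file name,
constants and the choice of clock are mine, not something from this
thread), but a max quantum far above the minimum on a supposedly idle
node is what daemons and stray interrupts look like.

    /* fwq.c -- minimal fixed-work-quantum probe for OS "noise".
     * Sketch only; build with: cc -O2 -std=c99 fwq.c -o fwq -lrt
     */
    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>

    #define WORK    100000   /* fixed amount of work per quantum */
    #define SAMPLES 10000    /* number of quanta to time */

    static double now_us(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1e6 + ts.tv_nsec / 1e3;
    }

    int main(void)
    {
        volatile uint64_t sink = 0;
        double min = 1e30, max = 0.0, total = 0.0;

        for (int s = 0; s < SAMPLES; s++) {
            double t0 = now_us();
            for (uint64_t i = 0; i < WORK; i++)
                sink += i;                     /* the fixed work quantum */
            double dt = now_us() - t0;
            if (dt < min) min = dt;
            if (dt > max) max = dt;
            total += dt;
        }
        printf("quantum us: min %.2f  mean %.2f  max %.2f\n",
               min, total / SAMPLES, max);
        /* max >> min on an otherwise idle node points at services/interrupts */
        return 0;
    }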
On Wed, Sep 24, 2008 at 02:45:52PM, [EMAIL PROTECTED] wrote:
> A point of interest here is that reducing these service-related
> interrupts was an important element in improving the HPL efficiency
> of Windows HPC Server 2008 (over 2003) from sub 70% levels to closer
> to 80%.
If you have l
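(To put those percentages in perspective, with made-up numbers: HPL
efficiency is sustained Rmax over theoretical Rpeak, so on a cluster with,
say, Rpeak = 10 TFLOP/s,

    70% efficiency  ->  Rmax ~ 7.0 TFLOP/s
    80% efficiency  ->  Rmax ~ 8.0 TFLOP/s

i.e. roughly 14% more delivered FLOP/s from exactly the same hardware,
just from taming service-related interrupts.)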
On Wed, Sep 24, 2008 at 01:35:10PM -0400, Robert G. Brown wrote:
> So the tradeoff is really a familiar one. Code/Data efficiency vs
> Code/Data readability and robustness.
The depressing part about this is that XML proponents are unusually
blind to the unreadability, unportability, and lack of
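As a hypothetical illustration of the tradeoff Robert describes -- the
record layout, field names and values below are invented for the example,
not taken from any real cluster tool -- here is the same node-status
record as XML and as a terse whitespace-delimited line:

    /* xml_vs_terse.c -- illustrates verbosity vs. self-description.
     * Sketch only; build with: cc -std=c99 xml_vs_terse.c -o xml_vs_terse
     */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *xml =
            "<node name=\"n042\">"
              "<load>0.12</load>"
              "<mem_free_mb>1024</mem_free_mb>"
              "<state>up</state>"
            "</node>";
        const char *terse = "n042 0.12 1024 up";

        printf("XML record:   %3zu bytes: %s\n", strlen(xml), xml);
        printf("terse record: %3zu bytes: %s\n", strlen(terse), terse);
        /* The XML form is self-describing and survives field reordering;
         * the terse form is smaller and trivial to parse, but its meaning
         * lives entirely in out-of-band documentation. */
        return 0;
    }

Neither form is "right"; the usual question is whether the data will
outlive the code that reads it.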
On Wed, 24 Sep 2008, Robert G. Brown wrote:
> On Tue, 23 Sep 2008, Donald Becker wrote:
>
> >> XML is (IMO) good, not bad.
> >
> > I have so much to write on this topic, I'll take the first pot shot at RGB
> > ;-)
> >
> > XML is evil. Well, evil for this.
>
> Oh, it's fine. I've gone the rounds
On Tue, 23 Sep 2008, Eric Thibodeau wrote:
> Ashley Pittman wrote:
> > On Mon, 2008-09-22 at 15:44 -0400, Eric Thibodeau wrote:
> >> Ashley Pittman wrote:
> >>> On Mon, 2008-09-22 at 14:56 -0400, Eric Thibodeau wrote:
> >>>
> >>> If it were up to me I'd turn *everything* possible off except
On Tue, 23 Sep 2008, Donald Becker wrote:
XML is (IMO) good, not bad.
I have so much to write on this topic, I'll take the first pot shot at RGB
;-)
XML is evil. Well, evil for this.
Oh, it's fine. I've gone the rounds on this one with Linus Himself (who
agrees with you, BTW:-).
However,
Gerry,
As a former installer/patsy at one of those nameless clumsy hardware
vendors, I thought this *may* be useful for you:
1. We specified "No OS" in the purchase so that we could install CentOS
as our base. We got a set of systems with a stub OS, and an EULA for
the diagnostics embedd
bringing up the old pun: Semper ubi, sub ubi.
James Lux, P.E.
Task Manager, SOMD Software Defined Radios
Flight Communications Systems Section
Jet Propulsion Laboratory
4800 Oak Grove Drive, Mail Stop 161-213
Pasadena, CA, 91109
+1(818)354-2075 phone
+1(818)393-6875 fax
2008/9/24 Ellis Wilson <[EMAIL PROTECTED]>
>
> This assumes my understanding of middleware is correct in that it is a
> package or entire system that simplifies things by being somewhat
> blackboxed and ready to go. Anything canned like tuna is bound to
> contain too much salt.
>
> I believe that
www.gridswatch.com | Training
http://www.gridswatch.com/index.php?option=com_content&task=view&id=25&Itemid=16
We have opened up registration for the Intermediate SGE class in October
and an Introduction to Beowulf Clusters class in January (2009). If you
have any questions, please e-mail or
-- Original message --
From: Patrick Geoffray [EMAIL PROTECTED]
> However, it is only important for large machines with tightly coupled
> codes. For the majority of the cases, it's just being anal.
A point of interest here is that reducing these service-related interru
Prentice Bisbal wrote:
Oops. e-mailed to the wrong address. The cat's out of the bag now! No
big deal. I was 50/50 about CC-ing the list, anyway. Just remove the
phrase "off-list" in the first sentence, and that last bit about not
posting to the list because...
Great. I'll never get a job that
Patrick Geoffray <[EMAIL PROTECTED]> writes:
> Perry E. Metzger wrote:
>>> You realize that most big HPC systems are using interconnects that
>>> don't generate many or any interrupts, right?
>>
>> Of course. Usually one even uses interrupt pacing/mitigation even in
>> gig ethernet on a modern mac
Lawrence Stewart <[EMAIL PROTECTED]> writes:
> I think Greg is talking about HPC interconnects that do OS bypass, and
> Perry is talking about the kernel IP stack. Different things.
True enough. Architectures where the data gets passed to/from userland
directly have different issues. However, yo
>
> Given the recent discussion of whether running
> multiple services and other such things affects
> the running of a cluster, I'd like to propose
> a new classification of computing.
>
> I call this Pretty High Performance Computing (PHPC).
> This is a style of computing where you sacrifice
> ab
Middleware is something that goes in between, hence the name middleware.
In the case of HPC, I would call ROCKS or Platform OCS middleware. These
software packages go in between the administrator and the actual cluster
software/OS configuration to make things easier to configure. They are
in the mi
On Tue, 2008-09-23 at 22:16 -0400, Lawrence Stewart wrote:
> I'm starting work on a shmem implementation for the SiCortex systems.
>
> Is anyone aware of available test suites or API benchmark suites for
> shmem? I am thinking of the equivalent of the Intel MPI tests or
> Intel MPI Benchmarks, aw
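For anyone wanting a starting point before a real suite turns up, a
minimal sanity test is easy to write by hand. The sketch below uses the
classic SGI-style SHMEM calls (start_pes, _my_pe, _num_pes, shmalloc);
the header name and call spellings may differ on a given implementation,
so treat it as an outline rather than SiCortex-specific code:

    /* shmem_put_test.c -- each PE puts its rank into its right-hand
     * neighbour's symmetric buffer, then everyone verifies what arrived.
     * Sketch only; link against the implementation's SHMEM library.
     */
    #include <stdio.h>
    #include <shmem.h>            /* <mpp/shmem.h> on some systems */

    int main(void)
    {
        start_pes(0);
        int me   = _my_pe();
        int npes = _num_pes();

        long *target = (long *) shmalloc(sizeof(long));  /* symmetric */
        long  source = (long) me;

        *target = -1;
        shmem_barrier_all();

        /* write my rank into the next PE's buffer */
        shmem_long_put(target, &source, 1, (me + 1) % npes);
        shmem_barrier_all();

        long expected = (long) ((me + npes - 1) % npes);
        printf("PE %d: %s (got %ld, expected %ld)\n",
               me, *target == expected ? "ok" : "FAIL", *target, expected);

        shfree(target);
        return 0;
    }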