Maybe "paraview"? Scientifc visualization kit, working in parallel.
"Ellis H. Wilson III" skrev:
>Jim,
>
>Not sure if this "demo application" reaches beyond your intent, or if it
>really falls into the demo category, but Graph500 has a solid benchmark
>(by the same name) for graph-related prob
Jim,
Not sure if this "demo application" reaches beyond your intent, or if it
really falls into the demo category, but Graph500 has a solid benchmark
(by the same name) for graph-related problems, and there is an MPI
version of it available in the source. You can scale the size of the
problem
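The kernel that Graph500 times is a breadth-first search over a large random graph. As a rough illustration only (not the reference code), a serial level-synchronous BFS might look like this; in the MPI version each rank owns a slice of the vertices and exchanges frontier updates instead of sharing the parent array directly:

```python
from collections import defaultdict

def bfs_levels(edges, root, n):
    """Level-synchronous BFS over an undirected edge list.

    Each iteration expands the current frontier one hop; the
    returned parent array is what Graph500 validates.
    """
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    parent = [-1] * n
    parent[root] = root
    frontier = [root]
    while frontier:
        next_frontier = []
        for u in frontier:
            for v in adj[u]:
                if parent[v] == -1:   # first visit claims the vertex
                    parent[v] = u
                    next_frontier.append(v)
        frontier = next_frontier
    return parent
```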
Hi Jim,
What about something even simpler, nevertheless challenging:
matrix multiplication?
Slabs or blocks would be a major design choice.
Superlinear scaling could be observed, showing a major
performance advantage of parallelization on a cluster.
Best regards,
Max
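The slab/block design choice Max mentions can be sketched serially; the triple block loop below stands in for the tiles that each rank of a cluster version would own (a minimal illustration, not an MPI implementation):

```python
def blocked_matmul(A, B, bs):
    """Multiply square matrices (lists of lists) block by block.

    bs is the block edge length; blocking is what makes the
    distributed (and cache-friendly) decompositions possible.
    """
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, bs):          # block row of C
        for jj in range(0, n, bs):      # block column of C
            for kk in range(0, n, bs):  # inner block dimension
                for i in range(ii, min(ii + bs, n)):
                    for j in range(jj, min(jj + bs, n)):
                        s = C[i][j]
                        for k in range(kk, min(kk + bs, n)):
                            s += A[i][k] * B[k][j]
                        C[i][j] = s
    return C
```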
__
On Wed, 21 Aug 2013, Douglas Eadline wrote:
>
>> Sorts in general.. Good idea.
>>
>> Yes, we'll do a distributed computing bubble sort.
>>
>> Interesting, though.. There are probably simple algorithms which are
>> efficient in a single processor environment, but become egregiously
>> inefficient w
Yeah, there was a POVray with parallelism that I've used. And a variety of
"video wall" kind of things.
What I was looking for was something that you could give as an assignment to a
student "go code this in parallel, using this MPI-lite style library, and
measure the performance". Rendering
Hi Peter,
Not hardly:
R. R. Coveyou (Knuth "Seminumerical Algorithms" 1981, 26-27):
u[0] % 4 == 2
u[k+1] = u[k]*(u[k] + 1) % (1 << e)
Max,
Remarkable, thanks! I surely agree that in binary, doubling is fast. So you
sort of bypass computing low powers, with an ancient method (?!) of
computing high powers efficiently. Very cool. So, everything is
parallelizable :-)
Peter
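The "doubling" being referred to is exponentiation by squaring, which computes high powers in a logarithmic number of multiplies; a minimal sketch:

```python
def powmod(a, k, m):
    """Compute a**k mod m by repeated squaring, O(log k) multiplies.

    Squaring jumps from a**j to a**(2*j); an extra multiply folds in
    one factor of a whenever the current exponent bit is set.
    """
    result = 1 % m
    base = a % m
    while k:
        if k & 1:
            result = result * base % m
        base = base * base % m
        k >>= 1
    return result
```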
On Wed, Aug 21, 2013 at 3:22 PM, Max R. Dechantsreiter <
m.
Hi Peter,
> That's interesting, where can I read about "giant-stepping the generator"?
> The wiki article
> http://en.wikipedia.org/wiki/Linear_congruential_generator doesn't
> mention distributed processing.
The term "giant-stepping" may not be in general use
The idea is to find an efficient
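The explanation is cut off here, but one common construction along these lines, for a plain LCG, is to compose the affine map x -> (a*x + c) mod m with itself by squaring, so each rank can jump its seed ahead by rank * block_size steps without generating the intermediate values. A sketch (parameter names are illustrative):

```python
def lcg_jump(a, c, m, x0, k):
    """Advance the LCG x -> (a*x + c) % m by k steps in O(log k).

    Composing x -> a*x + c with itself gives x -> a*a*x + (a*c + c),
    so the step map can be squared just like a number.
    """
    A, C = 1, 0          # accumulated map, starts as the identity
    ak, ck = a, c        # the map applied 2**i times
    while k:
        if k & 1:
            A, C = A * ak % m, (ak * C + ck) % m
        ak, ck = ak * ak % m, (ak * ck + ck) % m
        k >>= 1
    return (A * x0 + C) % m
```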
Max,
That's interesting, where can I read about "giant-stepping the generator"?
The wiki article
http://en.wikipedia.org/wiki/Linear_congruential_generator doesn't
mention distributed processing.
Thanks,
Peter
On Wed, Aug 21, 2013 at 2:44 PM, Max R. Dechantsreiter <
m...@performancejones.com> wrote:
Hi Peter,
> What about the old random number generator: take a 16 bit seed, square it,
> take the middle 16 bits, and repeat. They'd want a large number in order
> (so you can repeat an experiment, or a run of a model, with the same
> "random" numbers), and it's easy to computer sequentially; but
Hi Jim,
> Interesting, though.. There are probably simple algorithms which are
> efficient in a single processor environment, but become egregiously
> inefficient when distributed.
Try anything requiring memory synchronization (such as
atomic memory updates).
Cheers,
Max
___
Hi Douglas,
Yes, "IS" - also "GUPS" is closely related (and easier
to code, aside from its formal "lookahead" constraints).
But I recommend crafting one's own, in order to have
control over the key distribution: teach a lesson in
load-balancing!
Regards,
Max
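The key-distribution point can be shown with a toy helper (hypothetical, not from NPB IS or GUPS): split keys into equal-width ranges as a naive parallel integer sort would, and count how many land on each "rank". With a skewed distribution one bucket gets most of the work, which is exactly the load-balancing lesson:

```python
def bucket_counts(keys, nbuckets, key_max):
    """Count keys per equal-width bucket, one bucket per rank.

    A uniform key distribution balances the counts; a skewed one
    overloads a single bucket.
    """
    width = (key_max + nbuckets - 1) // nbuckets
    counts = [0] * nbuckets
    for k in keys:
        counts[min(k // width, nbuckets - 1)] += 1
    return counts
```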
---
On Wed, 21 Aug 2013, Douglas
On Aug 21, 2013, at 12:57 PM, John Hearns wrote:
>
>
> (*) P.S. Whatever happened to my 'first love' - emitter-coupled logic?
> I spent many happy hours as a graduate student (**) in learning about FASTBUS
> for (at the time)
> blazingly fast DAQ - because ECL goes faster.
> I guess no-one can
http://www.theregister.co.uk/2013/08/21/unsung_heroes_dr_chris_shelton/
worth sticking with this article till page 4 - or indeed just advancing
straight to page 4.
The concepts behind that PgC7000 processor were pretty revolutionary!
An Ultra-RISC core, overclocking itself as much as it could, deco
Regarding
"...There are probably simple algorithms which are
efficient in a single processor environment, but become egregiously
inefficient when distributed..."
What about the old random number generator: take a 16 bit seed, square it,
take the middle 16 bits, and repeat. They'd want a large numbe
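A sketch of the generator as described - square a 16-bit state, keep the middle 16 bits of the 32-bit square, repeat. (The exact bit-slice convention varies between descriptions; bits 8-23 are used here.) Each value depends on the previous one, which is what makes the naive sequence inherently serial:

```python
def middle_square(seed, count):
    """von Neumann-style middle-square generator, 16-bit state."""
    out = []
    x = seed & 0xFFFF
    for _ in range(count):
        x = ((x * x) >> 8) & 0xFFFF   # middle 16 of the 32-bit square
        out.append(x)
    return out
```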
> Sorts in general.. Good idea.
>
> Yes, we'll do a distributed computing bubble sort.
>
> Interesting, though.. There are probably simple algorithms which are
> efficient in a single processor environment, but become egregiously
> inefficient when distributed.
e.g. The NAS parallel suite has an
Sorts in general.. Good idea.
Yes, we'll do a distributed computing bubble sort.
Interesting, though.. There are probably simple algorithms which are
efficient in a single processor environment, but become egregiously
inefficient when distributed.
Jim
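Bubble sort's standard parallel analogue is odd-even transposition sort: within a phase every compare-exchange is independent, so n processors can run a phase in one step - yet n phases are still needed, so it stays O(n) parallel time, far behind a good parallel sort. A serial sketch of the phase structure:

```python
def odd_even_sort(a):
    """Odd-even transposition sort; n phases of disjoint swaps."""
    a = list(a)
    n = len(a)
    for phase in range(n):
        start = phase % 2           # alternate even/odd pairs
        for i in range(start, n - 1, 2):
            if a[i] > a[i + 1]:     # each pair is independent
                a[i], a[i + 1] = a[i + 1], a[i]
    return a
```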
On 8/20/13 12:11 PM, "Max R. Dechantsreiter" wrote: