I'm really enjoying the PDF at
http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-183.pdf,
particularly the part about optimizing networks for applications. But the
nice general-interest takeaway is the list of Old Conventional Wisdom
paired with New; that's fun. Thanks.
Peter
Old CW: Power is free, transistors are expensive.
New CW: (from the pdf) The transistors are free, but the power is expensive
because you can't afford to power all the transistors on the chip.
Synthesis: Power and Transistors are free, but Density is expensive.
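The quip above can be made concrete with a back-of-envelope calculation. All the numbers below are illustrative assumptions of mine, not figures from the Berkeley paper; the point is just that once per-transistor power stops shrinking as fast as transistor count grows, the power budget caps how much of the chip can switch at once:

```python
# Illustrative dark-silicon arithmetic (all numbers are assumed, not from the paper).
# If the chip's power envelope is fixed but transistor count keeps doubling,
# the fraction of transistors you can afford to switch at once shrinks.

power_budget_w = 100.0          # assumed fixed chip power envelope (watts)
transistors = 2e9               # assumed transistor count on the die
power_per_transistor_w = 1e-7   # assumed switching power per active transistor

max_active = power_budget_w / power_per_transistor_w
fraction_powered = min(1.0, max_active / transistors)
print(f"Fraction of transistors you can power at once: {fraction_powered:.0%}")
```

With these made-up numbers only half the die can be active at a time; double the transistor count again at the same budget and it drops to a quarter.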


On 3/15/07, Thomas H Dr Pierce <[EMAIL PROTECTED]> wrote:


Dear Beowulf ML,

Here is an interesting discussion on the methods and metrics that could
apply to multicore chips and clusters.

I have not seen this discussed on the list, so it may be new to some.

Here is the link to the overview wiki
http://view.eecs.berkeley.edu/wiki/Main_Page

And as motivation to follow the link, a quote from the wiki:

"We believe that much can be learned by examining the success of
parallelism at the extremes of the computing spectrum, namely embedded
computing and high performance computing. This led us to frame the parallel
landscape with seven questions under the following assumptions:

   - The target should be 1000s of cores per chip, as this hardware is
   the most efficient in MIPS per watt, MIPS per area of silicon, and MIPS per
   development dollar.
   - Instead of traditional benchmarks, use 7+
   "dwarfs" (http://view.eecs.berkeley.edu/wiki/Dwarfs) to design and
   evaluate parallel programming models and architectures. (A dwarf is an
   algorithmic method that captures a pattern of computation and
   communication.)
   - "Autotuners" should play a larger role than conventional compilers
   in translating parallel programs.
   - To maximize programmer productivity, programming models should be
   independent of the number of processors.
   - To maximize application efficiency, programming models should
   support a wide range of data types and successful models of parallelism:
   data-level parallelism, independent task parallelism, and instruction-level
   parallelism. "
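The "autotuner" bullet is easy to illustrate: rather than the compiler committing to one code shape ahead of time, an autotuner times several candidate implementations of the same kernel on the actual machine and keeps the fastest. A minimal sketch, with hypothetical candidate variants standing in for real tuned code:

```python
import timeit

# Minimal autotuner sketch: empirically pick the fastest of several
# equivalent implementations of one kernel (here, a sum of squares).
# The candidate functions are hypothetical stand-ins for tuned variants.

def sum_squares_loop(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

def sum_squares_genexpr(n):
    return sum(i * i for i in range(n))

def sum_squares_closed_form(n):
    m = n - 1
    return m * (m + 1) * (2 * m + 1) // 6

CANDIDATES = [sum_squares_loop, sum_squares_genexpr, sum_squares_closed_form]

def autotune(n, repeats=5):
    """Time each candidate on this machine and return the fastest one."""
    reference = CANDIDATES[0](n)
    best, best_time = None, float("inf")
    for fn in CANDIDATES:
        assert fn(n) == reference  # every variant must compute the same answer
        t = min(timeit.repeat(lambda: fn(n), number=10, repeat=repeats))
        if t < best_time:
            best, best_time = fn, t
    return best

winner = autotune(10_000)
print("fastest variant:", winner.__name__)
```

Real autotuners (ATLAS, FFTW) search much larger spaces of block sizes and loop orders, but the principle is the same: measure on the target machine instead of guessing.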



And the detailed white paper that started this (Dec 2006):
http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-183.html

The "Seven Questions" frame what seem to be the standard discussions on
clusters, parallel programming, and best practices.  Lots of fun for
everyone!

Applications

1. What are the applications?
2. What are common kernels of the applications?

Architecture and Hardware

3. What are the HW building blocks?
4. How to connect them?

Programming Model and Systems Software

5. How to describe applications and kernels?
6. How to program the hardware?

Evaluation

7. How to measure success?

------
Sincerely,

  Tom Pierce



_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org
To change your subscription (digest mode or unsubscribe) visit
http://www.beowulf.org/mailman/listinfo/beowulf
