Prentice Bisbal wrote:
Micha Feigin wrote:
I'm trying to learn cluster/parallel programming properly. I've got some
information on MPI, although I'm not sure it's from the best books. I was
wondering if you have some book recommendations regarding the more specialized
things, especially the CPU vs. GPU parallelization issue (or, as far as I
understood it, data-intensive vs. memory-intensive programming and how to
convert programs from one to the other), proper program/cluster topology
considerations, and such.
A similar question was asked a couple of weeks ago:
http://www.beowulf.org/archive/2008-December/024078.html
http://www.beowulf.org/archive/2008-December/024079.html
GPU parallelization is relatively new, so I doubt there are any complete
books on the topic. There have been plenty of talks and research
in the area, so Google is your best bet for that. NVidia does have
plenty of documentation related to CUDA on their website:
http://www.nvidia.com/object/cuda_home.html
Hello Micha and list
Prentice already referred to
Peter Pacheco's book "Parallel Programming with MPI":
http://www.cs.usfca.edu/mpi/
and RGB to Ian Foster's more general
book "Designing and Building Parallel Programs":
http://www-unix.mcs.anl.gov/dbpp/
GPU programming lacks a common standard API,
the literature is small and vendor-dependent,
and Prentice already pointed you to the
current leader, NVidia, and its CUDA toolkit:
http://www.nvidia.com/object/cuda_home.html
Two other good MPI books (with some typos) are:
William Gropp et al., Using MPI:
http://www-unix.mcs.anl.gov/mpi/usingmpi/
and
William Gropp et al., Using MPI-2:
http://www-unix.mcs.anl.gov/mpi/usingmpi2/index.html
You can find the detailed syntax and semantics of the MPI commands in:
Marc Snir et al.
MPI: The Complete Reference (Vol. 1), 2nd Edition:
Volume 1 - The MPI Core
http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=5045
and
William Gropp et al.
MPI: The Complete Reference (Vol. 2)
Volume 2 - The MPI-2 Extensions
http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=4579
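
If you want a quick taste of MPI before you get hold of any of the
books, here is a minimal "hello world" sketch in C (my own, not taken
from the references above). Any MPI implementation should build it
with its mpicc wrapper and run it with mpirun/mpiexec:

/* Minimal MPI hello world: every process reports its rank. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                 /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                         /* shut down MPI */
    return 0;
}

The books above start from essentially this program and work up to
point-to-point and collective communication.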
Lawrence Livermore National Lab has good tutorials
and other references (MPI and OpenMP):
https://computing.llnl.gov/mpi/documentation.html
https://computing.llnl.gov/tutorials/mpi/
For OpenMP, a good source
is Rohit Chandra et al., "Parallel Programming in OpenMP":
http://www.amazon.com/Parallel-Programming-OpenMP-Rohit-Chandra/dp/1558606718
There are also two slide tutorials on OpenMP from Ohio Supercomputer Center:
http://www.osc.edu/supercomputing/training/openmp/
http://www.osc.edu/supercomputing/training/openmp/openmp_0704.pdf
http://www.osc.edu/supercomputing/training/openmp/openmp_0311.pdf
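
Just to show how little code OpenMP needs, here is a minimal sketch in
C (again my own, not from the book or the slides) that fills and sums
an array with a parallel loop. Compile with your compiler's OpenMP
flag, e.g. gcc -fopenmp:

/* Minimal OpenMP example: parallel sum of an array. */
#include <stdio.h>
#include <omp.h>

int main(void)
{
    enum { N = 1000000 };
    static double a[N];
    double sum = 0.0;
    int i;

    /* The reduction clause gives each thread a private partial sum
       and combines them safely at the end of the loop. */
    #pragma omp parallel for reduction(+:sum)
    for (i = 0; i < N; i++) {
        a[i] = 0.5 * i;
        sum += a[i];
    }

    printf("sum = %f (up to %d threads)\n", sum, omp_get_max_threads());
    return 0;
}

The shared-memory model is much less intrusive than MPI's message
passing, which is why OpenMP is often the first step people take on a
single multi-core node.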
I collected some parallel programming resources for our users here:
http://fats-raid.ldeo.columbia.edu/pages/parallel_programming.html
I hope this helps,
Gus Correa
---------------------------------------------------------------------
Gustavo Correa, PhD - Email: g...@ldeo.columbia.edu
Lamont-Doherty Earth Observatory - Columbia University
P.O. Box 1000 [61 Route 9W] - Palisades, NY, 10964-8000 - USA
---------------------------------------------------------------------
_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org
To change your subscription (digest mode or unsubscribe) visit
http://www.beowulf.org/mailman/listinfo/beowulf