Sorry if this is inappropriate here. I'm finally growing from clusters
of single CPUs to a machine with multiple CPUs, which means that I need
to start taking note of NUMA issues. I'm looking for information on how
to achieve that with MPI under Linux. I'm currently using MPICH2, but I
don't mind s
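For the NUMA question above, a minimal sketch of the usual starting points, assuming a reasonably recent MPICH with the Hydra launcher and the `numactl` package installed (application name is hypothetical):

```shell
# Inspect the machine's NUMA layout first:
numactl --hardware

# Recent MPICH (Hydra launcher) can pin each rank to a NUMA domain,
# so a rank's memory allocations stay local to its socket:
mpiexec -n 8 -bind-to numa ./my_app

# For a single process, numactl can bind both CPU and memory by hand:
numactl --cpunodebind=0 --membind=0 ./my_app
```

Older MPICH2 releases expose binding under different flag names (or not at all), so check `mpiexec --help` on the installed version before relying on these options.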
They are just passing the one-teraflop mark, but they are going with it into the GPU
market, only without a GPU, i.e. they're competing with the Tesla GPU here. The
Tesla is admittedly also about 1 TFlops, but the consumer market already
passed the 2 TFlop mark about a year ago, and the next gene
A bit off topic, so sorry, but it looks like a place where people who learned
these things at some point hand out ...
I've been asked to write a course on the subject of optimizing code. As it's
hard to translate knowledge into an actual course, I was wondering if anyone
here has references to eit
On 01/09/2010 15:18, Fumie Costen wrote:
> Dear All, I believe some of you have come across the phrase "GPU
> computation".
> I have access to a GPU cluster remotely, but the speed of data transfer
> between the GPUs seems to be pretty slow from the specification, and I
> feel this is going to be
On Mon, 8 Mar 2010 10:39:08 -0500
Glen Beane wrote:
> On 3/8/10 10:14 AM, "Micha Feigin" wrote:
>
> I have a small local cluster in our lab that I'm trying to set up with minimum
> hassle to support both CPU and GPU processing, where only some of the
I have a small local cluster in our lab that I'm trying to set up with minimum
hassle to support both CPU and GPU processing, where only some of the nodes have
a GPU, and those have only two GPUs for four cores.
It is currently set up using Torque from Ubuntu (2.3.6) with the Torque-supplied
scheduler
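For the mixed CPU/GPU setup described above, one low-effort sketch with the stock Torque scheduler is node properties; the node names, CPU counts, and script names below are hypothetical, and newer Torque releases than 2.3.6 add native `gpus=N` syntax instead:

```shell
# In TORQUE_HOME/server_priv/nodes, tag the GPU-equipped nodes with a
# custom property (here "gpu"):
#   node01 np=4 gpu
#   node02 np=4 gpu
#   node03 np=4

# GPU jobs then request the property; CPU-only jobs simply omit it:
qsub -l nodes=1:gpu:ppn=2 gpu_job.sh
qsub -l nodes=1:ppn=4 cpu_job.sh
```

This keeps GPU jobs off the CPU-only nodes, but Torque itself will not arbitrate which of the two GPUs a job gets; that usually needs a prologue script or an in-job convention.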
On Mon, 15 Feb 2010 20:41:08 -0500
Joe Landman wrote:
> Rahul Nabar wrote:
> > This was the response from Dell, I especially like the analogy:
> >
> > [snip]
> >> There are a number of benefits for using Dell qualified drives in
> >> particular ensuring a ***positive experience*** and protecting
On 01/02/2010 22:54, richard.wa...@comcast.net wrote:
Jon Forrest wrote:
>On 2/1/2010 7:24 AM, richard.wa...@comcast.net wrote:
>
>> Coming in on this late, but to reduce this workload there is PGI's version
>> 10.0 compiler suite, which supports accelerator compiler directives. This
>> w
On 01/02/2010 00:06, Mark Hahn wrote:
Be very, very sure that consumer GeForces can go in 1U boxes. It's not so
much the space; I'm skeptical of their ability to handle the thermal
issues. They are just not designed for this kind of work.
I've had to go to 2U and eventually to larg
On Sun, 31 Jan 2010 21:15:12 +0300
"C. Bergström" wrote:
> Micha Feigin wrote:
> > On Sat, 30 Jan 2010 17:30:31 -0800
> > Jon Forrest wrote:
> >
[snip]
> > People are starting to work with OpenCL but I don't t
On Sat, 30 Jan 2010 17:30:31 -0800
Jon Forrest wrote:
> On 1/30/2010 2:52 PM, "C. Bergström" wrote:
>
> > Hi Jon,
> >
> > I must emphasize what David Mathog said about the importance of the gpu
> > programming model.
>
> I don't doubt this at all. Fortunately, we have lots
> of very smart peopl
On Sat, 30 Jan 2010 10:24:09 -0800
Jon Forrest wrote:
> On 1/30/2010 4:31 AM, Micha Feigin wrote:
>
> > It is recommended, BTW, that you have at least the same amount of system
> > memory as GPU memory, so with Tesla it is 4 GB per GPU.
>
> I'm not going
On Thu, 28 Jan 2010 09:38:14 -0800
Jon Forrest wrote:
> I'm about to spend ~$20K on a new cluster
> that will be a proof-of-concept for doing
> GPU-based computing in one of the research
> groups here.
>
> A GPU cluster is different from a traditional
> HPC cluster in several ways:
>
> 1) The C
On Mon, 31 Aug 2009 12:28:43 -0400
Gus Correa wrote:
> Hi Amjad
>
> 1. Beware of hardware requirements, especially on your existing
> computers, which may or may not fit a CUDA-ready GPU.
> Otherwise you may end up with a useless lemon.
>
> A) Not all NVidia graphic cards are CUDA-ready.
> NVidi
On Mon, 31 Aug 2009 16:13:55 +0200
Jonathan Aquilina wrote:
> >One thing that's not mentioned out loud by NVIDIA (I have read only in
> >CUDA programming manual) is that if the video system needs more memory
> >that's not available(say you change resolution, while you're waiting
> >for your proce
On Sun, 30 Aug 2009 04:35:30 +0500
amjad ali wrote:
> Hello all, especially Gil Brandao
>
> Actually I want to start CUDA programming for my |C. I have 2 options to do this:
> 1) Buy a new PC that will have 1 or 2 CPUs and 2 or 4 GPUs.
> 2) Add 1 GPU to each of the four nodes of my PC-cluster.
>
> Wh
I was wondering, for a Core 2 machine with two DDR2 channels, what would give
the best performance in terms of the number of memory sticks.
As far as I know, with Core i7 it is best to put one memory stick per channel,
but as the Core 2 still connects via the northbridge I was wondering if
things a
On Mon, 20 Apr 2009 15:46:34 -0500
Rahul Nabar wrote:
> I am trying to simulate worst-case behavior of a job on our cluster,
> where one job would get 8 CPUs, but one each from a different
> compute server. Each server has 8 CPUs.
>
> How can I do that? Server names: node01 through node23. Schedule
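For the spread-across-servers question above, a minimal sketch assuming a PBS/Torque-style scheduler (the script name is hypothetical): requesting many nodes with one processor per node forces the ranks onto distinct compute servers.

```shell
# Inside the job script, ask for 8 nodes with 1 processor slot each:
#PBS -l nodes=8:ppn=1

# Or equivalently on the command line:
qsub -l nodes=8:ppn=1 job.sh

# If the scheduler still packs ranks, an explicit host list pins them:
qsub -l nodes=node01+node02+node03+node04+node05+node06+node07+node08 job.sh
```

Whether `nodes=8:ppn=1` maps to eight physical servers or eight "virtual processors" depends on the scheduler's node-allocation policy, so it is worth verifying with `cat $PBS_NODEFILE` inside a test job.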
On Fri, 10 Apr 2009 05:30:55 -0400
"Peter St. John" wrote:
> One of the things I like about VIM is that I can install it everywhere. I
> use it on VMS, as well as unices and MSWin. That vivivi is the Editor of the
> Beast just adds flavor :-) vim.org has MSWin self-extracting executable.
> Peter
On Tue, 17 Feb 2009 05:56:18 -0600
Geoff Jacobs wrote:
> Nifty Tom Mitchell wrote:
> > On Mon, Feb 16, 2009 at 05:06:17PM +0530, Indrajit Deb wrote:
> >>Hello,
> >>I want to set up Beowulf on my eight-node CPU cluster to run simulations. Debian
> >>4.0 is installed on each node. I am not famili
On Thu, 22 Jan 2009 22:40:25 -0800
Greg Lindahl wrote:
> On Fri, Jan 23, 2009 at 11:03:46AM +0500, amjad ali wrote:
>
> > (1) Which debugger would be easy and effective to use for above?
>
> print *,
>
OpenMPI spawns several processes, so your options are pretty much print or
pausing your progra
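The print-or-pause advice above can be sketched concretely; this assumes gdb and X forwarding are available, and the application name is hypothetical:

```shell
# 1) One xterm+gdb per rank; workable for a handful of ranks:
mpiexec -n 4 xterm -e gdb ./my_app

# 2) The "pause" approach: have each rank print its hostname and PID,
#    then spin on a volatile flag. Attach from another shell:
gdb -p <pid>
#    ...then set the flag from within gdb and continue execution.
```

Option 2 is the one that scales past a few ranks, since you only attach to the rank that actually misbehaves.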
I'm trying to learn cluster/parallel programming properly. I've got some
information on MPI, although I'm not sure if they're the best books. I was
wondering if you have some book recommendations regarding the more specialized
things, especially the CPU vs GPU parallelization issue (or as far as I
under