On 05/04/11 05:26, Vincent Diepeveen wrote:
> GPUs completely annihilate CPUs everywhere.

Great! Where can I get one with 1TB of on-card RAM to
keep our de novo reassembly people happy?

--
Christopher Samuel - Senior Systems Administrator

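The memory ceiling Christopher is pointing at is easy to check for a given card. Here is a minimal sketch, not from the thread, with a hypothetical 1 TB working set standing in for his de novo assembly case; it simply asks the CUDA runtime how much on-card memory is actually available:

/* Sketch: query on-card memory via the CUDA runtime and compare it
 * against a hypothetical terabyte-scale working set.
 * Build with nvcc (links against the CUDA runtime). */
#include <stdio.h>
#include <cuda_runtime_api.h>

int main(void)
{
    size_t free_b = 0, total_b = 0;
    unsigned long long working_set = 1ULL << 40;   /* ~1 TB, echoing the example above */

    if (cudaMemGetInfo(&free_b, &total_b) != cudaSuccess) {
        fprintf(stderr, "no usable CUDA device\n");
        return 1;
    }
    printf("on-card RAM: %.1f GiB total, %.1f GiB free\n",
           total_b / 1073741824.0, free_b / 1073741824.0);

    if (working_set > free_b)
        printf("working set does not fit on the card; it must be streamed or stay on the host\n");
    return 0;
}

A 2011-era card tops out at a few gigabytes of on-card RAM, so a terabyte-scale assembly graph has to be decomposed or streamed across the PCIe bus, which is the point of the question.
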
On Apr 5, 2011, at 7:22 AM, Greg Lindahl wrote:

On Mon, Apr 04, 2011 at 09:54:37AM -0700, Massimiliano Fatica wrote:
> If you are old enough to remember the time when the first distributed
> computers appeared on the scene, this is déjà vu.

Not to mention the prior appearance of array processors. Oil+Gas
bought a lot of those, too. Some imp [...]

On Apr 4, 2011, at 11:54 PM, Mark Hahn wrote:

>> well, for your application, which is quite narrow.
>
> Which is about any relevant domain where massive computation takes place.

you are given to hyperbole. the massive domains I'm thinking of
are cosmology and explicit quantum condensed-matter calculations.
the experts in those fields I talk [...]

On Apr 4, 2011, at 10:20 PM, Mark Hahn wrote:

> GPUs completely annihilate CPUs everywhere.

this is complete nonsense. GPUs do very nicely on a quite narrow
set of problems. for a somewhat larger set of problems, they do OK,
but pretty "meh", really, considering. for many problems, GPUs
are irrelevant, whether that's because the problem [...]

You can forget about getting much info other than marketing data:
the companies and organisations that have already been computing on
GPUs for years are really good at keeping their mouths shut.
But if you realize that even with 16 fast AMD cores (which for this
specific prime number code are a LOT [...]

On Apr 4, 2011, at 6:54 PM, Massimiliano Fatica wrote:

If you are old enough to remember the time when the first distributed
computers appeared on the scene, this is déjà vu. Developers used to
programming on shared memory (mostly with directives) were complaining
about the new programming models (PVM, MPL, MPI).
Even today, if you have a serial code that [...]

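The shift Massimiliano describes is easy to see side by side. A rough sketch, not from the thread, of the same reduction written in both styles: the shared-memory version is a single directive, while the message-passing version has to split the iterations and combine the partial sums explicitly (the file name and build line are only illustrative):

/* Directive (shared-memory) style versus message-passing style for one sum.
 * Build, e.g.: mpicc -fopenmp models.c -o models && mpirun -np 4 ./models */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

#define N 1000000

/* Shared-memory style: one pragma, the runtime does the splitting. */
static double sum_directive(void)
{
    double s = 0.0;
    int i;
    #pragma omp parallel for reduction(+:s)
    for (i = 0; i < N; i++)
        s += 1.0 / (1.0 + i);
    return s;
}

/* Message-passing style: each rank owns a slice, partial sums are reduced by hand. */
static double sum_message_passing(int rank, int nranks)
{
    double local = 0.0, global = 0.0;
    int i;
    for (i = rank; i < N; i += nranks)
        local += 1.0 / (1.0 + i);
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    return global;                 /* meaningful on rank 0 only */
}

int main(int argc, char **argv)
{
    int rank, nranks;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    double mp = sum_message_passing(rank, nranks);
    if (rank == 0)
        printf("directive: %f   message-passing: %f\n", sum_directive(), mp);

    MPI_Finalize();
    return 0;
}

Whether the GPU transition plays out the same way, with directives eventually catching up to hand-written kernels, is essentially what this thread is arguing about.
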
On Monday, April 04, 2011, Herbert Fruchtl wrote:

They hear great success stories (which in reality are often prototype
implementations that do one carefully chosen benchmark well), then look at the
API, look at their existing code, and postpone the start of their project until
they have six months' spare time for it. And we know when that is.

On Mon, Mar 21, 2011 at 08:51:06AM -0400, Douglas Eadline wrote:
> I was recently given a copy of "GPU Computing Gems"
> to review. It is basically research-quality NVIDIA success
> stories, some of which are quite impressive.

This one?
http://www.amazon.com/GPU-Computing-Gems-Emerald-Applicat

On Mon, Mar 21, 2011, Douglas Eadline wrote:

I was recently given a copy of "GPU Computing Gems"
to review. It is basically research-quality NVIDIA success
stories, some of which are quite impressive.

I got to thinking about how others are faring (or not)
with GP-GPU technology. I put up a simple poll on
ClusterMonkey to help get a general idea
(you can find it on the front page, top right).
If you have a moment, please provide your experience [...]