My colleague Bill Van Etten appears to have fixed the Open Directory issues
with recent versions of Grid Engine (SGE). Some minor patches are required,
as is compiling the binaries yourself from source (until the changes get
merged into the codebase).
This document describes the patch
Just a silly question: has anybody got a similar benchmark for Samba or
Windows file sharing?
Regards,
Li, Bo
----- Original Message -----
From: "Joe Landman" <[EMAIL PROTECTED]>
To: "Beowulf Mailing List"
Sent: Thursday, February 28, 2008 9:53 PM
Subject: [Beowulf] What are people seei
Hello Joe,
On Thursday, 28 February 2008, you wrote:
JL> I have a few simple setups and am trying to figure out whether I have a
JL> problem or whether what I am seeing is normal.
JL> Two 10 GbE cards, connected with a CX4 cable. The server is one of our
JL> JackRabbits, with 750+ MB/s direct IO and
Hello Joe,
On Thursday, 28 February 2008, you wrote:
>> The best I saw for NFS over 10 GbE was about 350-400 MB/s write and about
>> 450 MB/s read.
>> Single server to 8 simultaneously accessing clients (aggregated performance).
The clients had a 1 gig uplink...
JL> Hi Jan:
JL> Ok. Thanks.
STREAM Benchmark implementation in CUDA
Array size (single precision)=800
using 128 threads per block, 62500 blocks
Function     Rate (MB/s)   Avg time   Min time   Max time
Copy:         16706.3212     0.0039     0.0038     0.0044
Scale:             1.2770     0.0046     0.0038
How do your Rate numbers correlate to the max bandwidth of 32 GB/s
(http://en.wikipedia.org/wiki/GeForce_8_Series)?
Good point. I had assumed the quoted numbers were merely in-cache,
but it does claim to be running on array size 2e6 (8e6 bytes),
which seems a bit large for in-cache. (though ver
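For anyone who wants to see what such a benchmark looks like, here is a minimal
sketch of a STREAM-style Copy kernel in CUDA. This is my own illustration, not
the code that produced the numbers above: the kernel names, the
one-element-per-thread mapping, and the event-based timing are assumptions;
only the launch shape (128 threads per block, 62500 blocks) is taken from the
quoted output.

// Minimal sketch of a STREAM-style Copy/Scale benchmark in CUDA.
// Illustrative only: kernel names, one-element-per-thread mapping, and timing
// are my assumptions; the launch shape matches the quoted output above.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void stream_copy(const float *a, float *b, size_t n) {
    size_t i = (size_t)blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) b[i] = a[i];                     // one element per thread
}

__global__ void stream_scale(const float *a, float *b, float s, size_t n) {
    size_t i = (size_t)blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) b[i] = s * a[i];
}

int main() {
    const int threads = 128, blocks = 62500;    // as in the output above
    const size_t n = (size_t)threads * blocks;  // one element per thread (assumption)
    float *a, *b;
    cudaMalloc(&a, n * sizeof(float));
    cudaMalloc(&b, n * sizeof(float));
    cudaMemset(a, 0, n * sizeof(float));

    cudaEvent_t t0, t1;
    cudaEventCreate(&t0);
    cudaEventCreate(&t1);

    cudaEventRecord(t0);
    stream_copy<<<blocks, threads>>>(a, b, n);
    cudaEventRecord(t1);
    cudaEventSynchronize(t1);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, t0, t1);
    // STREAM convention: Copy counts bytes read plus bytes written.
    double mbps = 2.0 * n * sizeof(float) / (ms * 1e-3) / 1e6;
    std::printf("Copy: %.4f MB/s (%.6f s)\n", mbps, ms * 1e-3);
    // A real run would repeat each kernel several times and report avg/min/max,
    // and do the same for Scale (above), Add, and Triad.

    cudaFree(a); cudaFree(b);
    cudaEventDestroy(t0); cudaEventDestroy(t1);
    return 0;
}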
Jan Heichler wrote:
Hello Joe,
On Thursday, 28 February 2008, you wrote:
[...]
The best I saw for NFS over 10 GbE was about 350-400 MB/s write and about
450 MB/s read.
Single server to 8 simultaneously accessing clients (aggregated performance).
Hi Jan:
Ok. Thanks. This is quite helpful.
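One back-of-the-envelope note on those numbers (my own arithmetic, not from the
thread): a 1 Gb client uplink tops out at roughly 110-120 MB/s on the wire, so

  8 clients x ~110 MB/s  ~=  880-960 MB/s aggregate ceiling

meaning the 350-450 MB/s aggregate figures are bounded by the server and the
NFS path rather than by the client links.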
Mattijs Janssens wrote:
How do your Rate numbers correlate to the max bandwidth of 32 GB/s
(http://en.wikipedia.org/wiki/GeForce_8_Series)?
Or do these threads all operate on the same data?
My first guess was some kind of caching; after all, 2M floats is only 8 MB. But
I couldn't reproduce it
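A quick cross-check on the GPU numbers (again my own arithmetic): STREAM's Copy
rate conventionally counts bytes read plus bytes written, i.e. 2 x N x
sizeof(float) per pass, and the quoted Rate and Min time imply

  16706 MB/s x 0.0038 s  ~=  63 MB moved per pass

which corresponds to an 8e6-element array (2 x 8e6 x 4 bytes = 64 MB),
consistent with 128 threads/block x 62500 blocks, rather than 2e6 elements, so
the array-size printout may have been misread. Either way the working set is
far too large for on-chip caching, and 16.7 GB/s is roughly half of the 32 GB/s
theoretical figure.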
Jim,
Just re:
"If one wanted to design revolutionary distributed/parallel computing
algorithms, one could probably work with floppy disks and sneakernet. If it
works there, it will certainly work on any faster mechanism.
See.. true computer science doesn't need a 1000 processor cluster."
I agree
How do your Rate numbers correlate to the max bandwidth of 32 GB/s
(http://en.wikipedia.org/wiki/GeForce_8_Series)?
Or do these threads all operate on the same data?
Mattijs
On Thursday 28 February 2008 08:38, John Hearns wrote:
> On Wed, 2008-02-27 at 23:30 -0800, Bill Broadley wrote:
> > I don
Does anyone know of an open-source job scheduler for Apple Leopard 10.5.2 Server
that will work with Open Directory? SGE seems to have intermittent problems, and
Condor and Torque support only 10.4 (Tiger). Thanks.
Prakashan Korambath
On Feb 28, 2008, at 4:33 PM, Mark Hahn wrote:
The problem with many (cores|threads) is that memory bandwidth
wall. A fixed size (B) pipe to memory, with N requesters on
that pipe ...
What wall? Bandwidth is easy, it just costs money, and not much
at that. Want 50GB/sec[1] buy a $170 v
Quoting Mark Hahn <[EMAIL PROTECTED]>, on Thu 28 Feb 2008 07:33:07 AM PST:
The problem with many (cores|threads) is that memory bandwidth
wall. A fixed size (B) pipe to memory, with N requesters on
that pipe ...
What wall? Bandwidth is easy, it just costs money, and not much
at th
The problem with many (cores|threads) is that memory bandwidth wall. A
fixed size (B) pipe to memory, with N requesters on that pipe ...
What wall? Bandwidth is easy, it just costs money, and not much at that.
Want 50GB/sec[1] buy a $170 video card. Want 100GB/sec... buy a
Heh... if it
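The "fixed pipe of size B, N requesters" point is easy to see on any multi-core
box with a toy sweep like the one below (my own sketch, not from the thread; the
thread count, array size, and read-only loop are all illustrative). Aggregate
MB/s flattens out near the socket's limit B while per-thread MB/s falls toward
B/N:

// Toy illustration of N threads sharing one memory pipe (compile with -O2 -pthread).
#include <chrono>
#include <cstdio>
#include <cstdlib>
#include <thread>
#include <vector>

static volatile float sink;                      // keeps the sums from being optimized away

static void sweep(float *a, size_t n, double *mbps) {
    for (size_t i = 0; i < n; ++i) a[i] = 1.0f;  // touch every page before timing
    auto t0 = std::chrono::steady_clock::now();
    float s = 0.0f;
    for (int rep = 0; rep < 10; ++rep)           // 10 read-only passes over the array
        for (size_t i = 0; i < n; ++i)
            s += a[i];
    auto t1 = std::chrono::steady_clock::now();
    sink = s;
    double sec = std::chrono::duration<double>(t1 - t0).count();
    *mbps = 10.0 * n * sizeof(float) / sec / 1e6;
}

int main(int argc, char **argv) {
    const int nthreads = argc > 1 ? std::atoi(argv[1]) : 1;
    const size_t n = 64UL << 20;                 // 64M floats = 256 MB per thread
    std::vector<double> mbps(nthreads, 0.0);
    std::vector<std::thread> ts;
    for (int t = 0; t < nthreads; ++t)
        ts.emplace_back(sweep, (float *)std::malloc(n * sizeof(float)), n, &mbps[t]);
    double total = 0.0;
    for (int t = 0; t < nthreads; ++t) { ts[t].join(); total += mbps[t]; }
    std::printf("%d threads: %.0f MB/s aggregate, %.0f MB/s per thread\n",
                nthreads, total, total / nthreads);
    return 0;
}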
Quoting Joe Landman <[EMAIL PROTECTED]>, on Thu 28 Feb
2008 05:20:01 AM PST:
Bill Broadley wrote:
The problem with many (cores|threads) is that memory bandwidth
wall. A fixed size (B) pipe to memory, with N requesters on that
pipe ...
What wall? Bandwidth is easy, it just costs money
Hi folks:
I have a few simple setups and am trying to figure out whether I have a
problem or whether what I am seeing is normal.
Two 10 GbE cards, connected with a CX4 cable. The server is one of our
JackRabbits, with 750+ MB/s direct IO and 500-650 MB/s buffered IO
(read-write), for IO about 10x s
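For anyone wanting to reproduce the direct-vs-buffered distinction on the server
itself, a minimal Linux sketch along these lines is enough (my own illustration,
not Joe's actual test method; the file name, 1 MiB block size, and 4 GiB total
are arbitrary):

/* Read a file either buffered or with O_DIRECT and report MB/s.
   Usage: ./rdtest <file> [direct]                                          */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

int main(int argc, char **argv) {
    if (argc < 2) { fprintf(stderr, "usage: %s <file> [direct]\n", argv[0]); return 1; }
    const size_t block = 1 << 20;                  /* 1 MiB per read          */
    const size_t total = (size_t)4 << 30;          /* stop after 4 GiB        */
    int flags = O_RDONLY;
    if (argc > 2 && strcmp(argv[2], "direct") == 0)
        flags |= O_DIRECT;                         /* bypass the page cache   */
    int fd = open(argv[1], flags);
    if (fd < 0) { perror("open"); return 1; }

    void *buf;
    if (posix_memalign(&buf, 4096, block)) return 1;  /* O_DIRECT needs alignment */

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    size_t done = 0;
    while (done < total) {
        ssize_t n = read(fd, buf, block);
        if (n <= 0) break;                         /* EOF or error            */
        done += (size_t)n;
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
    printf("%zu bytes in %.2f s = %.1f MB/s\n", done, sec, done / sec / 1e6);
    close(fd);
    free(buf);
    return 0;
}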
Bill Broadley wrote:
The problem with many (cores|threads) is that memory bandwidth wall.
A fixed size (B) pipe to memory, with N requesters on that pipe ...
What wall? Bandwidth is easy, it just costs money, and not much at
that. Want 50GB/sec[1] buy a $170 video card. Want 100GB/sec... bu
On Wed, 2008-02-27 at 23:30 -0800, Bill Broadley wrote:
>
> I don't see any particular reason why memory bandwidth can't go through a full
> doubling in the near future if there was a market for it; last I checked,
> nvidia was doing pretty well ;-)
>
> [1] Sorry to use marketing bandwidth, I've