On Wed, 28 Nov 2007, amjad ali wrote:
> high per-CPU memory performance. Each CPU (core in dual core
> systems) needs to have its own memory bandwidth of roughly 2 or more
> gigabytes.
If this is indeed the case for your problems, then you might find that
quad-core systems don't cut it - a contact
On Tue, Nov 27, 2007 at 02:22:03AM -0800, Bill Broadley wrote:
> Hrm, why? Do context switches not scale with core speed? Or number
> of cores? Can't interrupts be spread across CPUs?
No, no, no, and kinda. Caches and main memory access cause problems.
There's a reason why high speed networ
amjad ali wrote:
Which implementations of MPI (commercial or free) make automatic
and efficient use of shared memory for message passing within a node? (That is,
which MPI libraries automatically communicate over shared memory instead of the
interconnect on the same node.)
All of them. Pret
Hello,
Because today clusters with multicore nodes are quite common and the
cores within a node share memory.
Which implementations of MPI (commercial or free) make automatic
and efficient use of shared memory for message passing within a node? (That is,
which MPI libraries automatica
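As the reply above says, essentially all of them do. A simple way to convince
yourself on a given install is to time a ping-pong between two ranks placed on
the same node and then between two ranks on different nodes; shared-memory
latency is typically well under the GigE latency. A minimal sketch (standard
MPI calls only; the message size and iteration count are arbitrary choices):

/* Minimal ping-pong latency sketch. Run two ranks on one node, then
 * two ranks on different nodes, and compare the round-trip times. */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    const int iters = 10000;
    char buf[1024];                 /* 1 KB message, arbitrary size */
    int rank;
    MPI_Status st;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    memset(buf, 0, sizeof(buf));

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, sizeof(buf), MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, sizeof(buf), MPI_CHAR, 1, 0, MPI_COMM_WORLD, &st);
        } else if (rank == 1) {
            MPI_Recv(buf, sizeof(buf), MPI_CHAR, 0, 0, MPI_COMM_WORLD, &st);
            MPI_Send(buf, sizeof(buf), MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("avg round trip: %g us\n", (t1 - t0) / iters * 1e6);

    MPI_Finalize();
    return 0;
}

With Open MPI, for example, ompi_info will also list the available BTL
components, shared memory among them.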
John Leidel wrote:
> We're running PBSPro.
>
> Your note on running a job and monitoring the PBS database is exactly what
> we're currently working on. I wasn't sure if there was an easier
> [undocumented] way of doing so as in Torque.
>
> Thanks for the help
I haven't used PBS Pro since 20
We're running PBSPro.
Your note on running a job and monitoring the PBS database is exactly what
we're currently working on. I wasn't sure if there was an easier
[undocumented] way of doing so as in Torque.
Thanks for the help
cheers
john
On Tue, 2007-11-27 at 19:12 -0500, Glen Beane wrote:
On Nov 27, 2007, at 1:52 PM, David Mathog wrote:
Michael Will wrote:
We have found that linpack is a far better memory tester than
Memtest86+.
So now we have a report of a second method that finds more memory
problems than memtest86+. Can somebody please shed some light on why
these tw
Michael Will wrote:
> We have found that linpack is a far better memory tester than
> Memtest86+. Memtest does not find all the bad RAM that linpack triggers,
> visible through the mcelog and through IPMI BMC logs.
We have seen the same results. Linpack, the EDAC modules, and watching the MC
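For anyone who wants to watch those counters from a script rather than grepping
mcelog output, the EDAC drivers export per-memory-controller error counts
through sysfs. A rough sketch that just prints them (assumes an EDAC driver for
your chipset is loaded and the first controller shows up as mc0):

/* Print EDAC correctable/uncorrectable error counters from sysfs.
 * Assumes /sys/devices/system/edac/mc/mc0 exists. */
#include <stdio.h>

static long read_counter(const char *path)
{
    long v = -1;
    FILE *f = fopen(path, "r");
    if (f) {
        if (fscanf(f, "%ld", &v) != 1)
            v = -1;
        fclose(f);
    }
    return v;
}

int main(void)
{
    printf("correctable errors:   %ld\n",
           read_counter("/sys/devices/system/edac/mc/mc0/ce_count"));
    printf("uncorrectable errors: %ld\n",
           read_counter("/sys/devices/system/edac/mc/mc0/ue_count"));
    return 0;
}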
--- Original Message ---
> For those reading the list that run PBS, we have an interesting
> situation... After a fairly substantial crash, we've had to
> rebuild our
> PBS configuration. We've also been asked to reset the PBS
> job id
> counter to a position near where it left off [for rea
> -Original Message-
> From: Mark Hahn [mailto:[EMAIL PROTECTED]]
> Sent: Monday, November 26, 2007 8:45 PM
> To: Ekechi Nwokah
> Cc: Beowulf Mailing List
> Subject: RE: [Beowulf] Software RAID?
>
> >> Of course there are a zillion things you didn't mention. How many
> >> drives did y
Mark Hahn wrote:
One thing I still don't get, though: if memtester is catching memory errors
incidentally, are we talking about http://pyropus.ca/software/memtester/ ?
Hello, Mark.
Yes, Charles Cazabon's "memtester" program, which locks memory and tests
it from a process running in user space.
See below.
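For reference, the core idea behind that kind of user-space tester is
straightforward: allocate a big buffer, mlock() it so it cannot be paged out,
then repeatedly write and verify patterns over it. A stripped-down sketch of
the approach (not Cazabon's code; the default size and patterns here are
arbitrary, and locking a large buffer may need a raised memlock limit or root):

/* Stripped-down user-space memory pattern test in the spirit of
 * memtester: allocate a buffer, mlock() it, write patterns, verify. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

int main(int argc, char **argv)
{
    size_t mb = (argc > 1) ? strtoul(argv[1], NULL, 10) : 256;
    size_t n  = mb * 1024 * 1024 / sizeof(unsigned long);
    volatile unsigned long *buf = malloc(n * sizeof(unsigned long));
    if (!buf) { perror("malloc"); return 1; }

    if (mlock((void *)buf, n * sizeof(unsigned long)) != 0)
        perror("mlock (continuing with unlocked memory)");

    /* ~0UL/3 is 0x5555... and its complement 0xaaaa... on any word size */
    const unsigned long patterns[] = { 0UL, ~0UL, ~0UL / 3, ~(~0UL / 3) };
    unsigned long errors = 0;

    for (size_t p = 0; p < sizeof(patterns) / sizeof(patterns[0]); p++) {
        for (size_t i = 0; i < n; i++)
            buf[i] = patterns[p];
        for (size_t i = 0; i < n; i++)
            if (buf[i] != patterns[p])
                errors++;
    }

    printf("%lu mismatches over %zu MB\n", errors, mb);
    free((void *)buf);
    return errors ? 1 : 0;
}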
> -Original Message-
> From: Joe Landman [mailto:[EMAIL PROTECTED]]
> Sent: Monday, November 26, 2007 6:56 PM
> To: Ekechi Nwokah
> Cc: Bill Broadley; Beowulf Mailing List
> Subject: Re: [Beowulf] Software RAID?
>
> Ekechi Nwokah wrote:
> > Reposting with (hopefully) more read
For those reading the list that run PBS, we have an interesting
situation... After a fairly substantial crash, we've had to rebuild our
PBS configuration. We've also been asked to reset the PBS job id
counter to a position near where it left off [for reasons of
accounting]. Has anyone ever don
One thing I still don't get, though: if memtester is catching memory errors
incidentally, are we talking about http://pyropus.ca/software/memtester/ ?
thanks, mark hahn.
David Mathog wrote:
Tony Travis wrote:
Memtest86+ is fine for 'burn-in' tests, but it does not do a realistic
memory stress test under the conditions that normal applications run.
Wow, deja vu. I just remembered we had almost exactly this same
discussion 2 years ago; in fact, I apparently sen
Joe Landman wrote:
> Memtest and fellow travelers access memory in a very regular manner,
> which is unlike the way most programs access memory.
Hmm, I see what you mean (although in my code I can often go through
memory in exactly the same "regular" manner). If the memory tester
doesn't specif
On Tue, Nov 27, 2007 at 11:52:40AM -0800, David Mathog wrote:
> Can somebody please shed some light on why
> these two programs find defects in memory that memtest86+ doesn't?
memtest86 is single-threaded? That's a big one. It's not exactly
trivial to add this to the source, either, as you have t
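To illustrate why the threading matters: a multi-threaded tester runs the same
write/verify loop in one thread per core over disjoint slices of a buffer, so
several cores hit the memory system at once, which is much closer to what a
real parallel job does to the hardware. A rough pthreads sketch of that idea
(thread count, slice size, and pass count are arbitrary choices; build with
-lpthread):

/* Rough multi-threaded write/verify memory stress: one thread per
 * core hammers its own slice of a shared buffer. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define NTHREADS 4
#define WORDS_PER_THREAD (64UL * 1024 * 1024 / sizeof(unsigned long)) /* 64 MB */

static volatile unsigned long *buf;
static unsigned long thread_errors[NTHREADS];

static void *stress(void *arg)
{
    long id = (long)arg;
    volatile unsigned long *slice = buf + id * WORDS_PER_THREAD;
    const unsigned long patterns[] = { 0UL, ~0UL, ~0UL / 3, ~(~0UL / 3) };

    for (int pass = 0; pass < 8; pass++) {
        unsigned long p = patterns[pass % 4];
        for (size_t i = 0; i < WORDS_PER_THREAD; i++)
            slice[i] = p;
        for (size_t i = 0; i < WORDS_PER_THREAD; i++)
            if (slice[i] != p)
                thread_errors[id]++;
    }
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];
    unsigned long total = 0;

    buf = malloc(NTHREADS * WORDS_PER_THREAD * sizeof(unsigned long));
    if (!buf) { perror("malloc"); return 1; }

    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&tid[i], NULL, stress, (void *)i);
    for (long i = 0; i < NTHREADS; i++) {
        pthread_join(tid[i], NULL);
        total += thread_errors[i];
    }

    printf("%lu mismatches\n", total);
    free((void *)buf);
    return total ? 1 : 0;
}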
David Mathog wrote:
Michael Will wrote:
We have found that linpack is a far better memory tester than
Memtest86+.
So now we have a report of a second method that finds more memory
problems than memtest86+. Can somebody please shed some light on why
Hi David:
We have been using so
On Nov 27, 2007 8:18 AM, Bill Broadley <[EMAIL PROTECTED]> wrote:
> >- high per-CPU memory performance. Each CPU (core in dual core
> >systems) needs to have its own memory bandwidth of roughly 2 or more
> >gigabytes.
>
> Er, presumably that's 2 or more GB/sec.
>
> > For example, standard
G'day Gary and all
Have a squizz at GADGET2:
http://www.mpa-garching.mpg.de/gadget/
I used to run it for Local Group galaxy dynamics sims. Of course most of
the 'fun' was setting up the initial conditions :)
Cheers
Steve
Michael Will wrote:
> We have found that linpack is a far better memory tester than
> Memtest86+.
So now we have a report of a second method that finds more memory
problems than memtest86+. Can somebody please shed some light on why
these two programs find defects in memory that memtest86
We have found that linpack is a far better memory tester than
Memtest86+. Memtest does not find all the bad RAM that linpack triggers,
visible through the mcelog and through IPMI BMC logs. The nice thing about
the BMC log entries is that they actually tell you which DIMM in which
CPU-bank was
Tony Travis wrote:
> Memtest86+ is fine for 'burn-in' tests, but it does not do a realistic
> memory stress test under the conditions that normal applications run.
Wow, deja vu. I just remembered we had almost exactly this same
discussion 2 years ago; in fact, I apparently sent you my hacked up
- high per-CPU memory performance. Each CPU (core in dual core
systems) needs to have its own memory bandwidth of roughly 2 or more
gigabytes. For example, standard dual-processor "PCs" will not
provide better performance when the second processor is used, that is,
you will not see speed-up
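A quick way to sanity-check that per-core bandwidth figure on a given box is a
STREAM-style triad loop: run one copy, then one copy per core (pinned with
taskset, say), and see whether the aggregate rate still scales. A rough
single-threaded sketch (array size is an arbitrary pick, just big enough to
defeat the caches):

/* Rough STREAM-triad-style bandwidth check: time a[i] = b[i] + 3*c[i]
 * over arrays much larger than cache and report MB/s. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>

#define N (8 * 1024 * 1024)   /* 8M doubles = 64 MB per array */

int main(void)
{
    double *a = malloc(N * sizeof(double));
    double *b = malloc(N * sizeof(double));
    double *c = malloc(N * sizeof(double));
    if (!a || !b || !c) { perror("malloc"); return 1; }

    for (long i = 0; i < N; i++) { a[i] = 0.0; b[i] = 1.0; c[i] = 2.0; }

    struct timeval t0, t1;
    gettimeofday(&t0, NULL);
    for (long i = 0; i < N; i++)
        a[i] = b[i] + 3.0 * c[i];          /* triad: 2 loads + 1 store */
    gettimeofday(&t1, NULL);

    double secs  = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    double bytes = 3.0 * N * sizeof(double);
    printf("triad: %.2f MB/s (a[0]=%g)\n", bytes / secs / 1e6, a[0]);

    free(a); free(b); free(c);
    return 0;
}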
Forwarded on behalf of Harold P Boushell
Tony.
--
Dr. A.J.Travis, | mailto:[EMAIL PROTECTED]
Rowett Research Institute, |http://www.rri.sari.ac.uk/~ajt
Greenburn Road, Bucksburn, | phone:+44 (0)1224 712751
Aberdeen AB21 9SB, Scotland, UK.|
Hi Bill
Bill Broadley wrote:
Long reply, with some actual numbers I've collected; if you read nothing
else, please read the last paragraph.
Read it all. Thanks for the reply.
Joe Landman wrote:
Ekechi Nwokah wrote:
Hmmm... Anyone with a large-disk-count SW RAID want to run a few
bonnie++-li
amjad ali wrote:
Hello,
I planned to buy 9 PCs, each having one Core2Duo E6600 (networked with
GigE), to make a cluster for running PETSc-based applications.
I was advised that because the prices of quad-core Xeons are going to
drop next month, I should buy 9 PCs each having one quad-core
Xeon
On Mon, Nov 26, 2007 at 10:06:03AM -0600, Robert Latham wrote:
>
> The word 'distributed' in the subject is telling... I like to make a
> distinction between 'distributed', 'cluster', and 'parallel' file
> systems.
>
> Distributed: uncoordinated access among processes. Possibly over the
> wid
amjad ali wrote:
Hello,
I planned to buy 9 PCs, each having one Core2Duo E6600 (networked with GigE),
to make a cluster for running PETSc-based applications.
Ideally you would plan on buying $x of cluster instead of limiting your
choices to a particular number of PCs. There are 1-, 2-, and 4-socket machin
Hello,
I planned to buy 9 PCs, each having one Core2Duo E6600 (networked with GigE),
to make a cluster for running PETSc-based applications.
I was advised that because the prices of quad-core Xeons are going to drop
next month, I should buy 9 PCs each having one quad-core Xeon (networked
with GigE) t
Long reply, with some actual numbers I've collected; if you read nothing
else, please read the last paragraph.
Joe Landman wrote:
Ekechi Nwokah wrote:
Hmmm... Anyone with a large-disk-count SW RAID want to run a few
bonnie++-like loads on it and look at the interrupt/csw rates? Last I
Bonnie++
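For anyone who wants to eyeball those rates while a bonnie++ run is going, the
kernel's aggregate interrupt and context-switch counters live in /proc/stat
(the "intr" and "ctxt" lines); vmstat 1 reports the same numbers in its in/cs
columns, but a tiny sampler is easy to adapt. A rough sketch:

/* Sample the aggregate interrupt and context-switch counters from
 * /proc/stat once a second and print the per-second rates. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static void read_counters(unsigned long long *intr, unsigned long long *ctxt)
{
    char line[4096];
    FILE *f = fopen("/proc/stat", "r");
    if (!f) return;
    while (fgets(line, sizeof(line), f)) {
        if (strncmp(line, "intr ", 5) == 0)
            sscanf(line + 5, "%llu", intr);
        else if (strncmp(line, "ctxt ", 5) == 0)
            sscanf(line + 5, "%llu", ctxt);
    }
    fclose(f);
}

int main(void)
{
    unsigned long long i0 = 0, c0 = 0, i1 = 0, c1 = 0;

    read_counters(&i0, &c0);
    for (;;) {
        sleep(1);
        read_counters(&i1, &c1);
        printf("interrupts/s: %llu  ctxsw/s: %llu\n", i1 - i0, c1 - c0);
        i0 = i1;
        c0 = c1;
    }
    return 0;
}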