On Wed, 23 Jul 2008, Bob Drzyzgula wrote:
Although it wasn't my first machine [1], I did work with an
admittedly-old-at-the-time PDP-8 for a while in the early
1980s. It was used to run a Perkin-Elmer microdensitometer
(think quarter-million-dollar film scanner). IIRC it had
no non-volatile mem
Perry E. Metzger wrote:
"David Mathog" <[EMAIL PROTECTED]> writes:
A vendor who shall remain nameless graced us with a hot swappable drive
caddy in which one of the three mounting screws used to fasten the drive
to the caddy had been treated with blue LocTite. This wasn't obvious
from external
On Wed, Jul 23, 2008 at 09:06:03PM -0400, Perry E. Metzger wrote:
>
> "Robert G. Brown" <[EMAIL PROTECTED]> writes:
> > Note that Bob and I started out on systems with far less than 100 MB
> > of DISK and perhaps a MB of system memory on a fat SERVER in the
> > latter 80's. And the P(o)DP(eople)
On Wed, 23 Jul 2008, Kilian CAVALOTTI wrote:
On Wednesday 23 July 2008 01:37:16 pm Robert G. Brown wrote:
But show me a "programmer" who cannot work without their mouse
and a GUI-based text editor, who has to scroll slowly up and down or
constantly move hands from the keys to the mouse and back
Too much information.
Robert G. Brown wrote:
On Tue, 22 Jul 2008, Peter St. John wrote:
Fair enough, I'll settle for Gary Oldman. We'll let RGB have Anthony
Hopkins.
No, no, no. John Malkovich.
The resemblance is actually fairly striking. Bald, pudgy, whiny
sardonic voice, sexy as all he
If the hot-swappable drives are sold by the nameless vendor
pre-installed in the caddy, it is possible that the LocTite's primary
purpose was for tamper evidence, as in "if you pulled that screw you
must have been messing with the drives and we don't have to honor the
warranty no more".
Perry E.
"Robert G. Brown" <[EMAIL PROTECTED]> writes:
> Note that Bob and I started out on systems with far less than 100 MB
> of DISK and perhaps a MB of system memory on a fat SERVER in the
> latter 80's. And the P(o)DP(eople) made do with even less in the
> early 80's.
My first machine was a PDP-8. 4
"David Mathog" <[EMAIL PROTECTED]> writes:
> A vendor who shall remain nameless graced us with a hot swappable drive
> caddy in which one of the three mounting screws used to fasten the drive
> to the caddy had been treated with blue LocTite. This wasn't obvious
> from external inspection, but th
Schoenefeld, Keith wrote:
My cluster has 8 slots (cores)/node in the form of two quad-core
processors. Only recently we've started running jobs on it that require
12 slots. We've noticed significant speed problems running multiple 12
slot jobs, and quickly discovered that the node that was runni
On Wednesday 23 July 2008 01:37:16 pm Robert G. Brown wrote:
> But show me a "programmer" who cannot work without their mouse
> and a GUI-based text editor, who has to scroll slowly up and down or
> constantly move hands from the keys to the mouse and back to select
> even elementary functions, and
On Mon, 21 Jul 2008, Huw Lynes wrote:
The advantage of smart PDUs is that they can switch off anything whereas
IPMI and other lights-out systems usually only exist on computers. All
things being equal I'd rather have both.
It's also entertaining watching upper management do a doubletake when y
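In practice the "both" option above comes down to a couple of ipmitool invocations against each node's BMC. A minimal sketch, assuming ipmitool is installed and the BMCs are reachable over LAN; the hostname and credentials are placeholders, not anything from this thread:

```python
import subprocess

def power_cmd(host, action, user="admin", passwd="secret"):
    # Build an ipmitool chassis-power command line. host, user and
    # passwd are placeholders for your BMC address and credentials.
    assert action in ("status", "on", "off", "cycle", "reset")
    return ["ipmitool", "-I", "lanplus", "-H", host,
            "-U", user, "-P", passwd, "chassis", "power", action]

def power(host, action):
    # Actually run it; requires ipmitool and a reachable, working BMC.
    return subprocess.run(power_cmd(host, action), check=True)
```

When the BMC itself gets into the "strange state" Joe describes, this is exactly where a switched PDU earns its keep, since it cuts power upstream of the wedged management controller.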
A vendor who shall remain nameless graced us with a hot swappable drive
caddy in which one of the three mounting screws used to fasten the drive
to the caddy had been treated with blue LocTite. This wasn't obvious
from external inspection, but the telltale blue glop was on the threads
when the scr
On Sat, Jul 19, 2008 at 01:40:59PM -0700, [EMAIL PROTECTED] wrote:
> Thanks for your suggestions. Let me be more specific.
> I would like to have nodes automatically wake up when
> needed and go to sleep when idle for some time. My
> ganglia logs tell me that there is considerable idle
> time on ou
Hi,
On 22.07.2008 at 23:54, Schoenefeld, Keith wrote:
My cluster has 8 slots (cores)/node in the form of two quad-core
processors. Only recently we've started running jobs on it that require
12 slots. We've noticed significant speed problems running multiple 12
slot jobs, and quickly disc
On Tue, 22 Jul 2008, Peter St. John wrote:
Fair enough, I'll settle for Gary Oldman. We'll let RGB have Anthony
Hopkins.
No, no, no. John Malkovich.
The resemblance is actually fairly striking. Bald, pudgy, whiny
sardonic voice, sexy as all hell. Might even fool my wife...;-)
rgb
On Tue, 22 Jul 2008, Greg Lindahl wrote:
On Tue, Jul 22, 2008 at 10:54:47AM -0400, Bob Drzyzgula wrote:
It is not even certain that the default, base install of a Linux
system will include Emacs
This just indicates a conspiracy of vi users. Or, more likely,
vi users complained that emacs was
My cluster has 8 slots (cores)/node in the form of two quad-core
processors. Only recently we've started running jobs on it that require
12 slots. We've noticed significant speed problems running multiple 12
slot jobs, and quickly discovered that the node that was running 4 slots
on one job and 4
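The contention Keith describes is what a naive fill-up allocation rule produces when 12-slot jobs land on 8-core nodes: the middle node ends up running four slots of each job. A toy first-fit sketch (assuming 8 cores per node; this models the packing only, not the behavior of any particular batch system):

```python
def greedy_fill(job_sizes, cores_per_node=8):
    """Assign each job's slots to nodes first-fit, the way a simple
    fill-up allocation rule does. Returns one dict per node mapping
    job index -> slots placed on that node."""
    nodes = []
    for job, size in enumerate(job_sizes):
        remaining = size
        # First, top up partially filled nodes.
        for node in nodes:
            free = cores_per_node - sum(node.values())
            if free and remaining:
                take = min(free, remaining)
                node[job] = node.get(job, 0) + take
                remaining -= take
        # Then open fresh nodes for whatever is left.
        while remaining:
            take = min(cores_per_node, remaining)
            nodes.append({job: take})
            remaining -= take
    return nodes
```

Two 12-slot jobs give `[{0: 8}, {0: 4, 1: 4}, {1: 8}]`: the second node is shared, so both jobs run at the pace of their four slots competing for that node's memory bandwidth and interconnect.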
On Sun, 2008-07-20 at 21:20 -0400, Joe Landman wrote:
> Greg Lindahl wrote:
> That said, we like IPMI in general, and even better when it works :(
> Sometimes it does go south, in a hurry (gets into a strange state). In
> which case, removing power is the only option.
Agreed. Currently my most
Dear all,
I have a problem with a self-written program on my small cluster. The cluster
nodes are PIII 500/800 MHz machines, the /home is distributed via NFS from a
PIII 1 GHz machine. All nodes are running on Debian Etch. The program in
question (polymc_s) is in the users /home directory and is
A graduate student at Purdue did research into this topic. He
presented his work at the 2007 Linux Cluster Institute conference and
the professor he works with still uses the same technique to
dynamically add or remove nodes from his cluster.
His paper can be found at:
http://www.linuxcluster
[snip]
Generally speaking, if you have a large cluster, and you have enough
work for it, it is going to be running flat out 24x7. If it isn't,
you've bought more hardware than you need.
There are many cases where a cluster is not used continuously for
calculations, but rather to reduce tu
Thanks for your suggestions. Let me be more specific.
I would like to have nodes automatically wake up when
needed and go to sleep when idle for some time. My
ganglia logs tell me that there is considerable idle
time on our cluster. The issue is that I would like to
have the cluster adapt *automati
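For the wake-up half of this, Wake-on-LAN is the usual mechanism: a "magic packet" of six 0xFF bytes followed by the target MAC address repeated sixteen times, sent as a UDP broadcast. A minimal sketch, assuming the node NICs have WoL enabled (e.g. `ethtool -s eth0 wol g`); the MAC in the usage note is a placeholder:

```python
import socket

def magic_packet(mac: str) -> bytes:
    # Magic packet format: 6 bytes of 0xFF, then the MAC repeated 16 times.
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    # Broadcast the packet on the cluster's private network; the sleeping
    # node's NIC recognizes its own MAC and powers the machine on.
    pkt = magic_packet(mac)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(pkt, (broadcast, port))
```

A monitor watching the queue (or the ganglia load data mentioned above) could call `wake("00:11:22:33:44:55")` when jobs back up, and shut idle nodes down again after some grace period.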