Hi David,
On Tuesday 26 January 2010 19:46:40 David Mathog wrote:
> The default log level on these machines is 3. If the kernel panics with
> it set to that, will the messages that result be "contentless", like the
> ones above?
Try
dmesg -n 8
to raise the logging level and try
echo '<7>Davi
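The two steps above can be sketched as a short shell session (root required; the test message text is my own placeholder, and writing to /dev/kmsg assumes a kernel new enough to support message injection):

```shell
# Raise the console loglevel to 8 so even KERN_DEBUG messages reach the console.
dmesg -n 8

# Inject a test message into the kernel ring buffer at debug priority (<7>).
# Unlike logger(1), this goes through printk, so it also exercises netconsole.
echo '<7>netconsole test message' > /dev/kmsg
```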
Chris, not only is the VM portable (yes, you would take a performance hit), but from my research into Xen it seems the paid version of Citrix XenServer has some other nice features, such as migration to a backup machine in case of hardware failure.
When you all say performance hit, how much of a hit are we talking about?
One of the virtualization trends I do see in HPC/clustering is in the area of packaging up entire scientific applications into their own custom VMs, which contain all the necessary libraries, software dependencies, etc.
There is a performance hit now and the implementation is clunky, but I can see
Is it just me, or does HPC clustering and virtualization fall on
opposite ends of the spectrum?
Depends on your definitions. Virtualization certainly conflicts with those aspects of HPC which require bare-metal performance. Even if you can reduce the overhead of virtualization, the question is
Do you guys think that virtualized clustering is the future?
___
Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit
http://www.beowulf.org/mailman/listinfo/beowulf
Ashley Pittman wrote:
On 25 Jan 2010, at 15:28, Jonathan Aquilina wrote:
Has anyone tried clustering using Xen-based VMs? What is everyone's take on that? It's something that popped into my head while in my lectures today.
I've been using Amazon ec2 for clustering for months now, from
> David Mathog wrote:
> Will a logger
> message for "kern" test it, or is there some other way to force a
> printk? I'm afraid the logger method might look like it is working, but
> just go through the usual syslog channels instead of netconsole.
Too optimistic. With netconsole (supposedly) running
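David's worry above can be made concrete: logger(1) only submits to syslogd and never calls printk, so it cannot prove netconsole works. A minimal comparison, assuming root and a kernel that accepts writes to /dev/kmsg (the message strings are my own placeholders):

```shell
# Goes through the normal syslog channel only -- netconsole stays silent,
# even though the message may appear in /var/log/messages:
logger -p kern.warning "syslog-only test, will not cross netconsole"

# Goes through printk, so it is a genuine netconsole test
# (<4> is KERN_WARNING priority):
echo '<4>printk test, should appear on the netconsole receiver' > /dev/kmsg
```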
>
> Is it just me, or does HPC clustering and virtualization fall on
> opposite ends of the spectrum?
>
Gavin, not necessarily. You could have a cluster of HPC compute nodes running a minimal base OS.
Then install specific virtual machines with different OS/software stacks each time you run a job
Henning Fehrmann wrote:
> We loaded the netconsole module. This works at least for the
> 2.6.27 kernel. AFAIK for older kernels one has to compile it into the
> kernel.
Ah good idea, and this distro already has that, but it isn't enabled by
default. I see how to configure it and turn it on. Will
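For reference, loading the module at runtime might look like the following; every port, address, interface name, and MAC below is an illustrative placeholder, not a value from this thread:

```shell
# Syntax: netconsole=<src-port>@<src-ip>/<dev>,<dst-port>@<dst-ip>/<dst-mac>
modprobe netconsole \
    netconsole=6665@10.0.0.2/eth0,6666@10.0.0.1/00:11:22:33:44:55

# On the receiving host, listen for the UDP stream, e.g.
# (netcat flag syntax varies by flavour):
#   nc -u -l 6666
```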
John, I thank you for the encouragement; it's better than what I get from certain people I deal with in Ubuntu channels. You mention diskless booting using TFTP and PXE. The problem, though, arises when you have a certain number of nodes accessing the same disk simultaneously, and disk I/O shoots through
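For anyone wanting to try the TFTP/PXE setup John mentions, a minimal dnsmasq fragment is one common way to serve both DHCP and TFTP from a head node; the addresses, boot file, and paths below are illustrative assumptions only:

```
# /etc/dnsmasq.conf fragment -- illustrative PXE/TFTP boot setup
# Address pool handed to the booting compute nodes:
dhcp-range=10.0.0.100,10.0.0.200,12h
# Boot file name sent in the DHCP offer:
dhcp-boot=pxelinux.0
# Serve it with dnsmasq's built-in TFTP server:
enable-tftp
tftp-root=/srv/tftpboot
```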
For starters, to save on resources, why not cut out the GUI and go command line to free up some more of the shared resources? And secondly, wouldn't offloading data storage to a SAN or NFS storage server mitigate the disk I/O issues?
I honestly don't know much about Xen, as I just got my hands dirty with it
On Tue, 2010-01-26 at 13:24 +0000, Tim Cutts wrote:
> 1) Applications with I/O patterns of large numbers of small disk
> operations are particularly painful (such as our ganglia server with
> all its thousands of tiny updates to RRD files). We've mitigated this
> by configuring Linux on th
On 26 Jan 2010, at 1:24 pm, Tim Cutts wrote:
2) Raw device maps (where you pass a LUN straight through to a
single virtual machine, rather than carving the disk out of a
datastore) reduce contention and increase performance somewhat, at
the cost of using up device minor numbers on ESX qui
On 26 Jan 2010, at 12:00 pm, Jonathan Aquilina wrote:
does anyone have any benchmarks for I/O in a virtualized cluster?
I don't have formal benchmarks, but I can tell you what I see on my
VMware virtual machines in general:
Network I/O is reasonably fast - there's some additional latency,
Hi David,
On Mon, Jan 25, 2010 at 10:46:31AM -0800, David Mathog wrote:
> Is it possible to have the Machine Check Exception (MCE) information
> saved to disk automatically on the next warm boot?
>
> Long form:
>
> A K7 node crashed yesterday and left an MCE on the screen which I copied
> down
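Since the MCA bank registers often survive a warm reset and Linux re-logs them at boot, one common approach of that era is to run the mcelog daemon (or a boot-time cron job) to drain /dev/mcelog to disk. For a record that was already copied down by hand, mcelog's decode mode can be fed the text directly; note that mcelog mainly targets x86-64/P4 CPUs, so K7 decoding support is limited, and the record below is a made-up placeholder, not a real fault:

```shell
# Decode hand-copied, kernel-log-style MCE lines from stdin.
mcelog --ascii <<'EOF'
CPU 0: Machine Check Exception: 0000000000000004
Bank 4: b200000000070f0f
EOF
```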