Mel Gorman wrote:
> > I did manage to get a couple which were slightly worse, but nothing like as
> > bad as before. Here are the results:
> >
> > # grep -F '[k]' report | head -8
> > 45.60% qemu-kvm [kernel.kallsyms] [k] clear_page_c
> > 11.26% qemu-kvm [kernel.kallsyms]
Hi Mel,
Thank you for this series. I have applied it to a clean 3.6-rc5 and tested, and
it works well for me - the lock contention is (still) gone and
isolate_freepages_block is much reduced.
Here is a typical test with these patches:
# grep -F '[k]' report | head -8
65.20% qemu-kvm [ker
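For anyone reproducing these measurements, the filtering step used throughout this thread can be sketched against a synthetic report file (the file contents below are invented for illustration; a real report comes from something like `perf record -a -g` followed by `perf report --stdio`):

```shell
# Build a synthetic report file in the same format as the perf output
# above (percentages and symbols are illustrative only).
cat > report <<'EOF'
    45.60%  qemu-kvm  [kernel.kallsyms]  [k] clear_page_c
    11.26%  qemu-kvm  [kernel.kallsyms]  [k] isolate_freepages_block
     3.10%  qemu-kvm  qemu-kvm           [.] main_loop_wait
EOF

# Keep only kernel-mode symbols ("[k]"), as in the grep used in this
# thread; userspace entries are marked "[.]" and are dropped.
grep -F '[k]' report | head -8
```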
Hi. We run a cloud compute provider using qemu-kvm and macvtap and are keen
to find a paid contractor to fix a bug with unusably slow inbound networking
over macvtap.
We originally reported the bug in this thread (report copied below):
http://marc.info/?t=13451109862
We have also reproduce
Richard Davies wrote:
> Thank you for your latest patches. I attach my latest perf report for a slow
> boot with all of these applied.
For the avoidance of any doubt, here is the combined diff versus 3.6.0-rc5
which I tested:
diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
index 38b42e7..0
Hi Mel - thanks for replying to my underhand bcc!
Mel Gorman wrote:
> I see that this is an old-ish bug but I did not read the full history.
> Is it now booting faster than 3.5.0 was? I'm asking because I'm
> interested to see if commit c67fe375 helped your particular case.
Yes, I think 3.6.0-rc5
[ adding linux-mm - previously at http://marc.info/?t=13451150943 ]
Hi Rik,
Since qemu-kvm 1.2.0 and Linux 3.6.0-rc5 came out, I thought that I would
retest with these.
The typical symptom now appears to be that the Windows VMs boot reasonably
fast, but then there is high CPU use and load fo
Hi Rik,
Are there any more tests which I can usefully do for you?
I notice that 3.6.0-rc4 is out - are there changes from rc3 which are worth
me retesting?
Cheers,
Richard.
Richard Davies wrote:
> Rik van Riel wrote:
> > Can you get a backtrace to that _raw_spin_lock_irqsave, to see
Chris Webb wrote:
> I found that on my laptop, the single change of host kernel config
>
> -CONFIG_INTEL_IDLE=y
> +# CONFIG_INTEL_IDLE is not set
>
> is sufficient to turn transfers into guests from slow to full wire speed
I am not deep enough in this code to write a patch, but I wonder if
macvtap
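As a quicker check than rebuilding the kernel, the intel_idle driver can also be disabled from the kernel command line. This is a documented kernel parameter; suggesting it here is my addition, not something tried in the thread:

```shell
# Append to the kernel command line in the bootloader config:
#
#   intel_idle.max_cstate=0
#
# Per Documentation/kernel-parameters.txt, a value of 0 disables the
# intel_idle driver and falls back to acpi_idle, which is roughly
# equivalent to building with CONFIG_INTEL_IDLE unset.
```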
Rik van Riel wrote:
> Can you get a backtrace to that _raw_spin_lock_irqsave, to see
> from where it is running into lock contention?
>
> It would be good to know whether it is isolate_freepages_block,
> yield_to, kvm_vcpu_on_spin or something else...
Hi Rik,
I got into a slow boot situation on 3
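One standard way to answer Rik's question is to record with call graphs while the slow boot is in progress and then inspect the call chains leading into the contended lock. These are ordinary perf invocations, not commands quoted from the thread, and the duration is illustrative:

```shell
# Sample all CPUs with call-graph collection during the slow boot
# (requires root; 30s is an arbitrary window).
perf record -a -g -- sleep 30

# Render the report as text; the indented call chains around
# _raw_spin_lock_irqsave show whether the contention comes from
# isolate_freepages_block, yield_to, kvm_vcpu_on_spin or elsewhere.
perf report --stdio > report.txt
grep -A 20 _raw_spin_lock_irqsave report.txt
```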
Troy Benjegerdes wrote:
> Is there a way to capture/reproduce this 'slow boot' behavior with
> a simple regression test? I'd like to know if it happens on a
> single-physical CPU socket machine, or just on dual-sockets.
Yes, definitely.
These two emails earlier in the thread give a fairly complet
Rik van Riel wrote:
> Richard Davies wrote:
> > Avi Kivity wrote:
> > > Richard Davies wrote:
> > > > I can trigger the slow boots without KSM and they have the same
> > > > profile, with _raw_spin_lock_irqsave and isolate_freepages_block at
> > >
Avi Kivity wrote:
> Richard Davies wrote:
> > Avi Kivity wrote:
> > > Richard Davies wrote:
> > > > I can trigger the slow boots without KSM and they have the same
> > > > profile, with _raw_spin_lock_irqsave and isolate_freepages_block at
> > >
Avi Kivity wrote:
> Richard Davies wrote:
> > Below are two 'perf top' snapshots during a slow boot, which appear to
> > me to support your idea of a spin-lock problem.
...
> >PerfTop: 62249 irqs/sec kernel:96.9% exact: 0.0% [4000
Avi Kivity wrote:
> Richard Davies wrote:
> > I can trigger the slow boots without KSM and they have the same profile,
> > with _raw_spin_lock_irqsave and isolate_freepages_block at the top.
> >
> > I reduced to 3x 20GB 8-core VMs on a 128GB host (rather than 3x 40GB 8
Rik van Riel wrote:
> Richard Davies wrote:
> > I've now triggered a very slow boot at 3x 36GB 8-core VMs on a 128GB
> > host (i.e. 108GB on a 128GB host).
> >
> > It has the same profile with _raw_spin_lock_irqsave and
> > isolate_freepages_block at the top.
Avi Kivity wrote:
> Richard Davies wrote:
> > We're running host kernel 3.5.1 and qemu-kvm 1.1.1.
> >
> > I hadn't thought about it, but I agree this is related to cpu overcommit. The
> > slow boots are intermittent (and infrequent) with cpu overcommit wherea
Avi Kivity wrote:
> Richard Davies wrote:
> > Hi Avi,
> >
> > Thanks to you and several others for offering help. We will work with Avi at
> > first, but are grateful for all the other offers of help. We have a number
> > of other qemu-related projects which we
Brian Jackson wrote:
> Richard Davies wrote:
> > The host in question has 128GB RAM and dual AMD Opteron 6128 (16 cores
> > total). It is running kernel 3.5.1 and qemu-kvm 1.1.1.
> >
> > In this morning's test, we have 3 guests, all booting Windows with 40GB RAM
Avi Kivity wrote:
> Richard Davies wrote:
> > The host in question has 128GB RAM and dual AMD Opteron 6128 (16 cores
> > total). It is running kernel 3.5.1 and qemu-kvm 1.1.1.
> >
> > In this morning's test, we have 3 guests, all booting Windows with 40GB RAM
Hi Robert,
Robert Vineyard wrote:
> Not sure if you've tried this, but I noticed massive performance
> gains (easily booting 2-3 times as fast) by converting from RAW disk
> images to direct-mapped raw partitions and making sure that IOMMU
> support was enabled in the BIOS and in the kernel at boo
Hi Avi,
Thanks to you and several others for offering help. We will work with Avi at
first, but are grateful for all the other offers of help. We have a number
of other qemu-related projects which we'd be interested in getting done, and
will get in touch with these names (and anyone else who comes
Just to be clear - we are sure that it is the virtualization host networking
problems which are triggering this VM OS nic driver crash, and we haven't
described them here in anything like enough detail to reproduce the situation -
they are complex.
However, in the same situation, a VM with rtl8139 or
Hi,
We run a cloud hosting provider using qemu-kvm 1.1, and are keen to find a
contractor to track down and fix problems we have with large memory Windows
guests booting very slowly - they can take several hours.
We previously reported these problems in July (copied below) and they are
still pres
Stefan Hajnoczi wrote:
> > Chris and Richard: Please test this to confirm that it fixes the hang you
> > reported.
...
> Ping?
We never explicitly said, but yes v2 does fix the hang for us, like v1 did.
We are certainly +1 for this going into qemu 1.1.
Thanks,
Richard.
Stefan Hajnoczi wrote:
> > Hi. We were producing the IDE assert()s and deadlocks with linux kernels.
> > Although I believe the same symptoms exist on windows, I haven't actually
> > tested it myself. Typically they would show up in the 16-bit bootloader
> > code, even before the 32-bit OS has star
Hi,
I've been following the evolution of this patch with great interest for use
in our qemu-kvm based IaaS public cloud.
I am not a qemu developer, but have watched this patch go through many
rounds of review and we are very much hoping that it makes it into QEMU 1.0.
In a multi-customer multi-V