On Fri, 19 May 2017 at 16:10, Jay Zhou wrote:
>
> Hi Paolo and Wanpeng,
>
> On 2017/5/17 16:38, Wanpeng Li wrote:
> > 2017-05-17 15:43 GMT+08:00 Paolo Bonzini :
> >>> Recently, I have tested the performance before migration and after
> >>> migration failure
> >>> using spec cpu2006 https://www.sp
Hi
QEMU hit an assertion when we use SCSI-3 reservation.
This happens when the SCSI sense is RECOVERED ERROR,
which leads to scsi_req_complete being called twice.
static bool scsi_handle_rw_error(SCSIDiskReq *r, int error, bool acct_failed)
{
bool is_read = (r->req.cmd.mode == SCSI_XFER_FROM_DEV);
SCSIDiskState
On 12/10/2018 10:05, Wangguang wrote:
> Hi
>
> QEMU hit an assertion when we use SCSI-3 reservation.
>
> This happens when the SCSI sense is RECOVERED ERROR,
>
> which leads to scsi_req_complete being called twice.
>
>
>
>
>
> static bool scsi_handle_rw_error(SCSIDiskReq *r, int error, bool
> acct_failed)
>
>
On 07.11.2017 at 08:18, wang.guan...@zte.com.cn wrote:
> hello
>
>
> if we create a qcow2 file on a block dev.
>
>
> we can't get the right disk size from qemu-img info.
>
>
>
>
>
>
> [root@host-120-79 qemu]# ./qemu-img create -f qcow2 /dev/zs/lvol0 1G
>
> Formatting '/dev/zs/lvol
Hello,
if we create a qcow2 file on a block dev,
we can't get the right disk size from qemu-img info.
[root@host-120-79 qemu]# ./qemu-img create -f qcow2 /dev/zs/lvol0 1G
Formatting '/dev/zs/lvol0', fmt=qcow2 size=1073741824 cluster_size=65536
lazy_refcounts=off refcount_bits=16
[root@hos
Hi Xiao,
On 2017/5/19 16:32, Xiao Guangrong wrote:
I do not know why I was removed from the list.
I was CCed to you...
Your comments are very valuable to us; thanks for your quick response.
On 05/19/2017 04:09 PM, Jay Zhou wrote:
Hi Paolo and Wanpeng,
On 2017/5/17 16:38, Wanpeng Li wr
I do not know why I was removed from the list.
On 05/19/2017 04:09 PM, Jay Zhou wrote:
Hi Paolo and Wanpeng,
On 2017/5/17 16:38, Wanpeng Li wrote:
2017-05-17 15:43 GMT+08:00 Paolo Bonzini :
Recently, I have tested the performance before migration and after migration
failure
using spec cpu2
Hi Paolo and Wanpeng,
On 2017/5/17 16:38, Wanpeng Li wrote:
2017-05-17 15:43 GMT+08:00 Paolo Bonzini :
Recently, I have tested the performance before migration and after migration
failure
using spec cpu2006 https://www.spec.org/cpu2006/, which is a standard
performance
evaluation tool.
These
2017-05-17 15:43 GMT+08:00 Paolo Bonzini :
>> Recently, I have tested the performance before migration and after migration
>> failure
>> using spec cpu2006 https://www.spec.org/cpu2006/, which is a standard
>> performance
>> evaluation tool.
>>
>> These are the steps:
>> ==
>> (1) the versio
> Recently, I have tested the performance before migration and after migration
> failure
> using spec cpu2006 https://www.spec.org/cpu2006/, which is a standard
> performance
> evaluation tool.
>
> These are the steps:
> ==
> (1) the version of kmod is 4.4.11 (slightly modified) and the
On 2017/5/17 13:47, Wanpeng Li wrote:
Hi Zhoujian,
2017-05-17 10:20 GMT+08:00 Zhoujian (jay) :
Hi Wanpeng,
On 11/05/2017 14:07, Zhoujian (jay) wrote:
-* Scan sptes if dirty logging has been stopped, dropping those
-* which can be collapsed into a single large-page spte. Lat
Hi Zhoujian,
2017-05-17 10:20 GMT+08:00 Zhoujian (jay) :
> Hi Wanpeng,
>
>> > On 11/05/2017 14:07, Zhoujian (jay) wrote:
>> >> -* Scan sptes if dirty logging has been stopped, dropping those
>> >> -* which can be collapsed into a single large-page spte. Later
>> >> -* page
Hi Wanpeng,
> > On 11/05/2017 14:07, Zhoujian (jay) wrote:
> >> -* Scan sptes if dirty logging has been stopped, dropping those
> >> -* which can be collapsed into a single large-page spte. Later
> >> -* page faults will create the large-page sptes.
> >> +* Reset e
On 2017/5/12 16:09, Xiao Guangrong wrote:
On 05/11/2017 08:24 PM, Paolo Bonzini wrote:
On 11/05/2017 14:07, Zhoujian (jay) wrote:
-* Scan sptes if dirty logging has been stopped, dropping those
-* which can be collapsed into a single large-page spte. Later
-* page fau
On 05/11/2017 08:24 PM, Paolo Bonzini wrote:
On 11/05/2017 14:07, Zhoujian (jay) wrote:
-* Scan sptes if dirty logging has been stopped, dropping those
-* which can be collapsed into a single large-page spte. Later
-* page faults will create the large-page sptes.
+
2017-05-11 22:18 GMT+08:00 Zhoujian (jay) :
> Hi Wanpeng,
>
>> 2017-05-11 21:43 GMT+08:00 Wanpeng Li :
>> > 2017-05-11 20:24 GMT+08:00 Paolo Bonzini :
>> >>
>> >>
>> >> On 11/05/2017 14:07, Zhoujian (jay) wrote:
>> >>> -* Scan sptes if dirty logging has been stopped, dropping
>> those
>> >>
Hi Wanpeng,
> 2017-05-11 21:43 GMT+08:00 Wanpeng Li :
> > 2017-05-11 20:24 GMT+08:00 Paolo Bonzini :
> >>
> >>
> >> On 11/05/2017 14:07, Zhoujian (jay) wrote:
> >>> -* Scan sptes if dirty logging has been stopped, dropping
> those
> >>> -* which can be collapsed into a single large
Hi all,
After applying the patch below, the time the
memory_global_dirty_log_stop() function takes is down to milliseconds
for a 4T memory guest, but I'm not sure whether this patch will trigger
other problems. Does this patch make sense?
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
inde
2017-05-11 21:43 GMT+08:00 Wanpeng Li :
> 2017-05-11 20:24 GMT+08:00 Paolo Bonzini :
>>
>>
>> On 11/05/2017 14:07, Zhoujian (jay) wrote:
>>> -* Scan sptes if dirty logging has been stopped, dropping those
>>> -* which can be collapsed into a single large-page spte. Later
>>> -
2017-05-11 20:24 GMT+08:00 Paolo Bonzini :
>
>
> On 11/05/2017 14:07, Zhoujian (jay) wrote:
>> -* Scan sptes if dirty logging has been stopped, dropping those
>> -* which can be collapsed into a single large-page spte. Later
>> -* page faults will create the large-page spte
On 11/05/2017 14:07, Zhoujian (jay) wrote:
> -* Scan sptes if dirty logging has been stopped, dropping those
> -* which can be collapsed into a single large-page spte. Later
> -* page faults will create the large-page sptes.
> +* Reset each vcpu's mmu, then page f
Hi Paolo, Dave,
On 2017/4/26 23:46, Paolo Bonzini wrote:
>
>
> On 24/04/2017 18:42, Dr. David Alan Gilbert wrote:
>> I suppose there's a few questions;
>> a) Do we actually need the BQL - and if so why
Enabling/disabling dirty log tracking are operations on memory regions.
That's why they need to
On 24/04/2017 18:42, Dr. David Alan Gilbert wrote:
> I suppose there's a few questions;
> a) Do we actually need the BQL - and if so why
> b) What actually takes 13s? It's probably worth figuring
> out where it goes; the whole bitmap is only 1GB, isn't it,
> even on a 4TB machine, and even th
* Yang Hongyang (yanghongy...@huawei.com) wrote:
>
>
> On 2017/4/24 20:06, Juan Quintela wrote:
> > Yang Hongyang wrote:
> >> Hi all,
> >>
> >> We found dirty log switch costs more than 13 seconds while migrating
> >> a 4T memory guest, and dirty log switch is currently protected by QEMU
> >> BQ
On 2017/4/24 20:06, Juan Quintela wrote:
> Yang Hongyang wrote:
>> Hi all,
>>
>> We found dirty log switch costs more than 13 seconds while migrating
>> a 4T memory guest, and dirty log switch is currently protected by QEMU
>> BQL. This causes guest freeze for a long time when switching dirty lo
Yang Hongyang wrote:
> Hi all,
>
> We found dirty log switch costs more than 13 seconds while migrating
> a 4T memory guest, and dirty log switch is currently protected by QEMU
> BQL. This causes guest freeze for a long time when switching dirty log on,
> and the migration downtime is unacceptable
Hi all,
We found dirty log switch costs more than 13 seconds while migrating
a 4T memory guest, and dirty log switch is currently protected by the QEMU
BQL. This causes the guest to freeze for a long time when switching dirty log on,
and the migration downtime is unacceptable.
Is there any chance to optimiz
Hi ,
Apologies in advance if it is not appropriate to ask this question here. I am
new to KVM and want to do some research on qemu-kvm
live migration.
In my environment, I installed CentOS 7.0 with an RPM install of qemu-kvm
1.5.3, but what I want to do research on is the source code
On Wed, Oct 30, 2013 at 2:43 AM, Antony Pavlov wrote:
> On Tue, 29 Oct 2013 21:09:30 +0800
> Nancy wrote:
>
> Some years ago I made a set of scripts for building a MIPS linux kernel and
> rootfs from scratch and running it under qemu.
> See https://github.com/frantony/clab for details,
> espec
On Tue, 29 Oct 2013 21:09:30 +0800
Nancy wrote:
Some years ago I made a set of scripts for building a MIPS linux kernel and
rootfs from scratch and running it under qemu.
See https://github.com/frantony/clab for details,
especially see the start-qemu.sh script and files in the qemu-configs/malta-
Hi,
1. When will QEMU for MIPS support the watchpoint debug facility? Any hint or guide
on how to implement that function?
2. qemu-system-mipsel -M malta -kernel vmlinux-2.6.26-1-4kc-malta -hda
debian_lenny_mipsel_small.qcow2 -append "root=/dev/hda1 console=ttyS0
kgdboc=ttyS0,115200 kgdbwait" -nographic -serial
On 28.10.2010 04:20, Zhiyuan Shao wrote:
> OK, if I get some time in the near future, I will try to improve the
> relevant parts (todo list: PAE/PSE(36), IDT, GDT, x86_64, possibly
> a pipe-like feature) of Qemu, which I think will be helpful for people
> debugging code on the i386 platform.
>
>
On Wed, 2010-10-27 at 20:07 +, Blue Swirl wrote:
> On Wed, Oct 27, 2010 at 1:10 AM, Zhiyuan Shao wrote:
> > On Tue, 2010-10-26 at 18:59 +, Blue Swirl wrote:
> >> On Tue, Oct 26, 2010 at 12:22 PM, Zhiyuan Shao wrote:
> >> > Hi team,
> >> >
> >> > I am a Qemu User, and using Qemu 0.13.0 to
On Wed, Oct 27, 2010 at 1:10 AM, Zhiyuan Shao wrote:
> On Tue, 2010-10-26 at 18:59 +, Blue Swirl wrote:
>> On Tue, Oct 26, 2010 at 12:22 PM, Zhiyuan Shao wrote:
>> > Hi team,
>> >
>> > I am a Qemu user, using Qemu 0.13.0 to debug the linux kernel
>> > code (Qemu+GDB).
>> >
>> > During
On Tue, 2010-10-26 at 18:59 +, Blue Swirl wrote:
> On Tue, Oct 26, 2010 at 12:22 PM, Zhiyuan Shao wrote:
> > Hi team,
> >
> > I am a Qemu user, using Qemu 0.13.0 to debug the linux kernel
> > code (Qemu+GDB).
> >
> > During the usage, I found the Qemu debugging console (i.e., entered b
On Tue, Oct 26, 2010 at 12:22 PM, Zhiyuan Shao wrote:
> Hi team,
>
> I am a Qemu user, using Qemu 0.13.0 to debug the linux kernel
> code (Qemu+GDB).
>
> During the usage, I found the Qemu debugging console (i.e., entered by
> pressing Ctrl+Alt+2 in the Qemu SDL window or by passing "-monitor s
Hi team,
I am a Qemu user, using Qemu 0.13.0 to debug the linux kernel
code (Qemu+GDB).
During this usage, I found the Qemu debugging console (i.e., entered by
pressing Ctrl+Alt+2 in the Qemu SDL window or by passing "-monitor stdio" to
Qemu on the command line) rather difficult to use. It
On Tue, Aug 10, 2010 at 12:17, chandra shekar
wrote:
> Can anyone suggest study materials for starting to learn qemu and its
> internals?
> I have already read the documentation on the qemu web page; other than that,
> any
> other materials? Thanks.
>
I am afraid the other thing left to try is ju
Can anyone suggest study materials for starting to learn qemu and its
internals?
I have already read the documentation on the qemu web page; other than that,
any
other materials? Thanks.
Start from vl.c, main().
-mj
On Wed, Aug 4, 2010 at 10:29 AM, chandra shekar
wrote:
> Hi, I am Chandra. I am interested in understanding the qemu code; can anyone
> help me with where I have to start?
> Also, I installed qemu on ubuntu 10.04; after installing, when I run qemu
> as per instruction g
Hi, I am Chandra. I am interested in understanding the qemu code; can anyone
help me with where I have to start?
Also, I installed qemu on ubuntu 10.04. After installing, when I run qemu
as per the instructions given on the qemu web page,
it says a VNC server is running on some IP and that's it. Can someone please h
Hi
I created a fully virtualized DOM-U with XEN, which is emulated by qemu-dm,
and I don't know how to monitor its disk and network in DOM-0.
I know I can press CTRL-ALT-2 to access the monitor, but it should be
done in dom-U.
Is there any solution to get the disk and network information of dom-U
in
> I take it self-modifying kernel code would have serious issues.
Seems likely :-) With hardware support, making things like this work should
be *much* easier.
> I seem to recall my attempts to run v2OS (which uses a self-modifying
> assembly code boot sequence) inside VMWare crashing badly cir
> VMware handles kernel code. You are right that x86 code can't be 100%
> virtualized
> (even at the userland level) but VMware uses a lot of nasty disgusting tricks
> in order to work around them. (For example, playing with shadow pagetables
> so that a page of modified code is run but if the cod
On Wed, Sep 14, 2005 at 10:18:24AM -0700, John R. Hogerhuis wrote:
> Why disgusting?
>
> Perhaps you meant disgusting because the Intel architecture forces a
> virtualizer to handle a bunch of corner cases like this.
>
That is exactly what I mean.
> -- John.
>
--
Infinite complexity begets
On Wed, Sep 14, 2005 at 01:46:58PM -0500, Anthony Liguori wrote:
> You can't readahead beyond a basic block. Taking a trap for each basic
> block and translating the block is what QEMU does.
>
No, QEMU translates all guest machine code into its internal representation.
I'm talking about usi
Jim C. Brown wrote:
On Tue, Sep 13, 2005 at 11:27:39PM -0500, Anthony Liguori wrote:
I reckon kqemu has this same problem... Technically, even in ring 3, if
you run natively, you violate the Popek/Goldberg requirements because of
cpuid. It's just not possible to trap it but it shouldn't ma
> > There are a couple of interesting paravirtualization techniques too.
> > There's the Xen approach (really fast, but very invasive), the L4ka
> > afterburning (theoritically close to as fast, but less invasive), and
> > then of course the extremes like UML.
>
> Not familiar with L4ka. I don't bel
Two side footnotes to your comprehensive explanation:
1) with the SKAS host kernel patch you don't have to ptrace the "guest"
processes and performance (and security) is improved quite a bit, I
understand.
2) UML is currently being ported to run in ring 0. Why? Not for running on
native hard
On Wed, 2005-09-14 at 09:37 -0400, Jim C. Brown wrote:
> VMware handles kernel code. You are right that x86 code can't be 100%
> virtualized
> (even at the userland level) but VMware uses a lot of nasty disgusting tricks
> in order to work around them. (For example, playing with shadow pagetables
On Wed, 14 Sep 2005, Jim C. Brown wrote:
Not familiar with L4ka. I don't believe that UML does virtualization; it simply
runs linux code 'as is' but intercepts calls to the kernel.
UML does not do hardware virtualization. UML is a special architecture for
the Linux kernel allowing Linux to run
On Tue, Sep 13, 2005 at 11:27:39PM -0500, Anthony Liguori wrote:
> I reckon kqemu has this same problem... Technically, even in ring 3, if
> you run natively, you violate the Popek/Goldberg requirements because of
> cpuid. It's just not possible to trap it but it shouldn't matter for
> most sof
On Tue, Sep 13, 2005 at 09:48:01PM -0500, Anthony Liguori wrote:
> Jim C. Brown wrote:
>
> The x86 cannot be "virtualized" in the Popek/Goldberg sense, so there's
> a couple of fast emulation techniques that are possible. Other than a
> hand coded dynamic translator, I reckon qemu + kqemu is ab
Well, VMware guests can recognise that they're in a VM because the
software contains a backdoor INT function, mainly used by VMware Tools
for things like Shared Folders and host-controlled mouse cursors
insides guests. I don't quite remember what the function was for
VMware's backdoor, but you can
Mark Williamson wrote:
No, I got the impression that Fabrice was talking about virtualization the
way VMware, old plex86, and vmbear (new FOSS x86 virtualizer in the
works) do it.
The x86 cannot be "virtualized" in the Popek/Goldberg sense, so there's
a couple of fast emulation techniques
> >No, I got the impression that Fabrice was talking about virtualization the
> > way VMware, old plex86, and vmbear (new FOSS x86 virtualizer in the
> > works) do it.
>
> The x86 cannot be "virtualized" in the Popek/Goldberg sense, so there's
> a couple of fast emulation techniques that are possibl
Jim C. Brown wrote:
On Tue, Sep 13, 2005 at 09:58:11AM -0500, Anthony Liguori wrote:
Jim C. Brown wrote:
Fabrice had said that he wants
kqemu to be able to do total virtualization (both kernel and userland bits);
basically all the translation code of qemu would be left unused bu
> No, I got the impression that Fabrice was talking about virtualization the
> way VMware, old plex86, and vmbear (new FOSS x86 virtualizer in the works)
> do it.
>
> So it'll work w/o needing a 64bit chip.
I hadn't seen vmbear, looks interesting... Full virtualisation on vanilla x86
would be rea
On Tue, Sep 13, 2005 at 09:58:11AM -0500, Anthony Liguori wrote:
> Jim C. Brown wrote:
>
> >Fabrice had said that he wants
> >kqemu to be able to do total virtualization (both kernel and userland
> >bits);
> >basically all the translation code of qemu would be left unused but the
> >hardwa
On 9/13/05, Adrian Smarzewski <[EMAIL PROTECTED]> wrote:
> Alexandre Leclerc wrote:
> > I'm new to qemu and my question is simple and is probably due to my
> > ignorance. If I compare qemu and vmware, there is a great deal of
> > emulation speed differences.
>
> Did you try kqemu or qvm86?
Yes, w
Jim C. Brown wrote:
- If no, is it possible that one day qemu reaches the speed of vmware?
qemu itself? Nope.
kqemu/qvm86 don't have this limitation though. Fabrice had said that he wants
kqemu to be able to do total virtualization (both kernel and userland bits);
basically all the tran
On Tue, Sep 13, 2005 at 08:36:29AM -0400, Alexandre Leclerc wrote:
> Hi all,
>
> I'm new to qemu and my question is simple and is probably due to my
> ignorance. If I compare qemu and vmware, there is a great deal of
> emulation speed differences.
>
> - Is it because of what qemu is? (i.e. it is
Alexandre Leclerc wrote:
I'm new to qemu and my question is simple and is probably due to my
ignorance. If I compare qemu and vmware, there is a great deal of
emulation speed differences.
Did you try kqemu or qvm86?
--
Regards,
Adrian Smarzewski
Hi all,
I'm new to qemu and my question is simple and is probably due to my
ignorance. If I compare qemu and vmware, there is a great deal of
emulation speed differences.
- Is it because of what qemu is? (i.e. it is a full emulator of many
platforms, etc. Meaning that vmware is probably only spec
Well, as far as I can see, you're passing the RAW DEVICE NODE as the
root partition instead of using the numbered partition convention.
Instead of passing root=/dev/hda, try something like root=/dev/hda1.
I hope that helps.
p.s.: Next time, please, take your time to read what you're doing
before compla
Hi, I have read the PDF file "Embedded Linux kernel and driver development Training lab book.pdf". At page 7, when I boot the kernel with
qemu -m 32 -kernel /lab/linux-2.6.11.11/arch/i386/boot/bzImage -append "clock=pit root=/dev/hda" -hda /lab/linux/lab1/data/linux_i386.img -boot c
I get the information