Apologies for the late reply!

Never mind, we figured it out.

Since we are cloning a VM, at the source (where the cloning is
initiated) we redirect the VM's disk driver to a new incremental
qcow2 image. We also make the previous disk image read-only, since
the cloned VM will use that read-only image as its backing file.

In a nutshell, the backing-file relationship is as follows (the entire
setup is on NFS):

---src-VM---
<<before cloning>>
src_disk_driver -> ../src/rubis_ws.img ----> /home/shashaa/setup/rubis_ws.img
(base image)

<<after cloning>>
src_disk_driver -> ../src/rubis_ws.img_clone ---->
/home/shashaa/setup/parent/rubis_ws.img

/home/shashaa/setup/parent/rubis_ws.img --> made read-only

---Cloned-VM---
../cloned/rubis_ws.img ----> /home/shashaa/setup/parent/rubis_ws.img
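For reference, an overlay chain like the one above can be built with
qemu-img. A minimal sketch, using temporary stand-in paths rather than
our exact ones (the "|| ..." fallbacks only let the sketch run on hosts
without qemu-img installed):

```shell
# Minimal sketch of the backing-file setup (illustrative paths).
WORK=$(mktemp -d)
BASE="$WORK/rubis_ws_base.img"
OVERLAY="$WORK/rubis_ws.img_clone"

# Stand-in for the original base image:
qemu-img create -f qcow2 "$BASE" 1G 2>/dev/null || truncate -s 1G "$BASE"

# New incremental image: writes land here, unchanged reads fall
# through to the backing file:
qemu-img create -f qcow2 -F qcow2 -b "$BASE" "$OVERLAY" 2>/dev/null || : > "$OVERLAY"

# Mark the shared backing file read-only so neither VM can modify it:
chmod 0444 "$BASE"
```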

So the actual issue was in redirecting the disk driver to the newly
created image "../src/rubis_ws.img_clone": we close the driver entries
for the old image and reopen them with the new one. The root cause was
not setting the correct flags when opening the disk driver for the new
image.
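The wrong-flags failure mode is easy to reproduce in miniature outside
QEMU. This is a plain-shell analogy, not QEMU's internal block-driver
API: if the new image ends up opened read-only, every guest write is
rejected, which the guest's IDE driver then reports over and over as
the DriveStatusError seen above.

```shell
# Analogy for the wrong-flags bug (illustrative path, not QEMU code).
WORK=$(mktemp -d)
IMG="$WORK/rubis_ws.img_clone"
printf 'disk contents' > "$IMG"

# Reopen the image read-only -- the wrong mode for an image the
# running VM must keep writing to:
exec 3< "$IMG"

# Any write through that descriptor fails, just as the guest's IDE
# writes failed against our wrongly-opened image:
if ! echo 'guest write' >&3 2>/dev/null; then
    RESULT="write rejected"
else
    RESULT="write succeeded"
fi
exec 3<&-
echo "$RESULT"
```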

Thanks,
-Shashaa
--Never let schooling
 interfere with Education!!


On Wed, Apr 17, 2013 at 4:37 AM, Stefan Hajnoczi <[email protected]> wrote:

> On Tue, Apr 16, 2013 at 08:02:27PM -0400, Shashaankar Reddy wrote:
> > We have taken qemu-kvm version 1.2 for developing a patch that would do
> > cloning of a VM using pre-copy based live VM migration in qemu-kvm code.
> >
> > After the migration is completed, we could see the cloned_Vm is up and
> > running, also handled network configuration (assigning new mac and Ip
> > addresses). But where as at the src_VM we see a strange error, when we
> > logged into the src_VM via vnc viewer:
> >
> >  ide: failed opcode was: unknown
> > hdd: task_in_intr: status=0x41 { DriveReady Error }
> >  hdd: task_in_intr: error=0x04 { DriveStatusError }
> >  ide: failed opcode was: unknown
> >  hdd: task_in_intr: status=0x41 { DriveReady Error }
> > hdd: task_in_intr: error=0x04 { DriveStatusError }
> >
> > --- these above messages are repeating infinitely making the CPU usage
> > of the VM to 95-99% and it would not receive any further request to
> > serve (we  have an rubis server inside the VM ). Though the VM is
> > accessible to the host network (we are able to ping the VM).
> >
> > Could any one of you help us in this issue?
>
> Looks like your migration approach breaks IDE device save/load.
>
> How can we help without seeing the code?
>
> Stefan
>
