Control: tag -1 + moreinfo

21.07.2013 07:13, Craig Sanders wrote:
> Package: seabios
> Version: 1.7.3-1

Hello Craig!  Do you remember me? :)

I'm sorry this took so long.  Apparently I wasn't subscribed to
seabios bug reports, so I never knew this bug report had been submitted.
Just by chance I looked at the seabios bug page and found it.
So I'm replying to the bug report almost half a year later...

> My zfs test VM boots without a problem if it has seven disks (1x5GB
> boot/OS zvol, 6 x 200M files) or less.
>
> It still works if I boot with 7 disks and then use 'virsh attach-disk'
> to add another virtio disk (or five, or ten). The added drives appear in
> the system and I can use them without any problem, including adding them
> to my test zpool.
>
> Rebooting the VM with more than seven disks attached causes it to lock
> up at the BIOS screen, immediately after the "Booting from Hard Disk..."
> message.
>
> CPU utilisation of the qemu process at this point is about 90% (of one
> core of a Phenom II 1090T), and it stays that way until I kill the VM.
> I left one instance of the VM running overnight to see if it would
> eventually get started (nope).

Interesting.
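
For reference, a hot-plug like the one you describe would look roughly
like this (the domain name and image path here are just examples):

 virsh attach-disk ztest /var/lib/libvirt/images/extra.img vdh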

> The only info I can find with google on block device limits suggests
> that kvm has a limit of 4 IDE devices and 20 virtio block devices, from
> an opensuse page:
>
> http://doc.opensuse.org/documentation/html/openSUSE/opensuse-kvm/cha.kvm.limits.html#sec.kvm.limits.hardware

No, 20 virtio block devices is not a limit; you can have many more.
And it used to work, too, at least in the past.

(I don't think 4 IDE devices is a hard limit either; it might be
possible to add another IDE controller and have 8 devices, and so on.)
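
A minimal sketch of that idea, using qemu's AHCI controller rather
than classic IDE (same principle; device names are as listed by
qemu's -device help, untested here):

 qemu-system-x86_64 -enable-kvm \
     -drive file=w0.raw,if=none,id=d0 \
     -device ich9-ahci,id=ahci \
     -device ide-hd,drive=d0,bus=ahci.0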

> The fact that 'virsh attach-disk' works suggests that it's not a
> kvm/qemu limitation, anyway.

Yes, it looks like something is wrong either on the seabios side or in
qemu when it transfers the information about the drives to the guest.
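
If you can reproduce it, the SeaBIOS debug log might show where it
hangs.  SeaBIOS writes its log to I/O port 0x402, which qemu can
redirect to stdio, e.g. (assuming a debug-enabled SeaBIOS build,
which is the default):

 qemu-system-x86_64 -enable-kvm -snapshot \
     -chardev stdio,id=seabios \
     -device isa-debugcon,iobase=0x402,chardev=seabios \
     $(for x in $(seq 1 8); do echo -drive file=w0.raw,if=virtio; done)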

> ps: I'm not really sure if this bug belongs to qemu or to seabios.
> seabios seems most likely.
>
> pps: This used to work in previous versions. Another zfs testing VM that
> I made early last year used to boot with nine virtio disks (vda ...
> vdi). I last booted it a few months ago. It failed to boot yesterday
> morning and I assumed it was a problem with the VM, so I created this
> new ztest VM, only to encounter the same problem when I added the extra
> drives for the test pool.

Do you remember in which previous versions it worked?

And oh, which version of qemu(-kvm) do you use?  Care to show the
qemu command line too?

The thing is: I can't reproduce this issue using a naive approach.
For example:

 qemu-system-x86_64 -enable-kvm -snapshot \
     $(for x in $(seq 1 8); do echo -drive file=w0.raw,if=virtio; done)

(where w0.raw is some random linux bootable image).

I can go up to 28 images and it boots just fine.
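
If the mix of disk sizes matters, a closer match to your setup would
be one boot image plus several small blank data disks, e.g. (file
names made up; qemu-img creates the empty images):

 for x in $(seq 1 8); do qemu-img create -f raw blank$x.img 200M; done
 qemu-system-x86_64 -enable-kvm -snapshot \
     -drive file=w0.raw,if=virtio \
     $(for x in $(seq 1 8); do echo -drive file=blank$x.img,if=virtio; done)

If that variant also locks up for you, the exact qemu command line
libvirt uses for your VM would help to narrow it down.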

Thank you!

/mjt
