Hello,

When booting a non-xen kernel, no problems can be seen. Admittedly, exactly the same action cannot be tested that way: the xendomains script can only be started when running under xen, and there also have to be some virtual instances to be suspended. If I boot with xen, shut down the virtual instances, and then reboot the computer, the hang does not occur. Only when the suspend during /etc/init.d/xendomains stop happens does the crash come after some time.
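For reference, the suspend step that xendomains performs on stop boils down to an xm save of each running domain. A minimal sketch of the equivalent manual command (the domain name and save path below are placeholders; on my setup the save directory lives on md1):

  # Roughly the per-domain suspend that "xendomains stop" performs;
  # "guest1" and the save path are placeholders:
  xm save guest1 /var/lib/xen/save/guest1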
Under the non-xen kernel I have also tested the creation of a larger amount of data using dd and cp (about 5 GiB, which is roughly twice the total memory of all virtual instances, i.e. more than what has to be written to the raid), but nothing strange happens; everything works. I even tried to write the files to the same location where xendomains writes the memory snapshots (it is on the md1 raid; the system itself is installed on md0), and everything seems to work fine without the xen kernel.

Finally I booted the xen kernel again and just tried to perform heavy operations on the raid1, and I could induce the hang within seconds, and again after a reboot. In the first case I started just a dd of 5 GB of data (bs=1M count=5000) on the first screen, then switched to a second screen and simply started aptitude there. In the second case (this time with the resync of the array from the previous crash still running) I tried to start two dd's in parallel on two screens. That was no problem for some time; then I tried to start aptitude in a third screen, which also caused nothing. I returned to screen 2 and pressed ctrl-C twice, which led to a hang of the system again.

So it seems very probable that this problem has nothing specific to do with the xendomains script or any xen utility; it is simply a matter of running under the xen kernel and performing more complex or heavy operations on the raid array.

My configuration of the array: two 2 TB SATA disks, both split in the same way into one 50 GB and three 650 GB raid partitions. On the smallest pair sits md0 with the system; md1 (650 GB) is where I perform the write operations in the tests and where xendomains writes too; md2 (650 GB) is not involved in the tests or the problems; the fourth 650 GB partitions are unused. A sketch of the layout and of the reproduction steps follows below.
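To make the layout concrete (the sda/sdb device names are used for illustration only):

  # Partition layout across the two disks:
  #   sda1 + sdb1 -> md0  (~50 GB, system)
  #   sda2 + sdb2 -> md1  (650 GB, xendomains save area and test target)
  #   sda3 + sdb3 -> md2  (650 GB, not involved)
  #   sda4 + sdb4 -> unused
  # md1 as raid1 would have been assembled along these lines:
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2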
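And the quickest reproduction recipe, as described above (the dd source and the md1 mount point are placeholders; in my tests I wrote ordinary files onto md1):

  # Case 1: one large sequential write to md1, plus aptitude
  # started in a second screen:
  dd if=/dev/zero of=/mnt/md1/testfile bs=1M count=5000

  # Case 2: two parallel writes (I used two screen windows; shown
  # backgrounded here), later interrupted with ctrl-C:
  dd if=/dev/zero of=/mnt/md1/test1 bs=1M count=5000 &
  dd if=/dev/zero of=/mnt/md1/test2 bs=1M count=5000 &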
Regards,
Archie