Package: sysvinit
Version: 2.86.ds1-38
Severity: important
-- System Information:
Debian Release: 4.0
  APT prefers testing
  APT policy: (500, 'testing'), (500, 'stable')
Architecture: amd64 (x86_64)
Shell:  /bin/sh linked to /bin/bash
Kernel: Linux 2.6.18-4-amd64
Locale: LANG=en_US.UTF-8, LC_CTYPE=en_US.UTF-8 (charmap=UTF-8)

Versions of packages sysvinit depends on:
ii  initscripts     2.86.ds1-38  Scripts for initializing and shutt
ii  libc6           2.3.6.ds1-13 GNU C Library: Shared libraries
ii  libselinux1     1.32-3       SELinux shared libraries
ii  libsepol1       1.14-2       Security Enhanced Linux policy lib
ii  sysv-rc         2.86.ds1-38  System-V-like runlevel change mech
ii  sysvinit-utils  2.86.ds1-38  System-V-like utilities

sysvinit recommends no packages.

-- no debconf information

mdadm 2.5.6-9

During shutdown the RAID drives are still busy and do not get shut down correctly. This happens intermittently (but more often than every other time) and requires a rebuild of the RAID on startup. There is a potential for data loss here. Searching the web shows that others are having the same problem, and some say to just ignore it -- but I have at times had the system start up with one drive failed.

http://groups.google.com/group/linux.debian.user/browse_thread/thread/79c89dab224a5a94/8a3aa98278a0c288?lnk=st&q=md+debian+raid+fails+unmount++shutdown&rnum=1&hl=en#8a3aa98278

According to http://svn.debian.org/wsvn/pkg-mdadm/mdadm/trunk/debian/FAQ?op=file&rev=0&sc=0 :

> (One of) my RAID arrays is busy and cannot be stopped. What gives?
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> It is perfectly normal for mdadm to report the array with the root
> filesystem to be busy on shutdown. The reason for this is that the
> root filesystem must be mounted to be able to stop the array (or
> otherwise /sbin/mdadm does not exist), but to stop the array, the
> root filesystem cannot be mounted. Catch 22. The kernel actually
> stops the array just before halting, so it's all well.
> If mdadm cannot stop other arrays on your system, check that these
> arrays aren't used anymore. Common causes for busy/locked arrays are:
>   * The array contains a mounted filesystem (check the `mount' output)
>   * The array is used as a swap backend (check /proc/swaps)
>   * The array is used by the device-mapper (check with `dmsetup')
>     * LVM
>     * dm-crypt
>     * EVMS
>   * The array is used by a process (check with `lsof')

BUT -- I don't think this information is current, as I'm seeing rebuilds on reboot on two different etch boxes running amd64. I wonder whether a slight time delay is needed to let the drives finish?
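For what it's worth, the FAQ's checklist can be run as a single script. This is only a sketch of those checks; the device name /dev/md0 is an assumption, and `dmsetup'/`lsof' may not be installed on a minimal etch system, so their absence is reported rather than treated as an error:

```shell
#!/bin/sh
# Sketch of the mdadm FAQ's "why is my array busy?" checks.
# /dev/md0 is an assumed device name -- substitute your own array.

check_md_busy() {
    dev=$1

    echo "== mounted filesystems on $dev =="
    mount | grep "$dev" || echo "(none)"

    echo "== swap backends on $dev =="
    grep "$dev" /proc/swaps 2>/dev/null || echo "(none)"

    echo "== device-mapper targets (LVM, dm-crypt, EVMS) =="
    if command -v dmsetup >/dev/null 2>&1; then
        dmsetup ls
    else
        echo "(dmsetup not available)"
    fi

    echo "== processes holding $dev open =="
    if command -v lsof >/dev/null 2>&1; then
        lsof "$dev" || echo "(none)"
    else
        echo "(lsof not available)"
    fi
}

check_md_busy /dev/md0
```

If all four sections come back empty during shutdown and the array is still reported busy, that would support the suspicion above that the init scripts stop the array before the drives have finished flushing.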