On Thu, Jun 11, 2009 at 7:48 PM, martin f krafft <madd...@debian.org> wrote:
> Have you ever modified the partition table to increase the size of
> the partition?
>
Yes, I replaced the two 40 GB disks with two 80 GB disks.

> Have you made a backup?
>
No.

> Does it work if you unmount /var and then try
>
>   mdadm --grow --size=max /dev/md6
>
> and then xfs_grow or whatever it's called for XFS?
>
I ran mdadm --grow --size=max /dev/md6 without unmounting /var, and the
array showed the right size:

# mdadm -D /dev/md6
/dev/md6:
        Version : 00.90.01
  Creation Time : Thu Jan 11 22:49:09 2007
     Raid Level : raid1
     Array Size : 67159616 (64.05 GiB 68.77 GB)
    Device Size : 67159616 (64.05 GiB 68.77 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 6
    Persistence : Superblock is persistent

    Update Time : Fri Jun 12 15:04:17 2009
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : f6debdab:76d34e2d:7e8db578:e8db3dc4
         Events : 0.18775645

    Number   Major   Minor   RaidDevice State
       0       3        8        0      active sync   /dev/hda8
       1      22        8        1      active sync   /dev/hdc8

Then I ran

# xfs_growfs /var

with /var still mounted, and now the filesystem also shows the correct
size. :-)

~# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/md2              953M  519M  435M  55% /
tmpfs                 237M     0  237M   0% /lib/init/rw
tmpfs                 237M     0  237M   0% /dev/shm
/dev/md1              221M   11M  199M   6% /boot
/dev/md4              953M   39M  914M   5% /home
/dev/md5              1.9G  1.6M  1.9G   1% /tmp
/dev/md3              4.7G  1.1G  3.7G  23% /usr
/dev/md6               65G   23G   42G  36% /var

THANKS A MILLION MARTIN :-)))))))))))

--Siju
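
P.S. For anyone searching the archives later: the rough sequence I
followed to grow a RAID1 array with XFS on it after swapping in bigger
disks was roughly the below. This is only a sketch; the device names
(/dev/hda8, /dev/hdc8, /dev/md6) and the mount point /var are from my
box, so adjust them for your own layout, and make a backup first (I
didn't, which was reckless).

    # Replace one member at a time: fail it, remove it, repartition the
    # new bigger disk, re-add it, and wait for the resync to finish.
    mdadm /dev/md6 --fail /dev/hda8
    mdadm /dev/md6 --remove /dev/hda8
    # ...swap the disk, create a larger /dev/hda8 with fdisk/parted...
    mdadm /dev/md6 --add /dev/hda8
    cat /proc/mdstat        # watch until the rebuild completes

    # ...then repeat the same steps for /dev/hdc8...

    # Once both members sit on the larger partitions, grow the array
    # to the new maximum and then grow the filesystem.
    mdadm --grow --size=max /dev/md6
    xfs_growfs /var         # XFS grows online, /var can stay mounted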