Indeed. In the end I created, with fdisk, the same partition on the second
disk of the mirror (Solaris 2, 100%, bootable) as on the first disk, then
copied the VTOC using prtvtoc and fmthard, then ran zpool replace and
zpool clear, and everything is now OK according to zpool status. It was
the fdisk step that was missing from my original procedure.
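For the archives, the working procedure looks roughly like this, using the device names from my earlier mail (c7d0 = existing disk, c8d0 = new disk) — double-check yours before running anything, since these commands are destructive:

```shell
# 1. On the new disk, create an fdisk partition matching the source disk:
#    one Solaris2 partition, 100% of the disk, marked active (bootable).
#    (fdisk is interactive; this was the step missing originally.)
fdisk /dev/rdsk/c8d0p0

# 2. Copy the slice layout (VTOC) from the existing mirror disk.
prtvtoc /dev/rdsk/c7d0s0 | fmthard -s - /dev/rdsk/c8d0s0

# 3. Tell ZFS to rebuild onto the relabeled slice and clear the old errors.
zpool replace rpool c8d0s0
zpool clear rpool

# 4. Wait for the resilver to finish, then verify.
zpool status rpool
```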
Did the resilvering also copy the boot blocks, or should I run
installgrub?
Thank you all for your help
Marc
On 1/3/23 03:40, Reginald Beardsley via openindiana-discuss wrote:
Partition the disk so you have a partition that matches the size of your
other disks in the pool. Then mount that partition.
On Monday, January 2, 2023, 08:31:38 PM CST, Marc Lobelle
<[email protected]> wrote:
Hello,
First, best wishes for 2023 to everybody!
I tried to add a second 480 GB disk (different manufacturer, slightly
larger) as a mirror to my rpool.
Below is what I did, but apparently there is a problem, and this case
(adding a mirror disk) is not discussed in
https://illumos.org/msg/ZFS-8000-4J/
Does anybody have an idea how to solve this issue? I have not yet run
installgrub on the new disk because I fear it would make things worse.
Thanks
Marc
ml@spitfire:/home/ml# zpool status rpool
pool: rpool
state: ONLINE
scan: scrub repaired 0 in 0 days 00:05:41 with 0 errors on Wed Dec 28
16:43:12 2022
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
c7d0s0 ONLINE 0 0 0
errors: No known data errors
ml@spitfire:/home/ml# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c5d0 <Samsung SSD 870 QVO
4TB=S5STNF0T406608A-S5STNF0T406608A-0001-3.64TB>
/pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0
1. c6d0 <Samsung SSD 870 QVO
4TB=S5STNF0T406604J-S5STNF0T406604J-0001-3.64TB>
/pci@0,0/pci-ide@1f,2/ide@1/cmdk@0,0
2. c7d0 <Unknown-Unknown-0001 cyl 58366 alt 2 hd 255 sec 63>
/pci@0,0/pci-ide@1f,5/ide@0/cmdk@0,0
3. c8d0 <Unknown-Unknown-0001 cyl 58367 alt 2 hd 255 sec 63>
/pci@0,0/pci-ide@1f,5/ide@1/cmdk@0,0
Specify disk (enter its number): ^C
ml@spitfire:/home/ml# prtvtoc /dev/rdsk/c7d0s0
* /dev/rdsk/c7d0s0 partition map
*
* Dimensions:
* 512 bytes/sector
* 63 sectors/track
* 255 tracks/cylinder
* 16065 sectors/cylinder
* 58368 cylinders
* 58366 accessible cylinders
* 937681920 sectors
* 937649790 accessible sectors
*
* Flags:
* 1: unmountable
* 10: read-only
*
* Unallocated space:
* First Sector Last
* Sector Count Sector
* 0 16065 16064
*
* First Sector Last
* Partition Tag Flags Sector Count Sector Mount Directory
0 2 00 16065 937633725 937649789
2 5 01 0 937649790 937649789
8 1 01 0 16065 16064
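(As a sanity check, the geometry above is self-consistent: accessible cylinders times sectors per cylinder gives the accessible sector count, and the reserved boot slice 8 plus slice 0 cover exactly that range:)

```shell
# 58366 accessible cylinders x 16065 sectors/cylinder
echo $((58366 * 16065))       # 937649790 accessible sectors

# slice 8 (16065 sectors) + slice 0 (937633725 sectors)
echo $((16065 + 937633725))   # 937649790 sectors, i.e. the whole disk
```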
ml@spitfire:/home/ml# prtvtoc /dev/rdsk/c7d0s0|fmthard -s - /dev/rdsk/c8d0s0
fmthard: Partition 2 specifies the full disk and is not equal
full size of disk. The full disk capacity is 937665855 sectors.
fmthard: New volume table of contents now in place.
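(That fmthard warning is just the size difference between the two disks, not the root cause of the failure below: c8d0 has one more cylinder than c7d0, per the format listing above (58367 vs 58366), so the copied slice 2 falls exactly one cylinder short of the new disk's full capacity:)

```shell
# full capacity of c8d0 minus the copied slice 2 size
echo $((937665855 - 937649790))   # 16065 sectors

# which is exactly one cylinder of 255 heads x 63 sectors/track
echo $((255 * 63))                # 16065 sectors/cylinder
```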
ml@spitfire:/home/ml# zpool attach -f rpool c7d0s0 c8d0s0
Make sure to wait until resilver is done before rebooting.
ml@spitfire:/home/ml# zpool status rpool
pool: rpool
state: ONLINE
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scan: resilver in progress since Mon Jan 2 11:59:05 2023
56,4G scanned at 2,56G/s, 909M issued at 41,3M/s, 56,4G total
910M resilvered, 1,57% done, 0 days 00:22:55 to go
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
c7d0s0 ONLINE 0 0 0
c8d0s0 ONLINE 0 0 0 (resilvering)
errors: No known data errors
ml@spitfire:/home/ml# zpool status rpool
pool: rpool
state: DEGRADED
status: One or more devices could not be used because the label is
missing or invalid. Sufficient replicas exist for the pool to continue
functioning in a degraded state.
action: Replace the device using 'zpool replace'.
see: http://illumos.org/msg/ZFS-8000-4J
scan: resilvered 34,6G in 0 days 00:06:23 with 0 errors on Mon Jan 2
12:05:28 2023
config:
NAME STATE READ WRITE CKSUM
rpool DEGRADED 0 0 0
mirror-0 DEGRADED 0 0 0
c7d0s0 ONLINE 0 0 0
c8d0s0 UNAVAIL 0 57,4K 0 corrupted data
errors: No known data errors
ml@spitfire:/home/ml#
_______________________________________________
openindiana-discuss mailing list
[email protected]
https://openindiana.org/mailman/listinfo/openindiana-discuss