Posted July 19, 2024

Please help. I think I am screwed, but I don't understand some of these errors. I'm trying to reimport a pool, but I get an "invalid label" error. How do I correct that? Any chance of importing this pool in a read-only state? I've been unsuccessful thus far.

status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or data.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-5E
config:

        zfs                      UNAVAIL  insufficient replicas
          raidz3-0               UNAVAIL  insufficient replicas
            sdg1                 ONLINE
            sdj                  ONLINE
            sdi                  UNAVAIL  invalid label
            sdb1                 ONLINE
            sdc                  ONLINE
            6707043850538251454  UNAVAIL  invalid label
            sdd                  ONLINE
            sdf                  UNAVAIL  invalid label
            sdn1                 ONLINE
            sdq                  ONLINE
            8010871872874012596  UNAVAIL  invalid label
            sdh                  ONLINE

root@Tower:~# zpool import -d /dev/disk/by-id
   pool: zfs
     id: 6938928784629722333
  state: UNAVAIL
status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or data.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-5E
config:

        zfs                               UNAVAIL  insufficient replicas
          raidz3-0                        UNAVAIL  insufficient replicas
            wwn-0x5000cca26a9b1d64-part1  ONLINE
            wwn-0x5000cca26a3f4e04        ONLINE
            sdi                           UNAVAIL  invalid label
            wwn-0x5000cca26a98c70c-part1  ONLINE
            wwn-0x5000cca25159d4b8        ONLINE
            6707043850538251454           UNAVAIL  invalid label
            wwn-0x5000cca2513d58b0        ONLINE
            sdf                           UNAVAIL  invalid label
            wwn-0x5000cca26a8546a8-part1  ONLINE
            wwn-0x5000cca26a3d1844        ONLINE
            8010871872874012596           UNAVAIL  invalid label
            wwn-0x5000cca26a3fa138        ONLINE
root@Tower:~#

tower-diagnostics-20240719-0853.zip
July 19, 2024  Community Expert

That doesn't look importable unless you can fix one of the unavailable devices. Post the output of:

fdisk -l /dev/sdX

for the 4 UNAVAIL devices.
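For example, all four can be captured in one pass with something like this sketch (the device letters are an assumption taken from the follow-up post; substitute the actual UNAVAIL devices on your system):

# Print the partition table of each suspect disk in one pass
# (sdi, sde, sdf, sdo are placeholders from the follow-up post)
for d in sdi sde sdf sdo; do
    fdisk -l /dev/$d
done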
July 19, 2024  Author

How would I "fix" one? I know I haven't had 4 devices fail.

root@Tower:~# fdisk -l /dev/sdi
Disk /dev/sdi: 9.1 TiB, 10000831348736 bytes, 2441609216 sectors
Disk model: HUH721010AL4200
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 126BEC7F-4B6C-47FE-B1D5-B1C8613A8F3D

Device     Start        End    Sectors Size Type
/dev/sdi1      8 2441609210 2441609203 9.1T Linux filesystem

root@Tower:~# fdisk -l /dev/sdf
Disk /dev/sdf: 9.1 TiB, 10000831348736 bytes, 2441609216 sectors
Disk model: HUH721010AL42C0
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: FB57F290-5124-44C3-94D0-E788DC47A715

Device     Start        End    Sectors Size Type
/dev/sdf1      8 2441609210 2441609203 9.1T Linux filesystem

Disk /dev/sde: 9.1 TiB, 10000831348736 bytes, 2441609216 sectors
Disk model: HUH721010AL4200
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: C60CE8CA-1314-4D95-A489-E0D2575A22D1

Device     Start        End    Sectors Size Type
/dev/sde1      8 2441609210 2441609203 9.1T Linux filesystem

Disk /dev/sdo: 9.1 TiB, 10000831348736 bytes, 2441609216 sectors
Disk model: HUH721010AL4200
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 569DFBDA-AE5A-48BF-B986-B0D773E7D1F0

Device     Start        End    Sectors Size Type
/dev/sdo1      8 2441609210 2441609203 9.1T Linux filesystem

Edited July 19, 2024 by democratic-favour2351
July 19, 2024  Community Expert

That doesn't look like a pool created with Unraid. Post the same fdisk output for one of the good disks, plus the output of:

zdb
July 19, 2024  Author

It was not; it was brought into Unraid after 6.12 and was created with the community plugin.

cannot open '/etc/zfs/zpool.cache': No such file or directory

Disk /dev/sdj: 9.1 TiB, 10000831348736 bytes, 2441609216 sectors
Disk model: HUH721010AL42C0
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 54F2203C-3B45-D64C-A263-C5C3A0D5F6A2

Device          Start        End    Sectors Size Type
/dev/sdj1        2048 2441590783 2441588736 9.1T Solaris /usr & Apple ZFS
/dev/sdj9  2441590784 2441607167      16384  64M Solaris reserved 1
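For anyone following along: the missing cache file doesn't block inspection, since zdb can read the ZFS labels straight off a device. A sketch against the good disk's ZFS partition (the path is illustrative):

# Dump the ZFS labels stored on the partition itself;
# a device reported as "invalid label" should fail to print these
zdb -l /dev/sdj1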
July 19, 2024  Community Expert

That partition layout is completely different from the 4 bad disks, and it is consistent with a zpool created on the command line, unlike the others. Check whether the partitions on all of the good disks match sdj, e.g. with the sketch below.
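One quick way to compare them side by side (a sketch; the device list is a guess based on the zpool output above, so adjust it to match your system):

# Print just the partition rows for each pool member so the
# layouts can be eyeballed against sdj at a glance
for d in sdb sdc sdd sde sdf sdg sdh sdi sdj sdn sdo sdq; do
    echo "=== /dev/$d ==="
    fdisk -l /dev/$d | grep -A3 '^Device'
done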
July 19, 2024  Author

Seems like I have missing partitions. Any way to restore the ZFS ones? I'm so perplexed that they disappeared on a reboot.
July 19, 2024  Author

Even one that zpool import detects as online doesn't have that ZFS partition...
July 19, 2024  Community Expert

If the disks are all the same capacity you can try to copy the partition layout from one of the good disks to the bad ones; try with just one first: https://forums.unraid.net/topic/141059-lost-14tb-zfs-pool-after-612/?do=findComment&comment=1276198

You may need to reboot after running the command for the new partition layout to be detected.
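The linked post has the exact procedure; in general terms this is sgdisk's replicate option, which looks something like the sketch below (sdBAD/sdGOOD are placeholders, and the argument order deserves triple-checking before running anything):

# Copy the GPT from a known-good disk onto one bad disk.
# WARNING: order matters -- sgdisk replicates FROM the trailing
# (source) device ONTO the device named by -R. Swapping them
# overwrites the good disk's partition table.
sgdisk -R /dev/sdBAD /dev/sdGOOD

# Give the copied table new random GUIDs so the two disks don't clash
sgdisk -G /dev/sdBAD

# Ask the kernel to re-read the partition table (a reboot also works)
partprobe /dev/sdBAD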
July 19, 2024  Author

That seemed to work, but unfortunately the ZFS pool now says FAULTED. Guess that's it then?

zpool import
   pool: zfs
     id: 6938928784629722333
  state: FAULTED
status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or data.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-5E
config:

        zfs           FAULTED  corrupted data
          raidz3-0    DEGRADED
            sdg1      UNAVAIL
            sdj       ONLINE
            sdi       ONLINE
            sdb1      UNAVAIL
            sdc       ONLINE
            sde       ONLINE
            sdd       ONLINE
            sdf       ONLINE
            sdn1      UNAVAIL  invalid label
            sdq       ONLINE
            sdo       ONLINE
            sdh       ONLINE
July 19, 2024  Author

Unfortunately, I am getting an I/O error.

zpool import -o readonly=on zfs
cannot import 'zfs': I/O error
        Destroy and re-create the pool from
        a backup source.
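At this point ZFS's recovery-mode import is sometimes worth trying before giving up, though there is no guarantee it helps here. A sketch (-n makes -F a dry run that reports whether discarding the last few transactions could make the pool importable, without writing anything):

# Dry run: check whether a transaction rewind might work
zpool import -F -n zfs

# If the dry run looks promising, attempt the rewind read-only
zpool import -F -o readonly=on zfs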
July 20, 2024  Community Expert

You can try repairing the other 3 partitions, but the labels probably aren't the only problem.
July 31, 2024  Author  Solution

Well, I did try repairing the remaining partitions. Unfortunately, I mixed up one of the drives and damaged a good partition, which seems to be what finished me off. In the end I gambled on Klennet ZFS Recovery; it took many days of scanning and the software cost a lot of money, but I was able to recover my important files, and I will be rebuilding the pool inside Unraid. Now on to making a proper cloud backup of important files. I can't put it off anymore.

Thank you for the help on this one, @JorgeB! I very much appreciated it.

Edited July 31, 2024 by democratic-favour2351
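A takeaway for anyone landing here later: saving every disk's partition table before experimenting makes a mix-up like the one above reversible. A sketch (file paths are illustrative):

# Back up the GPT of each disk to a file before touching anything;
# a wrong sgdisk -R can then be undone with --load-backup
for d in /dev/sd[a-z]; do
    sgdisk --backup="/root/gpt-$(basename $d).bin" "$d"
done

# Restore later if needed (example for one disk)
# sgdisk --load-backup=/root/gpt-sdi.bin /dev/sdi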