Invalid Labels, pool no longer available after a restart, can't import


Please help. I think I'm screwed, but I don't understand some of these errors.

 

I'm trying to re-import a pool, but I get an "invalid label" error. How do I correct that?

 

Any chance of importing this pool in a read-only state? I've been unsuccessful thus far.

 

status: One or more devices contains corrupted data.
 action: The pool cannot be imported due to damaged devices or data.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-5E
 config:

        zfs                      UNAVAIL  insufficient replicas
          raidz3-0               UNAVAIL  insufficient replicas
            sdg1                 ONLINE
            sdj                  ONLINE
            sdi                  UNAVAIL  invalid label
            sdb1                 ONLINE
            sdc                  ONLINE
            6707043850538251454  UNAVAIL  invalid label
            sdd                  ONLINE
            sdf                  UNAVAIL  invalid label
            sdn1                 ONLINE
            sdq                  ONLINE
            8010871872874012596  UNAVAIL  invalid label
            sdh                  ONLINE
root@Tower:~# zpool import -d /dev/disk/by-id
   pool: zfs
     id: 6938928784629722333
  state: UNAVAIL
status: One or more devices contains corrupted data.
 action: The pool cannot be imported due to damaged devices or data.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-5E
 config:

        zfs                               UNAVAIL  insufficient replicas
          raidz3-0                        UNAVAIL  insufficient replicas
            wwn-0x5000cca26a9b1d64-part1  ONLINE
            wwn-0x5000cca26a3f4e04        ONLINE
            sdi                           UNAVAIL  invalid label
            wwn-0x5000cca26a98c70c-part1  ONLINE
            wwn-0x5000cca25159d4b8        ONLINE
            6707043850538251454           UNAVAIL  invalid label
            wwn-0x5000cca2513d58b0        ONLINE
            sdf                           UNAVAIL  invalid label
            wwn-0x5000cca26a8546a8-part1  ONLINE
            wwn-0x5000cca26a3d1844        ONLINE
            8010871872874012596           UNAVAIL  invalid label
            wwn-0x5000cca26a3fa138        ONLINE
root@Tower:~#
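
For context on the "invalid label" status: ZFS writes four copies of its label to every member device (two at the start of the vdev and two at the end), and a device shows as UNAVAIL with an invalid label when none of the copies can be read at the expected offsets. A hedged way to see what label data, if anything, is still readable on one of the UNAVAIL devices (device paths taken from the output above) is zdb -l:

# Dump any readable ZFS label; try both the whole disk and the partition,
# since the expected label offsets depend on where the vdev starts
zdb -l /dev/sdi
zdb -l /dev/sdi1

If a label turns up on one but not the other, that points at a changed partition table rather than destroyed data.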

tower-diagnostics-20240719-0853.zip

Solved by democratic-favour2351

  • Community Expert

That doesn't look importable unless you can fix one of the unavailable devices. Post the output of:

 

fdisk -l /dev/sdX

 

for each of the 4 UNAVAIL devices.

  • Author

How would I "fix" one? I know I haven't had 4 devices fail.

 

root@Tower:~# fdisk -l /dev/sdi
Disk /dev/sdi: 9.1 TiB, 10000831348736 bytes, 2441609216 sectors
Disk model: HUH721010AL4200
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 126BEC7F-4B6C-47FE-B1D5-B1C8613A8F3D

Device     Start        End    Sectors  Size Type
/dev/sdi1      8 2441609210 2441609203  9.1T Linux filesystem

 

root@Tower:~# fdisk -l /dev/sdf
Disk /dev/sdf: 9.1 TiB, 10000831348736 bytes, 2441609216 sectors
Disk model: HUH721010AL42C0
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: FB57F290-5124-44C3-94D0-E788DC47A715

Device     Start        End    Sectors  Size Type
/dev/sdf1      8 2441609210 2441609203  9.1T Linux filesystem

 

root@Tower:~# fdisk -l /dev/sde
Disk /dev/sde: 9.1 TiB, 10000831348736 bytes, 2441609216 sectors
Disk model: HUH721010AL4200
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: C60CE8CA-1314-4D95-A489-E0D2575A22D1

Device     Start        End    Sectors  Size Type
/dev/sde1      8 2441609210 2441609203  9.1T Linux filesystem

 

root@Tower:~# fdisk -l /dev/sdo
Disk /dev/sdo: 9.1 TiB, 10000831348736 bytes, 2441609216 sectors
Disk model: HUH721010AL4200
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 569DFBDA-AE5A-48BF-B986-B0D773E7D1F0

Device     Start        End    Sectors  Size Type
/dev/sdo1      8 2441609210 2441609203  9.1T Linux filesystem


  • Community Expert

Doesn't look like that pool was created with Unraid. Post the same fdisk output from one of the good disks, and also the output of:

zdb

 

  • Author

It was not; it was brought into Unraid after 6.12. It was created with the community ZFS plugin.

 

root@Tower:~# zdb
cannot open '/etc/zfs/zpool.cache': No such file or directory
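
That zpool.cache error is expected when the pool isn't imported: zdb reads the cache file by default and there is none. A hedged alternative for examining an exported pool is zdb -e, pointing the device search path at the same directory used for the import attempt (-p can be repeated for multiple paths):

zdb -e -p /dev/disk/by-id zfs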

 

root@Tower:~# fdisk -l /dev/sdj
Disk /dev/sdj: 9.1 TiB, 10000831348736 bytes, 2441609216 sectors
Disk model: HUH721010AL42C0
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 54F2203C-3B45-D64C-A263-C5C3A0D5F6A2

Device          Start        End    Sectors  Size Type
/dev/sdj1        2048 2441590783 2441588736  9.1T Solaris /usr & Apple ZFS
/dev/sdj9  2441590784 2441607167      16384   64M Solaris reserved 1

  • Community Expert

That partition layout is completely different from the 4 bad disks, and it is consistent with a zpool created from the command line, which the other ones are not. Check whether the partitions on all the good disks are the same as sdj's.
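
A quick way to compare all the partition tables at once might be a loop like this (a sketch; the device names are the ones from the zpool output above and can change across reboots):

for d in sdb sdc sdd sde sdf sdg sdh sdi sdj sdn sdo sdq; do
    echo "== /dev/$d =="
    fdisk -l /dev/$d | grep '^/dev/'   # print only the partition rows
done

A healthy command-line-created member should show sdj's two-partition layout (Solaris /usr & Apple ZFS plus Solaris reserved 1); a single "Linux filesystem" partition starting at sector 8 suggests the original table was overwritten.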

  • Author

Seems like I have missing partitions. Any way to restore the ZFS ones? I'm so perplexed that they disappeared on a reboot.
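
For the record, the repair implied later in the thread amounts to rewriting the GPT on a damaged disk to match sdj's layout. A heavily hedged sketch, assuming the bad disks were originally partitioned identically to sdj and that only the table, not the data inside the old partition, was lost (BF01 and BF07 are sgdisk's type codes for Solaris /usr & Apple ZFS and Solaris reserved 1):

# DANGER: triple-check the target device first; running this against the
# wrong disk destroys a good partition table (which is exactly what
# happened later in this thread)
sgdisk --zap-all /dev/sdX
sgdisk -n 1:2048:2441590783 -t 1:BF01 /dev/sdX
sgdisk -n 9:2441590784:2441607167 -t 9:BF07 /dev/sdX

Since ZFS only reads the partition boundaries and doesn't maintain the GPT itself, restoring the original boundaries can make the labels readable again.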

  • Author

Even one that zpool import says is ONLINE doesn't have that ZFS partition...

  • Author

Repairing the partition table seemed to work, but unfortunately the ZFS pool now says FAULTED.

 

Guess that's it then?

 

zpool import
   pool: zfs
     id: 6938928784629722333
  state: FAULTED
status: One or more devices contains corrupted data.
 action: The pool cannot be imported due to damaged devices or data.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-5E
 config:

        zfs         FAULTED  corrupted data
          raidz3-0  DEGRADED
            sdg1    UNAVAIL
            sdj     ONLINE
            sdi     ONLINE
            sdb1    UNAVAIL
            sdc     ONLINE
            sde     ONLINE
            sdd     ONLINE
            sdf     ONLINE
            sdn1    UNAVAIL  invalid label
            sdq     ONLINE
            sdo     ONLINE
            sdh     ONLINE

  • Author

Unfortunately, I am getting an I/O error.

 

zpool import -o readonly=on zfs
cannot import 'zfs': I/O error
        Destroy and re-create the pool from
        a backup source.

  • Community Expert

You can try repairing the other 3 partitions, but probably the labels aren't the only problem.
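
For completeness, one stock-OpenZFS last resort for a pool that shows FAULTED with corrupted data is a rewind import, which discards the last few transaction groups; a hedged sketch (-n turns the rewind into a dry run that only reports whether it would succeed, and readonly avoids writing anything while testing):

zpool import -F -n zfs
zpool import -F -o readonly=on zfs

There is no guarantee this helps when labels and partition tables were damaged, as noted above.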

  • 2 weeks later...
  • Author
  • Solution

Well, I did try repairing the remaining partitions. Unfortunately, I mixed up one of the drives and damaged a good partition, which seemed to seal my fate. In the end, I gambled on Klennet ZFS Recovery; it took many days of scanning and the software cost a lot of money, but I was able to recover my important files, and I will be rebuilding the pool inside Unraid.

 

Now onto making a proper cloud backup of important files. Can't put it off anymore.
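
For anyone else landing here, a minimal snapshot-and-send sketch for getting an important dataset into a single cloud-friendly file (the dataset, snapshot, and output names are hypothetical):

# Take a point-in-time snapshot, then serialize it into a compressed
# stream that any object storage can hold
zfs snapshot zfs/important@offsite-2024-08
zfs send zfs/important@offsite-2024-08 | gzip > /mnt/user/backups/important-offsite.zfs.gz

Restoring later is a zfs receive fed from the same file.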

 

Thank you for the help on this one, @JorgeB! I very much appreciated it.

