I've been using the ZFS plugin for Unraid for a couple of years with no issue. Today, I figured I should upgrade to 6.12 to get that native ZFS support.
I upgraded, then followed the ZFS pools section of the release notes (https://docs.unraid.net/unraid-os/release-notes/6.12.0/#zfs-pools) to import my pool. Nothing happened on my first try: the pool showed as exported but was neither imported nor mounted, and the disks said "filesystem wrong or not present".
I thought: OK, maybe I need to select the devices in a different order in the UI? So I stopped the array and changed the mapping in the UI. The UI flagged 3 out of 4 disks as "wrong", but since the current mapping wasn't doing anything anyway, I figured it was worth a shot. I started the array with the disks remapped. The visual result was the same: the UI still said "filesystem wrong or not present". Ok.
Then I SSH'd into the server and ran `zpool import`, and got this:
root@home:~# zpool import
   pool: icybox
     id: 2906712194322086713
  state: UNAVAIL
 status: One or more devices contains corrupted data.
 action: The pool cannot be imported due to damaged devices or data.
    see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-5E
 config:

        icybox        UNAVAIL  insufficient replicas
          mirror-0    DEGRADED
            sdd2      UNAVAIL
            sdc2      ONLINE
          mirror-1    UNAVAIL  insufficient replicas
            sde2      UNAVAIL
            sdb2      UNAVAIL
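One thing that makes me suspicious: the pool was built back when the old plugin referred to disks by their sdX names, and those letters can shift between boots. As a sanity check (a sketch, I haven't run these yet), I'm thinking of mapping the sdX names to stable IDs and pointing the import at those instead:

```shell
# See which stable by-id names currently map to sdb2..sde2,
# to check whether the device letters shifted since the pool was created
ls -l /dev/disk/by-id/ | grep -E 'sd[b-e]2'

# Try the import against the stable names rather than sdX,
# using the numeric pool id from the zpool import output above
zpool import -d /dev/disk/by-id 2906712194322086713
```

No idea if that's the actual problem here, but it seems like a cheap thing to rule out.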
Since then I have been unable to access any data in the pool. Importing by name says:
root@home:~# zpool import icybox
cannot import 'icybox': no such pool or dataset
        Destroy and re-create the pool from
        a backup source.
which isn't very helpful, as I don't have a backup of 14 TB of data anywhere else.
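For what it's worth, here's what I was planning to try next, based on my reading of the zpool-import man page. I haven't run any of this yet, so treat it as a sketch rather than a known fix:

```shell
# Dry-run a recovery import: -F attempts to rewind to an earlier
# transaction group, and -n only reports whether that would succeed
# without actually modifying the pool
zpool import -F -n icybox

# If the dry run looks promising, import read-only so nothing is
# written to the pool while I copy data off it
zpool import -F -o readonly=on icybox
```

Happy to hear if that's a terrible idea, or if there's something safer to try first.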
So, yeah. Not great. Wondering if someone has any ideas on whether or not this is salvageable.