diversario



  1. Got it. Opened a PR for updating the release notes about this.
  2. Hmm, I upgraded and `zpool import`ed the pool, but after a reboot the pool was exported again. I added a `zpool import` to `/boot/config/go` and that seemed to do the trick 🤷‍♂️ (a sketch of that go-file addition follows this list).
  3. Oh, I see. Would upgrading to 6.12 without doing anything with the pool still be ok? I recall that post-upgrade the pool was exported (and I didn't import it manually). Do you know if upgrade + manual import would work or should I just wait for 6.13?
  4. Oh, hrm... I do not remember - I created it just using the zpool CLI, but I don't recall giving it any overrides. I think I was first using the drives with Unraid's standard configuration, but then decided to use ZFS and created a pool. Maybe. Is it a problem for the plugin that ZFS is on partition #2? (There's a label-check sketch after this list.)
  5. Oh, ok, I think I get it, thanks! Regarding the 6.12 upgrade - I did initially follow the instructions, because this pool was indeed created using that plugin. But on start the pool remained unimported and it was not present in the UI; before I wrecked it, the import command still showed the pool as OK and available for import after I added a pool in the UI. Should I have imported the pool manually at that point?
  6. I can't believe that it's all back. @JorgeB, how can I express my gratitude? I would love to reward you somehow for rescuing my data, even though you haven't asked for anything. I don't understand what the commands you posted do, but they sure worked. Could you please explain what exactly happened here?
  7.
     ```
     root@home:~# zpool import
        pool: icybox
          id: 2906712194322086713
       state: UNAVAIL
      status: One or more devices contains corrupted data.
      action: The pool cannot be imported due to damaged devices or data.
         see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-5E
      config:

             icybox        UNAVAIL  insufficient replicas
               mirror-0    ONLINE
                 sdd2      ONLINE
                 sdc2      ONLINE
               mirror-1    UNAVAIL  insufficient replicas
                 sde2      UNAVAIL
                 sdb2      UNAVAIL
     root@home:~#
     ```
     So I got this... but I'm not sure what "safe to do the same for the other two disks" means here.
  8. I've been using the ZFS plugin for Unraid for a couple of years with no issue. Today I figured I should upgrade to 6.12 to get the native ZFS support. I upgraded, then followed the https://docs.unraid.net/unraid-os/release-notes/6.12.0/#zfs-pools section to import my pool. Nothing happened on my first try: the pool was exported but not imported and not mounted, and the disks said "filesystem wrong or not present". I thought - ok, maybe I need to select the devices in a different order in the UI? So I stopped the pool and changed the mapping in the UI. The UI said "wrong" for 3 out of 4 disks, but, seeing how the current mapping did nothing anyway, I figured this should be ok, and I started the array with the disks remapped. The visual result was the same: the UI saying "filesystem wrong or not present". Ok. Then I SSH'd into the server, ran `zpool import`, and got this:
     ```
     root@home:~# zpool import
        pool: icybox
          id: 2906712194322086713
       state: UNAVAIL
      status: One or more devices contains corrupted data.
      action: The pool cannot be imported due to damaged devices or data.
         see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-5E
      config:

             icybox        UNAVAIL  insufficient replicas
               mirror-0    DEGRADED
                 sdd2      UNAVAIL
                 sdc2      ONLINE
               mirror-1    UNAVAIL  insufficient replicas
                 sde2      UNAVAIL
                 sdb2      UNAVAIL
     ```
     Since then I've been unable to access any data in the pool. Import says:
     ```
     root@home:~# zpool import icybox
     cannot import 'icybox': no such pool or dataset
             Destroy and re-create the pool from
             a backup source.
     ```
     which isn't very helpful, as I did not back up 14TB of data somewhere else. So, yeah. Not great. Wondering if someone has any ideas on whether or not this is salvageable (some generic diagnostic sketches follow this list).
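
The `/boot/config/go` workaround from post 2, as a minimal sketch. It assumes the pool is named `icybox` (as in this thread), that the default go file still only launches emhttp, and that a plain named import is acceptable on the setup in question - none of that is confirmed by the posts above.

```sh
#!/bin/bash
# /boot/config/go runs on every boot, before the Unraid web UI starts.

# Hypothetical addition: import the pool by name if it is not already imported.
if ! zpool list icybox >/dev/null 2>&1; then
    zpool import icybox
fi

# Stock Unraid line, kept as-is.
/usr/local/sbin/emhttp &
```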
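Post 4 asks whether it matters that ZFS lives on partition #2. One way to see what is actually on each disk is to check the partition layout and read the ZFS labels directly; a sketch using standard tools, with `sdc`/`sdc2` taken from the thread as example device names (they can change between reboots):

```sh
# List partitions and filesystem types on one of the mirror members.
lsblk -o NAME,SIZE,FSTYPE,PARTLABEL /dev/sdc

# Dump the ZFS label from the partition zpool reported (sdc2 here).
# A healthy member prints the pool name, GUIDs and vdev tree; if all four
# labels fail to unpack, ZFS no longer finds its metadata on that partition.
zdb -l /dev/sdc2
```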
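For the failed import in post 8, the commands below are only generic, read-oriented diagnostics - not the recovery steps JorgeB actually posted, which are not reproduced in this list. The pool name and the assumption that `/dev/disk/by-id` paths exist on the system are taken for illustration:

```sh
# Scan a specific directory for pool members instead of the default /dev;
# by-id paths are stable across reboots, unlike sdb/sdc/sdd/sde.
zpool import -d /dev/disk/by-id

# If the pool shows up as importable, attempt a read-only import so that
# nothing gets written to the pool while its state is still unclear.
zpool import -d /dev/disk/by-id -o readonly=on icybox
```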