Force-degrade pool


Solved by JorgeB

Since my ZFS pool fails to mount, I thought maybe I could nudge it into working by changing enough things about it that the system is forced to take a fresh look at it…or something. It's a RAIDZ1; it should be able to work with any one disk missing. I removed one disk and booted the system.

 

I log in to mount the pool manually (auto-mount is temporarily disabled), but it tells me I have a missing cache disk and won't let me start the array without extra steps. I'm not sure what it's talking about, since I don't have a cache anything; it's an all-flash ZFS pool, there's no need. I have one other obligatory disk in the regular array, but it's empty and doesn't have any of the supporting pools (parity, cache).

 

[Screenshot, 2023-11-22 02:02: the array-start warning about a missing cache disk]

 

 

Does it refer to this pool – the RAIDZ one – as the cache?

 

And a follow-up: when it says "remove the missing cache disk", does it mean the individual disk drive, or the whole pool that Unraid presents as a "disk"?

 

And a follow-up to the follow-up: if it means the whole pool, how do I get it to mount degraded then?

 

I can't remove one disk while the pool is mounted to degrade it live, because it won't mount when it's whole in the first place; that's exactly why I'm trying to get it to mount degraded.
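For what it's worth, a degraded RAIDZ1 can usually be imported straight from the shell, bypassing the array question entirely. A minimal sketch, assuming the pool can be imported by name (the name alpha comes from later in this thread):

```shell
# List importable pools and their reported state; makes no changes
zpool import

# Import despite one missing member; RAIDZ1 tolerates a single
# absent disk. -f overrides the "pool was last used by another
# system" safety check if the hostid changed.
zpool import -f alpha

# Confirm which member is missing and that the pool is DEGRADED
zpool status alpha
```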

 

That's all, thanks.


It shows up! :D

 

Last login: Wed Nov 22 03:15:25 on ttys003
[Wed22@ 5:36:29][v@zx9:~] $ssh zx3
Linux 6.1.49-Unraid.
[Wed22@ 5:36:32][root@zx3:~] #〉zpool import
   pool: disk1
     id: 9807385397724693529
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

	disk1       ONLINE
	  sdd1      ONLINE

   pool: alpha
     id: 1551723972850019203
  state: DEGRADED
status: One or more devices contains corrupted data.
 action: The pool can be imported despite missing or damaged devices.  The
	fault tolerance of the pool may be compromised if imported.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J
 config:

	alpha       DEGRADED
	  raidz1-0  DEGRADED
	    sdb1    ONLINE
	    sdc1    ONLINE
	    sdd1    UNAVAIL  invalid label
[Wed22@ 5:36:35][root@zx3:~] #〉zpool list
no pools available
[Wed22@ 5:38:17][root@zx3:~] #〉

 

sdd1 is not inserted, BTW; no idea why it says invalid label. Now I just need to figure out the order in which to start the array, and whether I should start it at all.

 

I think it should be mountable without starting the array with zpool import alpha, though I'll keep reading for clues (I'm using the man pages from Fedora 39's ZFS build), then skim Unraid's docs one last time. It's very little VM data, of which I have a backup, or rather an older version; the copy on these disks has been OCPDed to the max.
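If the goal is only to reach the data, a read-only import is the gentler variant, since nothing gets written back to the degraded pool. A sketch, not taken from the thread:

```shell
# Import without mounting any datasets (-N), read-only
zpool import -N -o readonly=on alpha

# Mount the datasets explicitly once the import succeeds
zfs mount -a

# Export cleanly afterwards so Unraid can pick the pool up later
zpool export alpha
```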

 

Thank you ! ❤️


Scratch that, I couldn't wait.

 

I did zpool import disk1 to import the main array disk without starting the array, and it succeeded. So that's the answer to the question.

 

However, as for what I actually wanted to do, it didn't quite work: the problem from earlier, where the system would hang forever (and show some small kernel panics) when attempting to mount the pool, is still there.

 

[Wed22@ 6:15:58][root@zx3:~] #〉zpool import alpha

Message from syslogd@zx3 at Nov 22 06:16:46 ...
 kernel:VERIFY3(size <= rt->rt_space) failed (281442912784384 <= 2054406144)

Message from syslogd@zx3 at Nov 22 06:16:46 ...
 kernel:PANIC at range_tree.c:436:range_tree_remove_impl()

 

I haven't given up though, I think I still might have an idea or two.
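One avenue worth noting: that VERIFY failure in range_tree_remove_impl() fires while replaying space maps during import, and a read-only import skips that replay. The usual escalation path looks roughly like this; the zfs_recover tunable relaxes some assertions and is a last resort, so treat this as a hedged sketch rather than a recipe:

```shell
# 1) Read-only import avoids replaying the damaged space map
zpool import -o readonly=on alpha

# 2) If it still panics, try rewinding to an earlier transaction
#    group; -F discards the most recent writes
zpool import -F -o readonly=on alpha

# 3) Last resort: tell ZFS to tolerate certain on-disk
#    inconsistencies for this import attempt
echo 1 > /sys/module/zfs/parameters/zfs_recover
zpool import -o readonly=on alpha
```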

 

Thanks! =]


Thanks,

 

I'm sorry, I meant to answer earlier but I got food poisoning, the slept-on-the-shower-floor kind. 🤢 On the plus side, I may have lost some weight.

 

I'm not using the disks anymore. Even though they haven't shown any signs of failure, I know they're old. But pretending they're good, I kept trying different things so I'd know how to proceed when an actual emergency presents itself; the disks wouldn't mount again formatted as ZFS, either in their own pool or in the Unraid pool, as a group or each by itself (after Tools → New Config, of course).

 

I remembered, though, that ZFS label data can persist on a disk even after it has been reformatted to something else, so I changed the format to Btrfs and they finally mounted again. I don't know what to make of it; I'm just leaving it out there for whomever it serves a purpose.
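That lines up with ZFS keeping four copies of its label (two near the start and two near the end of the device), which many formatting tools never overwrite. A sketch of checking for and clearing stale labels; the device names are examples only, and both clearing commands are destructive:

```shell
# Show any ZFS labels still present on the old partition
zdb -l /dev/sdd1

# Explicitly erase the ZFS labels from the partition
zpool labelclear -f /dev/sdd1

# Or wipe every known filesystem signature from the whole disk
wipefs -a /dev/sdd
```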

 

Thanks again. 🙇‍♂️

