Phillycj Posted March 2, 2021
Hi, I updated to 6.9 stable today and, to take advantage of multiple pools, tried removing 2 of my 4 SSDs from the cache pool. I stopped the array, reset the config for the cache pool, and removed the bottom two drives, but did not reduce the slot count of the cache pool. On booting back up, I constantly get the "Unmountable: No pool uuid" error across all of my cache drives, even after re-adding the two removed drives. I looked up the error, but the one forum post I could find mentioning it wasn't much help for my scenario. Can anyone suggest a solution? Thanks!
sol-diagnostics-20210302-1848.zip
JorgeB Posted March 2, 2021
18 minutes ago, Phillycj said:
reset the config for the cache pool and removed the bottom two drives
Not quite clear what you mean by this. Did the pool have data? You can't remove two devices at once and keep the data.
Phillycj Posted March 2, 2021
1 minute ago, JorgeB said:
Not quite clear what you mean by this. Did the pool have data? You can't remove two devices at once and keep the data.
The pool has data, yes. I have not attempted to format any of the drives yet.
JorgeB Posted March 2, 2021
3 minutes ago, Phillycj said:
I have not attempted to format any of the drives yet.
But you unassigned two cache devices and started the array, so they were both wiped:
Mar 2 18:40:43 Sol emhttpd: shcmd (1232): /sbin/wipefs -a /dev/sde1
...
Mar 2 18:40:43 Sol emhttpd: shcmd (1234): /sbin/wipefs -a /dev/sdf1
And as mentioned, you can't remove two devices at the same time and keep the pool; it has to be done one device at a time (assuming a raid1 or raid10 pool).
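For reference, this is roughly what the safe one-at-a-time removal looks like at the btrfs level. A minimal sketch, assuming a raid1 pool mounted at /mnt/cache; the device names are illustrative, and on Unraid the GUI normally drives this rather than manual commands:

# Check the pool layout and confirm the data will fit on the remaining devices
btrfs filesystem show /mnt/cache
btrfs filesystem usage /mnt/cache

# Remove ONE device; btrfs migrates its data to the remaining members before detaching it
btrfs device remove /dev/sde1 /mnt/cache

# Only after that finishes, remove the next device
btrfs device remove /dev/sdf1 /mnt/cache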
Phillycj Posted March 2, 2021
6 minutes ago, JorgeB said:
But you unassigned two cache devices and started the array, so they were both wiped: ...
If just the filesystem on the two drives was wiped, is there any way to restore it while keeping the contents intact?
JorgeB Posted March 2, 2021
Very possibly, but it's beyond my knowledge; you can likely find some help for that using #btrfs on freenode IRC.
Phillycj Posted March 2, 2021
Just now, JorgeB said:
Very possibly, but it's beyond my knowledge; you can likely find some help for that using #btrfs on freenode IRC.
Great, thanks for the help!
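For anyone who finds this thread later: wipefs -a zeroes the filesystem signatures rather than the data itself, and btrfs keeps backup superblocks at the 64MiB offset (and at 256GiB on large enough devices), so recovery is often possible. A minimal, hedged sketch of the kind of steps the #btrfs channel would likely walk through; the device name is illustrative and this is not guaranteed to fit every pool:

# Dump all superblock copies on a wiped device to see what survived
btrfs inspect-internal dump-super -a /dev/sde1

# If the backup copies are intact, restore the primary superblock from them
btrfs rescue super-recover -v /dev/sde1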