Fiala06 Posted March 20, 2023

One of my cache drives (2 total) is failing, so I went to replace it, and now I'm getting "Unmountable: Invalid pool config". I've tried setting the failed disk to not installed and starting and stopping the array, but no matter what I do I can't get it to start again. Any ideas?

unraid-diagnostics-20230320-1639.zip
Fiala06 Posted March 21, 2023

So if I remove the cache pool and create a new one with a single drive, it works fine. As soon as I add a second drive I get the same error. I even tried formatting the new SSD before adding it to the cache.
JorgeB Posted March 21, 2023

Post the output of btrfs fi show
Fiala06 Posted March 21, 2023 (edited)

4 hours ago, JorgeB said: Post the output of btrfs fi show

root@UNRAID:~# btrfs fi show
Label: none  uuid: 8259fa8c-4b09-4304-aed8-6fe58d49323c
	Total devices 2 FS bytes used 581.62GiB
	devid 1 size 931.51GiB used 589.03GiB path /dev/nvme0n1p1
	devid 4 size 0 used 0 path /dev/sdq1 MISSING

Label: none  uuid: cd6041b9-7b7e-4203-8bd5-38bc9167f1e4
	Total devices 2 FS bytes used 4.11GiB
	devid 1 size 465.76GiB used 7.03GiB path /dev/sdt1
	devid 3 size 931.51GiB used 7.03GiB path /dev/sdh1

Label: none  uuid: c854bdcc-ee55-4a4f-bd72-a29f5222a437
	Total devices 2 FS bytes used 834.26GiB
	devid 1 size 931.51GiB used 839.03GiB path /dev/sdm1
	devid 3 size 1.82TiB used 839.03GiB path /dev/sdy1

Label: none  uuid: 3e52a2e4-0036-41a0-bdd7-4e133ef6acb1
	Total devices 1 FS bytes used 22.38GiB
	devid 1 size 80.00GiB used 26.02GiB path /dev/loop2

The nvme0n1p1 is the drive I was replacing. The other drive in the cache pool, sdq, I also replaced about a week ago. Here you can see the cache pool is working with the single disk (sdq).

Edited March 21, 2023 by Fiala06
JorgeB Posted March 21, 2023

Mar 20 16:26:15 UNRAID kernel: BTRFS: device fsid 8259fa8c-4b09-4304-aed8-6fe58d49323c devid 1 transid 65222504 /dev/nvme0n1p1 scanned by udevd (1051)
Mar 20 16:26:15 UNRAID kernel: BTRFS: device fsid 8259fa8c-4b09-4304-aed8-6fe58d49323c devid 4 transid 65213478 /dev/sdq1 scanned by udevd (1051)

Not sure what happened here exactly, but they are out of sync, note the different transids; the NVMe device has a more recent filesystem. Also, I'm not quite following this:

41 minutes ago, Fiala06 said: The nvme0n1p1 is the drive I was replacing. The other drive in the cache pool, sdq, I also replaced about a week ago.

Which devices were part of the original pool, and in what order did you attempt to replace them? The pool is kind of a mess at the moment; it might be easier to just back up the current data and re-format.
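As an aside, the transid gap can also be read straight from each device's superblock with `btrfs inspect-internal dump-super` (part of btrfs-progs). Since that needs the actual devices, the comparison below is only a sketch, with the generation numbers hard-coded from the syslog lines above:

```shell
# On the live system you would read the superblock generation (transid)
# from each pool member, e.g.:
#   btrfs inspect-internal dump-super /dev/nvme0n1p1 | grep '^generation'
#   btrfs inspect-internal dump-super /dev/sdq1      | grep '^generation'
# Here the values are hard-coded from the syslog lines above.
nvme_gen=65222504   # transid logged for /dev/nvme0n1p1
sdq_gen=65213478    # transid logged for /dev/sdq1

if [ "$nvme_gen" -gt "$sdq_gen" ]; then
  echo "nvme0n1p1 is ahead by $((nvme_gen - sdq_gen)) transactions"
fi
```

A healthy mirror shows the same generation on both members; a gap this large means one device missed thousands of commits.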
Fiala06 Posted March 21, 2023

This was the original pool: I replaced the 870 EVO (sds) with the CT1000MX500SSD1 (sdq) about a week ago, and everything went well. Fast forward: two days ago the nvme0n1 started throwing all sorts of errors (tested, and it is failing). I ordered a new drive, attempted to put it in yesterday, and here we are.
JorgeB Posted March 21, 2023

Any idea then why sdq is devid 4? A two-member pool will have devids 1 and 2, and the replacements would inherit the same ids, 1 and 2.
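The devid mismatch can be picked out of the `btrfs fi show` output with a quick parse. The sample text below is pasted from the output earlier in the thread, so this is just an illustration, not something run against the pool:

```shell
# Sample devid lines copied from the btrfs fi show output above.
fi_show='devid 1 size 931.51GiB used 589.03GiB path /dev/nvme0n1p1
devid 4 size 0 used 0 path /dev/sdq1 MISSING'

# Print the devid and the last field (device path, or MISSING).
# A healthy two-device pool would print devids 1 and 2 here.
echo "$fi_show" | awk '{print $2, $NF}'
# → 1 /dev/nvme0n1p1
# → 4 MISSING
```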
Fiala06 Posted March 21, 2023 (edited)

Maybe from me trying to get them to work, starting and stopping the array and moving them around? I'm really not sure.

Edit: Since it's working now with a single cache drive, could I just do a new config and then add the 2nd cache? Would that reset the ids?

Edited March 21, 2023 by Fiala06
JorgeB Posted March 21, 2023

No, but if you wipe the other device first (with, for example, blkdiscard) it should work.
Fiala06 Posted March 21, 2023

5 minutes ago, JorgeB said: No, but if you wipe the other device first (with, for example, blkdiscard) it should work.

I've never used that before, so blkdiscard /dev/nvme0n1? Since that's the drive I will no longer be using?
JorgeB Posted March 21, 2023

Add -f to force, since there will be a filesystem there:

blkdiscard -f /dev/nvme0n1
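Since blkdiscard irreversibly wipes the whole device, a guard against typos in the device path is worth having. A hypothetical wrapper (the final echo is a dry-run stand-in for the real call; swap it out once you are sure of the target):

```shell
# Hypothetical safety check: refuse to discard a device that is
# currently mounted. Dry run by default; the echo stands in for the
# real blkdiscard call.
safe_wipe() {
    dev="$1"
    if grep -q "^$dev " /proc/mounts; then
        echo "refusing: $dev appears in /proc/mounts" >&2
        return 1
    fi
    # When ready, replace the echo with: blkdiscard -f "$dev"
    echo "would run: blkdiscard -f $dev"
}

safe_wipe /dev/nvme0n1
```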
JorgeB Posted March 21, 2023

Also, before doing anything, make sure backups are up to date, just in case.
Fiala06 Posted March 21, 2023

3 hours ago, JorgeB said: Also, before doing anything, make sure backups are up to date, just in case.

Just ran it after backing everything up:

root@UNRAID:~# blkdiscard -f /dev/nvme0n1
blkdiscard: Operation forced, data will be lost!
root@UNRAID:~# btrfs fi show
Label: none  uuid: 8259fa8c-4b09-4304-aed8-6fe58d49323c
	Total devices 2 FS bytes used 583.02GiB
	devid 1 size 931.51GiB used 593.03GiB path /dev/nvme0n1p1
	devid 4 size 0 used 0 path /dev/sdq1 MISSING

Label: none  uuid: cd6041b9-7b7e-4203-8bd5-38bc9167f1e4
	Total devices 2 FS bytes used 4.11GiB
	devid 1 size 465.76GiB used 6.03GiB path /dev/sdt1
	devid 3 size 931.51GiB used 6.03GiB path /dev/sdh1

Label: none  uuid: c854bdcc-ee55-4a4f-bd72-a29f5222a437
	Total devices 2 FS bytes used 834.35GiB
	devid 1 size 931.51GiB used 839.03GiB path /dev/sdm1
	devid 3 size 1.82TiB used 839.03GiB path /dev/sdy1

Label: none  uuid: 3e52a2e4-0036-41a0-bdd7-4e133ef6acb1
	Total devices 1 FS bytes used 22.14GiB
	devid 1 size 80.00GiB used 26.02GiB path /dev/loop2
JorgeB Posted March 21, 2023

It's still showing that filesystem, and possibly that was the one getting mounted. Reboot and post that output again.
Fiala06 Posted March 21, 2023

Rebooted, and now the array didn't auto-start.
JorgeB Posted March 21, 2023

Start without any pool device assigned (you need to check "I want to do this" next to the Start button), then stop the array, re-assign the pool device, and start the array.
Fiala06 Posted March 21, 2023

Well, they both now say "Unmountable: Wrong or no file system".
JorgeB Posted March 21, 2023 (Solution)

I think you should just reformat and restore from the backup; something weird happened there.
Fiala06 Posted March 22, 2023

6 hours ago, JorgeB said: I think you should just reformat and restore from the backup; something weird happened there.

Thanks again for all your help and quick replies! I managed to format both new drives and just finished restoring everything.