TR1PL3D Posted February 5, 2023

Hi, I have been trying out Unraid for the last couple of weeks. It's great, I wish I had discovered it sooner!

I was running Unraid on a test system, an HP Gen8 MicroServer. It worked great, but I decided I needed more CPU power and RAM, so I moved to a new system: a 7700K with 32GB of RAM. All went well until I tried to add an NVMe drive to my cache pool. I added a 1TB NVMe to my existing cache, and now Unraid is saying "Unmountable: No file system" for the drives that were already in the cache pool. I stopped the array and removed the 1TB NVMe, but the "Unmountable: No file system" issue is still apparent. The cache pool had two 128GB SSDs before I added the NVMe drive.

All of my Docker instances are now missing, and the appdata folder is empty. What can I do to bring the cache back online without losing all of the Docker images I have spent hours setting up?

Thanks 😎
trurl Posted February 5, 2023

Attach diagnostics to your NEXT post in this thread.
TR1PL3D (Author) Posted February 5, 2023

Hopefully I have attached the correct information 😁

irisone-diagnostics-20230205-2221.zip
JorgeB Posted February 6, 2023

It's reporting a missing device. Post the output of:

btrfs fi show
TR1PL3D (Author) Posted February 6, 2023

1 hour ago, JorgeB said:
    It's reporting a missing device. Post the output of: btrfs fi show

root@IrisOne:~# btrfs fi show
Label: none  uuid: 72019e6f-c92a-49a2-bf0b-fde569cae569
        Total devices 1  FS bytes used 340.00KiB
        devid 1 size 20.00GiB used 536.00MiB path /dev/loop2

Label: none  uuid: 8ab324e0-0ade-44d1-b003-350c497feab8
        Total devices 1  FS bytes used 412.00KiB
        devid 1 size 1.00GiB used 126.38MiB path /dev/loop3

Label: none  uuid: bea95f7c-fcd4-4b53-8436-c4774afc1081
        Total devices 3  FS bytes used 61.36GiB
        devid 1 size 119.24GiB used 35.03GiB path /dev/sde1
        devid 2 size 119.24GiB used 35.03GiB path /dev/sdg1
        *** Some devices missing

Here you go, thanks for your help 😁
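For anyone hitting a similar pool problem: the important part of that output is the `*** Some devices missing` warning on the pool's filesystem. A minimal sketch of spotting it from a script, using a captured copy of the output above (the grep pattern simply matches the wording btrfs-progs prints here; on a live system you would capture `btrfs fi show` directly, which needs root):

```shell
# Degraded-pool check: scan 'btrfs fi show' output for the missing-device warning.
# 'output' here is a captured sample from this thread; on a real system use
# output="$(btrfs fi show)" instead.
output="$(cat <<'EOF'
Label: none  uuid: bea95f7c-fcd4-4b53-8436-c4774afc1081
        Total devices 3  FS bytes used 61.36GiB
        devid 1 size 119.24GiB used 35.03GiB path /dev/sde1
        devid 2 size 119.24GiB used 35.03GiB path /dev/sdg1
        *** Some devices missing
EOF
)"
if printf '%s\n' "$output" | grep -q 'Some devices missing'; then
  echo "pool is degraded"   # prints here: the sample contains the warning
fi
```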
JorgeB Posted February 6, 2023 (Solution)

With the array stopped, post the output of:

btrfs-select-super -s 1 /dev/nvme0n1p1

After doing that, post "btrfs fi show" again.
TR1PL3D (Author) Posted February 6, 2023

Here's the output:

root@IrisOne:~# btrfs-select-super -s 1 /dev/nvme0n1p1
using SB copy 1, bytenr 67108864

root@IrisOne:~# btrfs fi show
Label: none  uuid: bea95f7c-fcd4-4b53-8436-c4774afc1081
        Total devices 3  FS bytes used 61.36GiB
        devid 1 size 119.24GiB used 33.03GiB path /dev/sde1
        devid 2 size 119.24GiB used 33.03GiB path /dev/sdg1
        devid 3 size 931.51GiB used 4.00GiB path /dev/nvme0n1p1
JorgeB Posted February 6, 2023

Now unassign all three pool devices, start the array, stop the array, re-assign all three pool devices, start the array, and post new diagnostics.
TR1PL3D (Author) Posted February 6, 2023 (edited)

I got too impatient and started the array after running those commands! It's all working fine: Unraid is no longer telling me that the drives need formatting, and my Docker apps are available again!

I'd be interested to learn why it worked. Can you tell me what this command did to resolve the issue, please?

btrfs-select-super -s 1 /dev/nvme0n1p1

I really appreciate your help, you saved me hours of reconfiguring my Docker apps! 😁

Edited February 6, 2023 by TR1PL3D
JorgeB Posted February 6, 2023

5 minutes ago, TR1PL3D said:
    Can you tell me what this command did to resolve the issue, please?

That command restores the superblock from a backup copy. The original one was deleted because you unassigned the device and started the array: when you do that, Unraid runs wipefs on the device, which deletes the main superblock.
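To unpack that a little: btrfs keeps up to three superblock copies per device at fixed byte offsets, and wipefs only clears the primary one, so the mirrors survive and btrfs-select-super can copy one back. The "bytenr 67108864" printed earlier in the thread is exactly the 64MiB mirror (copy 1). A quick arithmetic sketch (offsets are from the btrfs on-disk format):

```shell
# btrfs stores up to three superblock copies at fixed byte offsets on each device.
# wipefs only erases the primary copy, which is why 'btrfs-select-super -s 1'
# could restore the pool from the surviving 64MiB mirror.
PRIMARY=$((64 * 1024))                 # copy 0 at 64KiB  (the one wipefs erased)
COPY1=$((64 * 1024 * 1024))            # copy 1 at 64MiB
COPY2=$((256 * 1024 * 1024 * 1024))    # copy 2 at 256GiB (only on large devices)
echo "$COPY1"                          # 67108864, matching the bytenr printed above
```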
TR1PL3D (Author) Posted February 6, 2023

2 hours ago, JorgeB said:
    That command restores the superblock from a backup copy.

Thanks for the info, and thanks for your help with getting me up and running again. Unraid is awesome 😎