
Issues with Cache after moving to new hardware.


TR1PL3D
Solved by JorgeB

Hi, I have been trying out Unraid for the last couple of weeks. It's great, wish I had discovered it sooner!

 

I was running Unraid on a test system, an HP Gen8 MicroServer. It worked great, but I decided I needed more CPU power and RAM, so I moved to a new system: a 7700K with 32GB of RAM.

 

All went well until I tried to add an NVMe drive to my cache pool.

 

I added a 1TB NVMe drive to my existing cache pool, and now Unraid is saying "Unmountable: No file system" for the drives that were already in the pool. I stopped the array and removed the 1TB NVMe, but the "Unmountable: No file system" issue is still apparent.

 

The cache pool had two 128GB SSDs before I added the NVMe drive.

 

All of my Docker instances are now missing. Appdata folder is empty.

 

What can I do to bring the Cache back online without losing all of the Docker images I have spent hours setting up?

 

Thanks 😎

1 hour ago, JorgeB said:

It's reporting a missing device; post the output of:

btrfs fi show

 

root@IrisOne:~# btrfs fi show
Label: none  uuid: 72019e6f-c92a-49a2-bf0b-fde569cae569
        Total devices 1 FS bytes used 340.00KiB
        devid    1 size 20.00GiB used 536.00MiB path /dev/loop2

Label: none  uuid: 8ab324e0-0ade-44d1-b003-350c497feab8
        Total devices 1 FS bytes used 412.00KiB
        devid    1 size 1.00GiB used 126.38MiB path /dev/loop3

Label: none  uuid: bea95f7c-fcd4-4b53-8436-c4774afc1081
        Total devices 3 FS bytes used 61.36GiB
        devid    1 size 119.24GiB used 35.03GiB path /dev/sde1
        devid    2 size 119.24GiB used 35.03GiB path /dev/sdg1
        *** Some devices missing

 

Here you go, thanks for your help 😁


Here's the output.

 

root@IrisOne:~# btrfs-select-super -s 1 /dev/nvme0n1p1
using SB copy 1, bytenr 67108864
root@IrisOne:~# btrfs fi show
Label: none  uuid: bea95f7c-fcd4-4b53-8436-c4774afc1081
        Total devices 3 FS bytes used 61.36GiB
        devid    1 size 119.24GiB used 33.03GiB path /dev/sde1
        devid    2 size 119.24GiB used 33.03GiB path /dev/sdg1
        devid    3 size 931.51GiB used 4.00GiB path /dev/nvme0n1p1

 


I got too impatient and started the array after running those commands!

 

It's all working fine. Unraid is no longer reporting that the drives need formatting, and my Docker apps are available again!

 

I'd be interested to learn why the command btrfs-select-super -s 1 /dev/nvme0n1p1 resolved the issue. Can you tell me what it did, please?

 

I really appreciate your help; you saved me hours of reconfiguring my Docker apps! 😁

5 minutes ago, TR1PL3D said:

Can you tell me what this command did to resolve the issue, please?

That command restores the superblock from a backup copy. The original was deleted because you unassigned the device and started the array; when you do that, Unraid runs wipefs on the device, which deletes the main superblock.
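For anyone curious about the mechanics: btrfs keeps backup copies of its superblock at fixed byte offsets on each device, which is what lets btrfs-select-super recover from a wiped primary. Here is a minimal sketch of those offsets (assumptions: standard btrfs on-disk layout; the 64MiB figure matches the "bytenr 67108864" printed earlier in this thread, and the device path in the comment is just this thread's example):

```shell
#!/bin/sh
# btrfs superblock copies live at fixed offsets; `btrfs-select-super -s 1`
# copies backup #1 back over the (wiped) primary.
PRIMARY=$((64 * 1024))                # 64 KiB:  primary superblock (the one wipefs erased)
COPY1=$((64 * 1024 * 1024))           # 64 MiB:  backup copy 1 (bytenr 67108864 above)
COPY2=$((256 * 1024 * 1024 * 1024))   # 256 GiB: backup copy 2, only on large enough devices

echo "primary at byte $PRIMARY, copy 1 at byte $COPY1"

# To inspect a backup copy read-only before restoring anything (needs root):
#   btrfs inspect-internal dump-super -s 1 /dev/nvme0n1p1
```

Because wipefs only clears the primary signature, the backup copies survive, which is why the pool came back intact once copy 1 was promoted.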

2 hours ago, JorgeB said:

That command restores the superblock from a backup copy. The original was deleted because you unassigned the device and started the array; when you do that, Unraid runs wipefs on the device, which deletes the main superblock.

 

Thanks for the info. And thanks for your help with getting me up and running again. Unraid is awesome 😎

