dynamis_dk Posted April 4, 2020

With the whole lockdown situation happening in the UK I was doing a few home improvement jobs which required turning the power off to the house. My Unraid server had a graceful shutdown: I manually stopped all Dockers and VMs, powered down via the 'Power Down' button on the Main tab, powered off the UPS, killed the house power and carried on for the afternoon.

I've now come back to power Unraid up again and I'm getting "Unmountable: No file system". I did some testing with an NVMe drive a few months back and, with a lack of understanding on my part, I ended up with a cache pool. I managed to search out enough info to get myself back to a working state, so until today's reboot I've been able to use my 500GB SSD as my cache drive, with the NVMe drive just mounted via the Unassigned Devices plugin so I could copy a few files back and forth to check speeds.

I've done a bit of digging on the forum for guidelines on how to fix this, but I'm cautious about doing anything without assistance as I'm very much hoping my data isn't gone. It seems I've still got a cache pool lingering behind the scenes somewhere. I've seen this posted before, so I hope this info helps to start:

root@unraid:/dev# btrfs fi show
Label: none  uuid: 49098d04-e56e-4515-81b0-dbca32aa2579
        Total devices 1 FS bytes used 392.00KiB
        devid    1 size 1.00GiB used 174.38MiB path /dev/loop2

Label: none  uuid: 478f1048-7afe-4109-aa57-974abe73591a
        Total devices 2 FS bytes used 141.54GiB
        devid    1 size 465.76GiB used 147.01GiB path /dev/sdi1
        *** Some devices missing

I've attached the log from the boot-up for checking over: unraid-syslog-20200404-1622.zip

The only thing I've tried so far: stop the array, remove the cache drive, start it up and confirm no errors, stop the array, assign the drive back as cache, start the array and confirm it's still showing as unmountable.
Any advice on getting back up and running again, please?
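(For anyone hitting the same symptom: the tell-tale in the `btrfs fi show` output above is the "*** Some devices missing" line under a filesystem that reports "Total devices 2". A small sketch of spotting this from a script; the heredoc is a captured sample of the output above so it runs without btrfs installed, but on a live box you would pipe `btrfs fi show` straight into the grep.)

```shell
#!/bin/sh
# Sketch: detect a btrfs pool member whose partner device is absent.
# 'sample' is a captured copy of the output posted above (assumption:
# live usage would be `btrfs fi show | grep ...` instead).
sample=$(cat <<'EOF'
Label: none  uuid: 478f1048-7afe-4109-aa57-974abe73591a
        Total devices 2 FS bytes used 141.54GiB
        devid    1 size 465.76GiB used 147.01GiB path /dev/sdi1
        *** Some devices missing
EOF
)

if printf '%s\n' "$sample" | grep -q 'Some devices missing'; then
  echo "still a pool member: a second device is expected but absent"
fi
```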
JorgeB Posted April 4, 2020

There's a missing device:

Apr 4 16:57:41 unraid kernel: BTRFS error (device sdi1): devid 2 uuid 18f07b0e-c359-4209-9561-aa7f3470d5d6 is missing

Depending on which profile the pool was using, i.e., if it was redundant, it might mount in degraded mode; Unraid should try that if you set cache slots to >1. If that doesn't work there are some recovery options here.
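(The degraded-mount route can also be sketched by hand from the console. This is a hedged outline, not Unraid's own procedure: /dev/sdi1 comes from the output above, the mount point and copy destination are examples, and DRY_RUN=1, the default here, only prints each command so nothing is touched.)

```shell
#!/bin/sh
# Dry-run sketch of a degraded, read-only recovery mount.
# DRY_RUN=1 (default) prints the commands instead of executing them.
DRY_RUN=${DRY_RUN:-1}
run() {
  if [ "$DRY_RUN" = 1 ]; then
    echo "WOULD RUN: $*"
  else
    "$@"
  fi
}

run mkdir -p /mnt/btrfs-rescue                        # example mount point
run mount -o degraded,ro /dev/sdi1 /mnt/btrfs-rescue  # mount without the missing member
run cp -a /mnt/btrfs-rescue/. /mnt/disk1/rescue/      # example: copy data somewhere safe
run umount /mnt/btrfs-rescue
```

Run with DRY_RUN=0 as root only once you're sure of the device path and destination.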
dynamis_dk Posted April 4, 2020 (Author)

As I'm not looking to use a pool, does setting the cache number of disks to 1 delete the pool config in the background, setting the drive back to a single disk? I'll give those recovery options a go to see if I can at least get the current data back. It's not hugely important as I've got a backup from Feb; I was hoping there might be a repair option which just lets me mount the drive again and set the cache to a single disk with no pool, lol.
JorgeB Posted April 5, 2020

On 04/04/2020, dynamis_dk said: "does selecting a cache number of disks as 1 delete the pool config in the background"

No, but it doesn't try to load a pool member, which your device is.
dynamis_dk Posted April 6, 2020 (Author)

Thanks JB, I've managed to mount it and copy off some of the bits I needed. My downloads folder had already been processed and my Docker image needed recreating anyway after an earlier beta bug, so thankfully I'm now back up and running again.
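(For anyone finding this thread later: once a degraded pool mounts writable, an alternative to wiping and reformatting is converting the filesystem back to a true single-device layout. A hedged sketch reusing the dry-run pattern; /dev/sdi1 and /mnt/cache are examples from this thread, DRY_RUN=1 only prints the commands, and btrfs-progs syntax can vary between versions.)

```shell
#!/bin/sh
# Dry-run sketch: convert a degraded two-device pool back to single.
DRY_RUN=${DRY_RUN:-1}
run() {
  if [ "$DRY_RUN" = 1 ]; then echo "WOULD RUN: $*"; else "$@"; fi
}

# Writable degraded mount (needed so balance/remove can rewrite metadata)
run mount -o degraded /dev/sdi1 /mnt/cache
# Rewrite data and metadata to the single profile so no chunk
# references the missing second device
run btrfs balance start -f -dconvert=single -mconvert=single /mnt/cache
# Drop the absent member from the pool record
run btrfs device remove missing /mnt/cache
```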