cisellis Posted March 5, 2021
Okay, so I am trying to figure out whether my situation is salvageable. From what I can tell from this thread: I upgraded to 6.9 and my cache drive became unavailable. My appdata was on there, and I was using XFS on the cache drive because btrfs had been unstable for me previously and I had better luck with XFS. Oops. The symptom was that Docker would no longer start and I couldn't figure out why. I rebooted; no change. Stopped and started the service; no change. I downgraded back to 6.8.3 and got errors about not being able to access docker.img. All my shares were still missing and the Main page was blank. I did a restore using the plugin. The restore finished, and now I'm getting tons of errors about being out of space, plugins not loading, etc. Sample from the Plugins page:

Warning: file_put_contents(): Only 0 of 3886 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix.plugin.manager/include/ShowPlugins.php on line 136

Uninstalling plugins doesn't work either. I can't format the drives because they won't load at all, and I can't connect to the server to pull the syslog. It feels like I'm borked because my cache drive was the wrong filesystem, and my only nuclear option is to completely reinstall Unraid and start over. Thoughts on whether I have any other options and whether my data is salvageable? Thanks in advance for any help.
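For anyone hitting the same "Only 0 of N bytes written" warning: that PHP error usually means the filesystem being written to is full, and checking that is quick. A minimal sketch (the mount points to inspect are my assumption; on Unraid the usual suspects are `/var/log` and the flash at `/boot`):

```shell
# Print the percentage of space used on the filesystem holding a given path.
# Uses GNU coreutils df; "--output=pcent" prints just the Use% column.
disk_use_pct() {
    df --output=pcent "$1" | tail -n 1 | tr -dc '0-9'
}
# Hypothetical checks on an Unraid box:
#   disk_use_pct /var/log
#   disk_use_pct /boot
```

If either prints 100, that full filesystem (not the cache drive itself) may be what's breaking the plugin pages.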
JorgeB Posted March 5, 2021
An unmountable/unassigned cache won't prevent the array from starting, but it's difficult to say more without diags.
cisellis Posted March 5, 2021
So far I can't figure out how to get any diags since nothing will load. I think I can dump the USB and the cache drive to disk by pulling them out of the server. Also, the array did start last time; there are just zero disks, and the only buttons are Reboot and one other.
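When the GUI won't load, logs can often still be grabbed from the local console or SSH. A small sketch (the destination path is an assumption; pick somewhere that survives a reboot, such as the flash drive):

```shell
# Copy the live syslog somewhere persistent before rebooting.
save_syslog() {
    src="$1"
    dest="$2"
    cp "$src" "$dest" && echo "saved $src to $dest"
}
# Hypothetical invocation on Unraid:
#   save_syslog /var/log/syslog /boot/syslog-backup.txt
```

Unraid also ships a `diagnostics` command that writes a zip under /boot/logs, which is usually what support threads ask for.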
cisellis Posted March 5, 2021
Okay, I think I just figured this out. For anyone else who ends up here: I saw another topic that mentions disk.cfg being renamed during the 6.9 install:

"This is because to support multiple pools, code detects the upgrade to 6.9.0 and moves the 'cache' device settings out of 'config/disk.cfg' and into 'config/pools/cache.cfg'. If you downgrade back to 6.8.3 these settings need to be restored."

Since I had downgraded to 6.8.3, I put the USB in another computer, restored the config/disk.cfg file from the .bak, and rebooted Unraid. All my shares are back and I'm on 6.8.3, with no more errors. Next I'm going to set everything on cache to 'No', run the mover, format the cache to btrfs, make sure I run the backup, and try the upgrade to 6.9 again.
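The restore step above can be sketched as a couple of lines of shell, run on another machine with the flash drive mounted. The exact `.bak` filename and the mount point are assumptions here; check what's actually on your flash before copying anything over.

```shell
# Restore Unraid's pre-6.9 disk settings from the backup the upgrade left behind.
restore_disk_cfg() {
    cfg="$1"
    if [ -f "$cfg.bak" ]; then
        cp "$cfg.bak" "$cfg" && echo "restored $cfg"
    else
        echo "no backup found at $cfg.bak"
    fi
}
# Hypothetical invocation, with the flash mounted at /mnt/flash:
#   restore_disk_cfg /mnt/flash/config/disk.cfg
```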
JorgeB Posted March 5, 2021
6 minutes ago, cisellis said: "and try the upgrade to 6.9 again."
If it fails, grab diags in v6.9 before rebooting.
cisellis Posted March 5, 2021
Will do, thanks. Also, I forgot that you have to set cache to 'Yes' and THEN run the mover. If you just set it to 'No', it leaves the files stranded out there.
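Before formatting the pool it's worth verifying the mover actually emptied it, since (as above) the wrong cache setting leaves files stranded. A tiny sketch; `/mnt/cache` is the usual Unraid mount for the pool, but confirm yours:

```shell
# Count regular files left under a path; expect 0 before you format the pool.
count_cache_files() {
    find "$1" -type f | wc -l
}
# Hypothetical check:
#   count_cache_files /mnt/cache
```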
kernelpanic Posted March 5, 2021
@cisellis So is the initial problem here that your cache drive was formatted as XFS? I just upgraded to 6.9 and I'm getting no GUI. Once that's fixed, I'm hoping I don't have this same drama with the cache, as mine is formatted as XFS as well.
cisellis Posted March 6, 2021
Alrighty. Had to take care of work, dinner, etc. The initial problem does seem to be having my cache drive as XFS. When I did the upgrade, Unraid lost access to the cache drive, and somewhere along the way it also corrupted my docker.img. I moved everything off cache, unmounted it, formatted it to btrfs, remounted it, and did the upgrade again. After the upgrade it kicked the cache out as unassigned, but it let me format it and add it right back with no issues (it even offered XFS as the default format). Docker had the same issue as before and failed to start, but this time I got a bunch of btrfs errors about failed reads around Docker. Reading up on that, it seems my docker.img was corrupted when the upgrade rendered the cache drive wonky. I deleted docker.img and restarted the service. It loaded fine and I'm re-adding my containers from templates now. Looks to be salvaged. Probably time for some Docker cleanup anyway.
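The docker.img reset described above boils down to one careful delete. The path below is Unraid's usual default but is an assumption here; confirm yours under Settings > Docker, stop the Docker service first, and let Unraid recreate the image when the service restarts.

```shell
# Remove a corrupted docker.img so the Docker service can rebuild it.
remove_docker_img() {
    img="$1"
    if [ -f "$img" ]; then
        rm "$img" && echo "removed $img"
    else
        echo "nothing to remove at $img"
    fi
}
# Hypothetical invocation:
#   remove_docker_img /mnt/user/system/docker/docker.img
```

Containers come back easily afterwards via the saved templates, as in the post above; only the image file itself is throwaway.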