qwaven Posted May 31, 2021

Hi all,

Wondering if anyone can help me figure this out. I ran the Unraid updater to the latest stable, 6.9.2. After the reboot I noticed my Docker tab now has no containers (it had 3), and when I try to browse my appdata share it shows up empty, even though the Unraid GUI says it contains data. I see two things on the Main page that look like possible issues, but I don't really want to touch anything in case I make things worse.

1. There is a disk that is now listed as unmountable, although my array seems OK. To be clear, I am not 100% sure this disk was even part of my array before. It looks like some sort of SSD, so if I had used it, it would only have been as some sort of cache drive.

2. The Unassigned Devices plugin I had before is now showing this:

Fatal error: Uncaught Error: Call to undefined function _() in /usr/local/emhttp/plugins/dynamix/include/Helpers.php:35 Stack trace: #0 /usr/local/emhttp/plugins/unassigned.devices/UnassignedDevices.php(308): my_scale(120.034123776, NULL) #1 {main} thrown in /usr/local/emhttp/plugins/dynamix/include/Helpers.php on line 35

It was working perfectly fine before the upgrade. Thoughts, anyone? Cheers!
Squid Posted May 31, 2021

Post your diagnostics.

2 hours ago, qwaven said:
    unassigned devices plugin

Make sure that it's up to date. After that, post within the UD support thread.
qwaven Posted May 31, 2021

Hi Squid, thanks a lot for the reply. I'm used to another system and was expecting the packages to be updated when the system was updated. Updating Unassigned Devices seems to have solved that error.

However, I am still having the main issue with getting my system working again. My disk 1 is still listed as "Unmountable: not mounted", which I believe is why I cannot see any of my Docker images. I also cannot locate my appdata folder, which I'm guessing was on that disk. I am not clear why it's not mountable or what I should be doing about this in order to restore my data. If I switch the drive, will my parity drive rebuild it? Hoping for some direction. Cheers!
Squid Posted May 31, 2021

44 minutes ago, qwaven said:
    was expecting the packages to be updated when the system was updated.

Unraid does not update anything out of the box. To keep things up to date, install the Auto Update plugin.

45 minutes ago, qwaven said:
    My disk 1 is still listed as "Unmountable: not mounted", which I believe is why I cannot see any of my Docker images. I also cannot locate my appdata folder, which I'm guessing was on that disk. I am not clear why it's not mountable or what I should be doing about this in order to restore my data. If I switch the drive, will my parity drive rebuild it? Hoping for some direction.

Which is why you should

2 hours ago, Squid said:
    Post your diagnostics
qwaven Posted May 31, 2021

5 minutes ago, Squid said:
    Which is why you should

Ah yes, fair point. Attached. Cheers.

diagnostics-20210531-1701.zip
Squid Posted May 31, 2021

Yeah, disk 1 is corrupted and won't mount. Best to wait for @jonathanm or @JorgeB since it's btrfs.
JorgeB Posted June 1, 2021

You might need to downgrade back to v6.8, but try this first, with the array started:

btrfs-check --clear-space-cache v1 /dev/md1

then

btrfs-check --clear-space-cache v2 /dev/md1

Restart the array and see if it mounts; if it still doesn't, post new diags.
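The "restart and see if it mounts" step above can be sanity-checked from a shell. A minimal sketch, assuming Unraid's usual per-disk mount point of /mnt/disk1 for data disk 1 (the helper name is made up for illustration):

```shell
#!/bin/sh
# Hypothetical helper: report whether a given path is an active mount point.
# On Unraid, data disk 1 normally appears at /mnt/disk1 once the array
# starts; an unmountable disk simply never shows up there.
check_disk() {
    mnt="$1"
    if mountpoint -q "$mnt"; then
        echo "$mnt is mounted"
    else
        echo "$mnt is NOT mounted"
    fi
}

# Demo on / (always a mount point on Linux); on the server you would run:
#   check_disk /mnt/disk1
check_disk /
```

mountpoint(1) ships with util-linux, so it should be available on Unraid's console without installing anything.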
qwaven Posted June 1, 2021

7 hours ago, JorgeB said:
    You might need to downgrade back to v6.8, but try this first, with the array started: btrfs-check --clear-space-cache v1 /dev/md1 then btrfs-check --clear-space-cache v2 /dev/md1. Restart the array and see if it mounts; if it still doesn't, post new diags.

Hi JorgeB,

Thanks for the reply. First I got "command not found", but realized it's actually btrfs check ... (should it be useful for anyone else in the future). Anyway, I did this:

# btrfs check --clear-space-cache v1 /dev/md1
Opening filesystem to check...
Checking filesystem on /dev/md1
UUID: 08428632-0cca-4d98-b643-3bf1dd2f7a34
Free space cache cleared

# btrfs check --clear-space-cache v2 /dev/md1
Opening filesystem to check...
Checking filesystem on /dev/md1
UUID: 08428632-0cca-4d98-b643-3bf1dd2f7a34
no free space cache v2 to clear

After doing this I noticed a message pop up saying it's doing a parity check. Not sure if I should cancel this to restart the array, or if I should wait? It looks like it has already been running for 9 hours and has about 10 hours left to complete. The drive in question still shows the same error. I'll leave it running unless told otherwise. Cheers!

Edited June 1, 2021 by qwaven
JorgeB Posted June 1, 2021

6 minutes ago, qwaven said:
    The drive in question still shows the same error.

It will for sure until you restart the array.
qwaven Posted June 1, 2021

1 minute ago, JorgeB said:
    It will for sure until you restart the array.

Would it be better to cancel the parity check, or should I continue to wait? Cheers!
JorgeB Posted June 1, 2021

It's up to you. If it's an auto check due to an unclean shutdown you can cancel it: if errors were found you'd need to run a correcting check anyway, and if there weren't any you can run one later.
qwaven Posted June 1, 2021

4 minutes ago, JorgeB said:
    It's up to you. If it's an auto check due to an unclean shutdown you can cancel it: if errors were found you'd need to run a correcting check anyway, and if there weren't any you can run one later.

It's something it ran on its own; I have not shut down. Is it perhaps a result of the upgrade?

Edited June 1, 2021 by qwaven
itimpi Posted June 1, 2021

3 minutes ago, qwaven said:
    It's something it ran on its own; I have not shut down. Is it perhaps a result of the upgrade?

An automatic parity check is only run if Unraid did not successfully stop the array before a shutdown/reboot, or if you have one scheduled at regular intervals and the scheduled time has been reached. One of these events must have happened if Unraid started one without manual action.
qwaven Posted June 1, 2021

Checking the history, it looks like there is something that starts monthly. As I'm not even sure where this is set, it seems like a default setting. Seems I should just cancel it then. Cheers.
qwaven Posted June 1, 2021 Author Share Posted June 1, 2021 ok so after stopping and starting the array I do not see anything changed. Attached new diag. Curious though. Can the drive be formatted or replaced and the data restored? Or am I missing something with this. Cheers diagnostics-20210601-1121.zip Quote Link to comment
JorgeB Posted June 1, 2021

That was the most likely result; still, since it was aborting when creating the free space cache, I had some hope it could help. Newer kernels can detect previously undetected corruption. You can downgrade back to v6.8.x, back up all the data on that disk, then upgrade, format, and restore the data.
qwaven Posted June 1, 2021

5 minutes ago, JorgeB said:
    That was the most likely result; still, since it was aborting when creating the free space cache, I had some hope it could help. Newer kernels can detect previously undetected corruption. You can downgrade back to v6.8.x, back up all the data on that disk, then upgrade, format, and restore the data.

OK, the downgrade seems to have worked. Is there anything special I should do to back up disk 1, or do I just cp -R the appdata and system folders to another drive?
JorgeB Posted June 1, 2021

Any important data there should be copied to another disk (or disks). You can use cp or Midnight Commander, for example; just remember to copy from disk to disk, not disk to share, e.g. from /mnt/disk1 to /mnt/disk2.
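The disk-to-disk copy described above can be sketched as a small shell helper. This is a minimal example, not the one command: /mnt/disk1 and /mnt/disk2 are Unraid's per-disk mount points, and the backup folder name is made up for illustration.

```shell
#!/bin/sh
# Sketch: back up the contents of one data disk into a folder on another,
# disk-to-disk (never disk-to-share, per the advice above).
backup_disk() {
    src="$1"   # e.g. /mnt/disk1
    dst="$2"   # e.g. /mnt/disk2/disk1-backup (hypothetical folder name)
    mkdir -p "$dst"
    # -a preserves permissions, ownership, and timestamps; the /. suffix
    # copies the directory's contents (including dotfiles), not the
    # directory entry itself.
    cp -a "$src"/. "$dst"/
}

# On the server this would be:
#   backup_disk /mnt/disk1 /mnt/disk2/disk1-backup
```

rsync -a from /mnt/disk1/ to the backup folder would do the same job (and is what ended up being used later in this thread); cp -a is just the tool JorgeB names that needs no extra flags.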
qwaven Posted June 1, 2021

So are we thinking there is a drive issue, i.e. will I be replacing this drive? And if so, would it make more sense to do that now, before attempting the upgrade again? Cheers!
JorgeB Posted June 1, 2021

No, it's a filesystem problem. You just need to re-format on v6.9, after all the data is backed up.
qwaven Posted June 1, 2021

OK, I am back at last, upgraded again, and as expected drive 1 is in the same error state. Just wanted to confirm the steps I should be taking. Do I just click the Format button at the bottom beside the disk? Anything special I should be doing? After the format, do I copy the data back the same way I backed it up? Cheers!
JonathanM Posted June 1, 2021

14 minutes ago, qwaven said:
    After the format, do I copy the data back the same way I backed it up?

How / where did you back up the data? Commands used?
qwaven Posted June 2, 2021

15 hours ago, jonathanm said:
    How / where did you back up the data? Commands used?

I copied it to another drive, as instructed earlier. I used rsync.
JorgeB Posted June 2, 2021

If everything is copied, you just use the Format button, then restore the data back.
qwaven Posted June 2, 2021

Hi all,

Thanks for all the help here. I've done the format and transferred everything back. The weird part (they are Docker containers): after I copied everything back and started Docker, 2 of 3 containers were running. I ran the updater to pull the new content, and afterwards all my containers were removed again! I'm not understanding what happened there. I re-copied my backup of the docker.img file; my container data was still on the drive, and again 2 of 3 containers were running. The 3rd container possibly has a compatibility issue and needs more investigation, but that is out of scope for this thread. I've removed the conflicting settings (still need to check that later) and now all 3 containers are running, even after going through an update. No idea what happened the first time around or why it was all removed.

Anyway, it seems like all is working well now. Thanks again! Cheers!