GlennCottam Posted December 14, 2020
Last night I added 2 more disks that were brand new, letting the server take care of the pre-install by having it clear the drives itself. During the process I started noticing that the dockers were having issues: some went down, others stayed up. Thinking it was something to do with the disks, I left it overnight while the disks cleared. Just now, looking at the dockers, most are broken, with either corrupt or missing configuration files. Some have completely reset themselves, as if I had just installed them. Plex was working last night, but now has a corrupt database. Some dockers give me a disk I/O error when attempting to access files or restore databases in my appdata directory. The disks I installed were 2x 4TB WD Reds, plus a secondary cache SSD. I do have a backup of all the dockers, but it is from last week, so I would lose a week's worth of data. I am wondering why this happened, whether I can get the dockers back, and how I can prevent this in the future.
trurl Posted December 14, 2020
If possible before rebooting, and preferably with the array started, go to Tools - Diagnostics and attach the complete diagnostics ZIP file to your next post in this thread.
GlennCottam Posted December 14, 2020
Thank you for your reply, here it is! unraid-diagnostics-20201214-1110.zip
GlennCottam Posted December 14, 2020
After poking around at some of the dockers, I have noticed that some of the I/O errors I have been getting are related to "no space left". I attempted to restore the Plex database from a recent backup, and I received the error "No space left on device" in the terminal. I believe many dockers are getting I/O errors because of this.
JorgeB Posted December 14, 2020
The cache is completely full. It is also using dual data profiles; you should have received notifications about that (if enabled). You need to free up some space and balance it.
GlennCottam Posted December 14, 2020
At the moment my cache SSD shows over 300GB free, but I will invoke the mover and see if that helps. In regards to the dual data profiles, I am not sure what you are referring to. Is there a way to remove one?
itimpi Posted December 14, 2020
17 minutes ago, GlennCottam said:
"At the moment my cache SSD shows over 300GB free, but I will invoke the mover and see if that helps. In regards to the dual data profiles, I am not sure what you are referring to. Is there a way to remove one?"
With a BTRFS-formatted cache it is possible for the file system to become fully allocated even when not all of the free space is actually used. This is normally rectified by running a balance operation, which compacts partially filled chunks and returns the reclaimed space to the unallocated pool.
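For anyone doing this from the terminal instead of the GUI, the allocation state can be inspected and compacted with the standard btrfs tools. A sketch, assuming the pool is mounted at /mnt/cache (Unraid's default):

```shell
# Show chunk allocation vs. actual usage. "Device allocated" close to
# "Device size" with little "Free (estimated)" means the filesystem is
# fully allocated even though files don't fill it.
btrfs filesystem usage /mnt/cache

# Compact data chunks that are less than 75% full, returning the
# reclaimed space to the unallocated pool.
btrfs balance start -dusage=75 /mnt/cache

# Check on a running balance.
btrfs balance status /mnt/cache
```

The `-dusage=75` filter keeps the balance fast by skipping chunks that are already mostly full; the threshold is a common starting point, not a required value.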
GlennCottam Posted December 14, 2020
After running the balance on the SSD cache, I am still unable to perform the actions and still get the error message. Checking the balance status afterwards, it says "No balance found on /mnt/cache", as shown in the picture I have attached. I also tried invoking the mover before balancing.
trurl Posted December 14, 2020
2 hours ago, GlennCottam said:
"and a secondary cache ssd"
Unless you intentionally made the cache pool something other than the default raid1, you get a mirror with total capacity equal to the smaller of the disks.
GlennCottam Posted December 14, 2020
1 minute ago, trurl said:
"Unless you intentionally made the cache pool something other than the default raid1, you get a mirror with total capacity equal to the smaller of the disks."
Ok, this makes more sense. If I convert it to raid0, it should combine the total storage of both SSDs, correct? I presumed the SSD cache would be combined by default, rather than raid1. If I do convert, will it erase both SSDs?
trurl Posted December 14, 2020
Just now, GlennCottam said:
"If I convert it to raid0, it should combine the total storage of both SSDs, correct?"
If the disks are different sizes, then single mode is the only mode that will give the combined total storage. As for how exactly to get from where you are now to where you need to be without data loss, I will defer to @JorgeB
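To make the capacity difference concrete, a small bash sketch (the 500GB / 250GB sizes are made-up examples, not taken from this thread):

```shell
#!/bin/bash
# btrfs raid1 mirrors every block across two devices, so usable space
# is roughly limited by the smaller device; single mode concatenates
# the devices instead.
ssd_a=500   # hypothetical size of first SSD in GB
ssd_b=250   # hypothetical size of second SSD in GB

raid1_usable=$(( ssd_a < ssd_b ? ssd_a : ssd_b ))
single_usable=$(( ssd_a + ssd_b ))

echo "raid1:  ${raid1_usable} GB usable"    # 250 GB
echo "single: ${single_usable} GB usable"   # 750 GB
```

With equal-sized devices, raid0 would also give the combined total, but single is the profile that handles mismatched sizes without wasting space.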
GlennCottam Posted December 14, 2020
2 minutes ago, trurl said:
"If the disks are different sizes, then single mode is the only mode that will give the combined total storage. As for how exactly to get from where you are now to where you need to be without data loss, I will defer to @JorgeB"
Right, I forgot that raid0 requires both disks to be the same size. Would simply going to each share, setting it to not use the cache, and then invoking the mover move all the files from the cache to the array?
JorgeB Posted December 14, 2020
You should free up some space first, then convert to single, or the balance might fail.
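Under the hood, the conversion itself is just a btrfs balance with convert filters. A sketch, assuming the pool is mounted at /mnt/cache; Unraid's GUI runs the equivalent when you change the pool's balance settings, and the metadata profile it chooses may differ:

```shell
# Convert the data profile from raid1 to single so the two devices'
# capacities are combined. Metadata is kept mirrored (raid1) here,
# which is a common choice for multi-device single pools.
btrfs balance start -dconvert=single -mconvert=raid1 /mnt/cache
```

The conversion is done in place and does not erase the devices, but it rewrites every chunk, so it needs free space to work with; hence the advice to free up space first.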
trurl Posted December 14, 2020
Mover can't move open files, so you would have to go to Settings and disable Docker and VM Manager.
GlennCottam Posted December 14, 2020
3 minutes ago, trurl said:
"Mover can't move open files, so you would have to go to Settings and disable Docker and VM Manager."
Ok, so all I need to do is disable Docker and VM Manager, then invoke the mover? I do not want to do the wrong thing in case of data loss.
trurl Posted December 14, 2020
2 minutes ago, GlennCottam said:
"disable docker and VM manager"
12 minutes ago, GlennCottam said:
"going to each share, telling it to not use the cache"
Mover ignores cache-no and cache-only shares. To get files moved from cache to array, the share must be set to cache-yes.
GlennCottam Posted December 14, 2020
5 minutes ago, trurl said:
"Mover ignores cache-no and cache-only shares. To get files moved from cache to array, the share must be set to cache-yes."
Thank you, I have done this and am now waiting for the files to move after invoking the mover. Unless I am mistaken, after the cache is empty I convert the cache to single mode and then start the dockers back up.
trurl Posted December 14, 2020
10 minutes ago, GlennCottam said:
"I convert the cache to single mode, and then start the dockers back up."
You want to move the things that belong on cache back to the cache before starting the dockers back up. Usually the appdata, domains, and system shares belong on cache. You will have to set them to cache-prefer and run the mover to get them moved.
GlennCottam Posted December 14, 2020
Just now, trurl said:
"You want to move the things that belong on cache back to the cache before starting the dockers back up. Usually the appdata, domains, and system shares belong on cache. You will have to set them to cache-prefer and run the mover to get them moved."
Ok, perfect! Thank you all for the help! I will let you know in a little bit how it goes.
GlennCottam Posted December 14, 2020
The mover emptied the cache, I converted to single mode, and then ran the balance. Unraid now shows the SSDs in single mode, which I believe is what we wanted; however, it is still showing "No balance found on /mnt/cache". Could this be a problem?
ChatNoir Posted December 14, 2020
I think that line refers to the status of the balance operation. If the balance is finished, then it is normal for no balance to be ongoing.
GlennCottam Posted December 14, 2020
I wanted to thank everyone for their help; the issue seems to be resolved. I'm assuming the problem was simply that the cache SSD was not ready to receive a second device. Some dockers now have corrupted config files (I had to roll back a Plex database), and some are going to take more time than others to get back. Now that I know what can happen, I have additional steps to take to make sure this doesn't happen again (stopping the Docker service, creating a backup, copying all files off the SSD, etc.). With all this said, I am assuming the issue was that the SSD was full and the dockers couldn't write new data to already-existing files? If that is true, it would be smart for an end user to add new disks and invoke the mover well before the cache fills up. Also, if my assumption is correct, shouldn't there be some sort of balancer to move files between the HDDs themselves? My friend had similar issues when he installed a new disk after letting his array fill up completely, and he presumed it couldn't be the cache since he does not have one. Again, thank you everyone for the help!
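On the prevention point: besides Unraid's built-in notifications, a fill-level check is easy to script. A sketch with a hypothetical `should_alert` helper (the example numbers are made up; in practice you would feed it values parsed from `df` or `btrfs filesystem usage` for /mnt/cache):

```shell
#!/bin/bash
# should_alert <used_gb> <total_gb> <threshold_percent>
# Succeeds (exit 0) when used space has crossed the threshold.
should_alert() {
    local used=$1 total=$2 threshold=$3
    local pct=$(( used * 100 / total ))
    [ "$pct" -ge "$threshold" ]
}

# Example: warn when a 250GB cache with 210GB used passes 80% full.
if should_alert 210 250 80; then
    echo "cache is getting full - free up space or run the mover"
fi
```

Run from cron or the User Scripts plugin, a check like this gives warning well before the "No space left on device" errors that broke the dockers here.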