offroadguy56 Posted August 12, 2023

I wasn't able to access my services on my server. I logged in to find that most of my docker containers had stopped, with only a few still running. I attempted to start one of them but was given a 403 error code. A quick search suggested the error was in reference to a full cache pool, but my cache pool still showed plenty of free space. I then restarted the server and was met with my cache drives being unmountable.

The process that may have led to the cache's demise: I had an M.2 drive I was on a time crunch to back up. I had an M.2-to-USB adapter on order but was afraid it would not arrive in time, in which case I would need to use the M.2 slots in the server. To reduce the risk of data corruption if I took the cache drives out, I began transferring data off the cache pool to the array by changing "Use cache pool: prefer" to "no" and invoking the mover. The USB adapter did arrive in time and I backed up my M.2 to the array. I left the mover running until it finished moving my various shares' files to the array.

I then set my shares back to "prefer" and noticed that some files had a duplicate stored on both the array and the cache: specifically two appdata folders, the docker container image, and my VM image. According to Unraid they were stored both on Disk4 (or Disk6) and Cache. I invoked mover again and the duplicates didn't disappear. I restarted, invoked mover once more, and the duplicates still remained. Some time later I hit the 403 error, and after a restart the cache pool is now unmountable.

Looking for assistance in troubleshooting the issue. I have had cache problems before due to my incompetence; I had removed the cache pool, put it back in, and gave it the wrong file system. I have 2 cache drives, 1 TB each, set up in RAID 0 equivalent. Most of their data should be duplicated on the array if Unraid's Shares tab is correct.
Appdata is nowhere on the array, but I have an older backup on my personal computer. waffle-diagnostics-20230812-1700.zip
itimpi Posted August 13, 2023

8 hours ago, offroadguy56 said:
changing "Use cache pool: prefer" to "no" and invoking the mover.

This would not result in mover doing anything, as the "No" setting makes mover ignore the share. The setting to move from pool to array is "Yes". If in doubt, use the Help built into the GUI to see what the correct setting would be for what you want to do.

The other point is that mover will never overwrite duplicates, as in normal operation they should not occur. It is up to you to decide which copy to keep and manually delete the other.
JorgeB Posted August 13, 2023 (Solution)

If the log tree is the only problem, this may help:

btrfs rescue zero-log /dev/nvme0n1p1

Then restart the array.
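For anyone following along, the fix above can be wrapped with a guard and a read-only inspection first. This is a sketch, not an official procedure: it assumes btrfs-progs is installed, the array is stopped, and /dev/nvme0n1p1 (the device from this thread) is your pool device; substitute your own. zero-log discards the btrfs log tree, so at most the last few seconds of writes are lost.

```shell
# Guarded sketch of the zero-log fix; run only with the array stopped.
zero_log() {
    dev="$1"
    if [ ! -b "$dev" ]; then
        echo "refusing: $dev is not a block device" >&2
        return 1
    fi
    # Non-destructive look first; proceed only if the log tree is the
    # sole thing reported as damaged.
    btrfs check --readonly "$dev"
    btrfs rescue zero-log "$dev"
}
# Example: zero_log /dev/nvme0n1p1
```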
offroadguy56 (Author) Posted August 13, 2023

12 hours ago, JorgeB said:
If the log tree is the only problem, this may help: btrfs rescue zero-log /dev/nvme0n1p1 Then restart the array.

Holy smokes, looks like that worked. So glad I didn't screw up the file system trying to fix it myself like I did last time, some months ago. Last time I took my cache drive out and accidentally set it back up as btrfs instead of its original xfs.

16 hours ago, itimpi said:
This would not result in mover doing anything, as the "No" setting makes mover ignore the share. The setting to move from pool to array is "Yes".

And you are absolutely correct; my mistake typing the original post. I did set the shares to "Yes". I remember anxiously watching the GBs tick by as the pool emptied. I will clean up the duplicates.

On a side note, do either of you have recommendations for automatically backing up the cache pool, as it is not part of the array? Thanks very much for the assistance, both of you!
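Cleaning up the duplicates mentioned above can be done by hand, but listing them first helps. A minimal sketch, assuming Unraid's usual /mnt/cache and /mnt/diskN mount points (adjust paths to your system); since mover never overwrites an existing file, anything this prints must be resolved manually by keeping one copy and deleting the other:

```shell
# List relative paths that exist under both a pool and an array disk.
find_dupes() {
    pool="$1"; disk="$2"
    a=$(mktemp); b=$(mktemp)
    (cd "$pool" && find . -type f | sort) > "$a"
    (cd "$disk" && find . -type f | sort) > "$b"
    comm -12 "$a" "$b"   # lines common to both listings = duplicate paths
    rm -f "$a" "$b"
}
# Example: find_dupes /mnt/cache /mnt/disk4
```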
itimpi Posted August 14, 2023

1 hour ago, offroadguy56 said:
On a side note, do either of you have recommendations for automatically backing up the cache pool, as it is not part of the array?

That depends on what is on the cache pool that needs backing up. There are plugins for backing up appdata and VM vdisks at specified intervals.
offroadguy56 (Author) Posted August 14, 2023

18 hours ago, itimpi said:
There are plugins for backing up appdata and VM vdisks at specified intervals.

That's basically what is on it: VM vdisks, appdata, docker image, and system folder. Right now I'd like to just back up appdata and the VM vdisks, and if possible the docker image and system folder. Free space available on the cache is about 200-300 GB, so it performs its cache duties for the most part. But I needed the RAID 0 setup because I wanted the fastest and biggest (affordable) storage I could offer for the software in my VM, which I am already considering increasing again depending on how my data hoarding goes.
JonathanM Posted August 14, 2023

2 hours ago, offroadguy56 said:
VM vdisks, appdata, docker image, and system folder.

While it's definitely possible to back up the vdisks, it requires shutting down the VM fully to get an accurate backup, which is not ideal. It's much better to use a backup utility inside the VM to back up to a location on the array, just as you would with a standard hardware-based PC. I use UrBackup; it's been a lifesaver.

Appdata has its own backup application in the app store; it works well with some attention and tuning on first deployment.

The docker image should NOT need to be backed up; the whole point is that the appdata folders contain all the customization and content that isn't written to the array shares. The only exception currently is custom networks, so as long as you keep notes on any custom networks created and redo them before restoring your applications from previous apps in the app store, the docker image rebuilds itself in a matter of minutes. If you accidentally have a container writing settings and data INSIDE the docker image, you need to fix that, as it will create other issues besides restoring data in the event you need to recreate the docker image file.

The system folder contains the VM definitions as well, but the appdata backup app has provisions to save those.
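The note about custom networks can be turned into a small habit: dump the non-default docker network list to the array so the networks can be recreated after the docker image is rebuilt. A rough sketch; the filter is split out so it can be exercised without a docker daemon, and the output path is only an example:

```shell
# Keep only user-created networks; bridge/host/none are recreated
# automatically with a fresh docker image.
filter_custom_networks() {
    grep -vE '^(bridge|host|none) ' || true
}

# Needs a running docker daemon; prints one "name driver" pair per line.
list_custom_networks() {
    docker network ls --format '{{.Name}} {{.Driver}}' | filter_custom_networks
}
# Example: list_custom_networks > /mnt/user/backups/docker-networks.txt
```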
offroadguy56 (Author) Posted August 14, 2023

19 minutes ago, JonathanM said:
While definitely possible to backup the vdisks, it requires shutting down the VM fully to get an accurate backup ...

I'll put this information to use. Hopefully I can prevent future screw-ups on my part. Thanks again for the assistance. Y'all are great!
offroadguy56 (Author) Posted August 14, 2023

Well, the cache went unmountable again. The zero-log rescue command did bring it back again, but since the first time it's only been working for less than a day. Any idea how to fix this? Or should I back up my data, wipe the cache pool, and start fresh? It shouldn't be because of a full drive; it has 800 GB free. I do have Disk4 holding a duplicate of my VM vdisk, and that drive has only 80 GB free; I wonder if that is screwing up the cache pool.
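For readers hitting the same recurring unmountability: once the pool is brought back with zero-log, a couple of non-destructive btrfs commands can show whether errors are still accumulating. A hedged sketch, assuming btrfs-progs and Unraid's usual /mnt/cache mount point (substitute your own); both commands need the filesystem mounted:

```shell
# Check whether a btrfs pool is accumulating errors.
pool_health() {
    mnt="$1"
    if [ ! -d "$mnt" ]; then
        echo "no such mount point: $mnt" >&2
        return 1
    fi
    btrfs dev stats "$mnt"        # per-device read/write/corruption counters
    btrfs scrub start -B "$mnt"   # -B waits for completion; verifies checksums
}
# Example: pool_health /mnt/cache
```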
JorgeB Posted August 15, 2023

9 hours ago, offroadguy56 said:
Well, the cache went unmountable again. The zero-log rescue command did bring it back again.

This has been known to happen, i.e., the problem quickly re-occurring. In this case I recommend backing up and re-formatting the pool.