Ustrombase Posted March 12, 2022

I added an NVMe drive and created a new pool specifically for VMs and Docker. After creating the new pool I took the following steps:

1. Turned off Docker.
2. Turned off the VM manager.
3. Opened a terminal.
4. Ran a mv command for system, appdata, and domains.

The results were fine except for appdata: I got some read-only file system errors that prevented me from deleting some files, but it seemed all files were copied over and most were removed. Before I turned the array back on, I made sure to change the cache setting in the share settings for system, appdata, and domains to use the new pool I created.

After turning the array back on, I went to Shares and computed the size of the appdata share, and now appdata is present in full size on both the cache pool and the new pool. How did this happen? Mover didn't run, and I don't know how everything got populated again. So how do I properly make sure everything is served from the NVMe pool and no appdata folder remains in the cache pool? I have two obstacles: some files show as read-only when I use mv or rm -R in the terminal as root, and when I turn the array back on my appdata folder somehow fills up again.
Ustrombase Posted March 12, 2022

So I am trying this and am having issues with the last step:

"If the share in question should only have files present on the new pool, then change the share's settings to use cache: prefer and re-run mover"

Mover will not run when the share is set to "prefer".
Ustrombase Posted March 12, 2022

I just enabled logging for mover and tried running it again, but the logs show no issues, just that mover started and finished. The problem is that since I moved all the appdata files manually from the cache pool to the new pool, there are now files in both pools. I was expecting these mover steps to somehow reconcile the duplicates and end up with files ONLY in the new pool.

Adding diagnostics for reference: edith-diagnostics-20220312-0746.zip
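With mover logging enabled, mover writes its activity to the syslog, so grepping it is a quick way to see whether it did anything beyond starting and finishing. A minimal sketch, simulated here with a small sample log so it is self-contained; the log line format is illustrative, not the exact Unraid output, and on the real server you would grep /var/log/syslog instead:

```shell
# Simulate a syslog fragment containing mover entries; on the server,
# replace "$LOG" with /var/log/syslog.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
Mar 12 07:40:01 EDITH root: mover: started
Mar 12 07:40:02 EDITH root: mover: finished
EOF

# Show every mover-related line; no lines between "started" and
# "finished" means mover found nothing it was willing to move.
grep -i mover "$LOG"
```

If mover logs nothing but start/finish while duplicates exist in both pools, that matches the behavior described above.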
bonienl Posted March 12, 2022

If you are running Unraid 6.10-rc3, you can use the Dynamix File Manager plugin to move and delete files on the array from the GUI.
Ustrombase Posted March 12, 2022

Yea @bonienl, unfortunately I'm not running that. Still on 6.9.2 because it's my production server for my home stuff; I don't like messing with that.
trurl Posted March 12, 2022

9 hours ago, Ustrombase said:
"my appdata share is present in full size on both the cache pool and my new pool. How did this happen?"

Do any of your containers specify /mnt/cache/appdata instead of /mnt/user/appdata?
Ustrombase Posted March 12, 2022

@bonienl if I use Midnight Commander, should I delete one of the two appdata folders? If so, which one? The current setup is at step 5, so the appdata share has "prefer" set for the new pool. When I ran mover it didn't work, which seems expected given the new pool already has an appdata folder. I feel my options right now are:

1. Retry the steps given by @Squid, which would entail:
   1. Set the appdata share to "yes" with cache pointing to the old pool.
   2. Rerun mover.
   3. Erase appdata on the new pool.
   4. Set cache for appdata to "prefer", pointing to the new pool.
   5. Run mover again.
   Note: the difference from last time is that, because I would first delete the appdata folder on the new pool, mover should work.

2. Delete appdata from the old cache pool, since the new pool seems to have all the data, judging purely by the size of the appdata folder on the new pool vs. the old one. The issue is that last time this didn't prove fruitful: using the mv command I got read-only file errors.

Does Midnight Commander do any "better" at moving? I know I sound stupid asking, since it probably runs the same operations under the hood, but I'd rather ask than assume.
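Before deleting either copy under option 2, it may be worth verifying that the two trees actually match rather than judging by total size alone. A minimal sketch using diff, simulated here in scratch directories so it is self-contained; on the server the two trees would be /mnt/cache/appdata and /mnt/appdata_vms/appdata:

```shell
# Scratch dirs stand in for the two appdata copies:
#   OLD ~ /mnt/cache/appdata    NEW ~ /mnt/appdata_vms/appdata
OLD=$(mktemp -d)
NEW=$(mktemp -d)
mkdir -p "$OLD/someapp" "$NEW/someapp"
echo "same contents" > "$OLD/someapp/settings.conf"
echo "same contents" > "$NEW/someapp/settings.conf"

# diff -rq recurses and reports only which files differ or are missing;
# no output and exit status 0 means the trees are identical.
if diff -rq "$OLD" "$NEW" > /dev/null; then
  echo "trees match"
else
  echo "trees differ"
fi
```

Only once diff reports no differences would deleting the old copy be safe; any files diff flags are ones the earlier mv failed to carry over.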
Ustrombase Posted March 12, 2022

4 minutes ago, trurl said:
"Do any of your containers specify /mnt/cache/appdata instead of /mnt/user/appdata?"

I can't double-check right now, but I did look at this last night and didn't see any reference to the cache itself, at least for the containers set to start automatically. Krusader might reference it, but it isn't set to auto-start.
trurl Posted March 12, 2022

15 hours ago, Ustrombase said:
"4. started a mv command for system, appdata, and domains"

Can you give details, source and destination paths?
Ustrombase Posted March 12, 2022

@trurl for sure! Sorry I left it out; I didn't know it would be helpful. When I ran the mv command, it was from the default pool, /mnt/cache/, to the new pool, /mnt/appdata_vms/. I moved system, appdata, and domains. For appdata, mv did not remove some files after the copy (mv is equivalent to a copy + remove); the files that could not be removed gave me "rm: cannot remove 'X': Read-only file system". Is this helpful?
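A minimal sketch of what that cross-pool mv amounts to, simulated here in scratch directories so it is self-contained (on the server the real paths were /mnt/cache and /mnt/appdata_vms). Since mv across filesystems is a copy followed by an unlink, a source filesystem that drops to read-only lets the copy succeed but makes the unlink fail, which matches the errors above:

```shell
# Scratch dirs stand in for the real mounts:
#   SRC ~ /mnt/cache        DST ~ /mnt/appdata_vms
SRC=$(mktemp -d)
DST=$(mktemp -d)
mkdir -p "$SRC/appdata/someapp"
echo "config" > "$SRC/appdata/someapp/settings.conf"

# mv = copy to destination, then remove from source; if the source
# filesystem is read-only, this step produces
# "cannot remove '...': Read-only file system" while the copy survives.
mv "$SRC/appdata" "$DST/"

# The data now lives only on the destination.
ls "$DST/appdata/someapp"
```

That failure mode would leave complete copies on the destination while stranding unremovable leftovers on the source, which is consistent with appdata showing up full-size in both pools.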
trurl Posted March 12, 2022

19 minutes ago, Ustrombase said:
"Read-only file system"

The source went read-only, which can happen when corruption is detected:

Mar 11 23:14:39 EDITH kernel: BTRFS: error (device dm-3) in __btrfs_free_extent:3092: errno=-2 No such entry
Mar 11 23:14:39 EDITH kernel: BTRFS info (device dm-3): forced readonly
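One way to confirm that a filesystem has been forced read-only is to check its current mount options. A minimal sketch reading /proc/mounts, using "/" as a stand-in mount point so the example runs anywhere; on the server you would check the pool's mount point (e.g. /mnt/cache), and `btrfs device stats /mnt/cache` would additionally show the accumulated error counters (assuming the pool is mounted there):

```shell
# Look up the mount options for a given mount point in /proc/mounts.
# "/" is a stand-in; on the server, use the pool's mount point instead.
MOUNTPOINT="/"
OPTS=$(awk -v m="$MOUNTPOINT" '$2 == m { print $4 }' /proc/mounts | head -n1)
echo "options for $MOUNTPOINT: $OPTS"

# The first mount option is "rw" or "ro"; "ro" here after the pool was
# mounted read-write means the kernel forced it read-only.
case ",$OPTS," in
  *,ro,*) echo "$MOUNTPOINT is mounted read-only" ;;
  *)      echo "$MOUNTPOINT is mounted read-write" ;;
esac
```

A btrfs volume that flips to read-only mid-operation explains why some removals succeeded and later ones failed: everything deleted before the "forced readonly" event went through, everything after it did not.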
Ustrombase Posted March 12, 2022

So does this mean my SSD has a corrupted file system? Do you happen to know why it would only affect part of the appdata folder? And do you know how I can fix it?
Ustrombase Posted March 12, 2022

@trurl looking at this article, it seems this could be a byproduct of a RAM issue. I had a RAM failure in January that took me a month to fix, purely because of the delivery time for the replacement; could it be that while the RAM was bad it corrupted my cache? Would you concur that's a possible scenario?
trurl Posted March 15, 2022

On 3/12/2022 at 6:53 PM, Ustrombase said:
"RAM in January that took me a month to fix"

You shouldn't even attempt to run a computer with bad RAM.

On 3/12/2022 at 6:53 PM, Ustrombase said:
"RAM had an issue it corrupted my cache? Would you concur as a possible situation?"

Of course. Everything goes through RAM: your data, all executable code, everything.
Ustrombase Posted March 15, 2022

Yea, while I wouldn't call myself a noob, I'm no expert, and maybe this is such a low-level thing I should have known, but in either case I didn't. I also don't think I explained it well: my issue presented itself as Docker containers being turned off, which at first glance led me to think it was a docker image problem. Then, with the community's help, it was diagnosed as a RAM issue, at which point I immediately shut down the server and fixed it. It just never occurred to me that bad RAM would corrupt data, which now makes total sense.

Anyway, I have since formatted my cache pool and everything is good now. Thanks for the help!