x3rt Posted July 21, 2023 (edited)

Hello, I'm running Unraid v6.12.3 with valid parity.
Intel Xeon E5-2470 v2 @ 2.40GHz | 10 cores / 20 threads
48 GiB DDR3 multi-bit ECC

I set up an NVMe drive yesterday to use as a cache pool. It has since filled up with 612 GB of Windows system backups. I had the mover scheduled for 4:30 am and it is now nearly 11 am. The cache is still at 612 GB, and looking at my array activity, there is less than a MB of activity every 3-5 seconds. I even tried Turbo Write, with absolutely no difference. I'm considering abandoning the cache entirely, but I would still need mover to actually move the files at least this once. I have taken down all Docker containers so the files aren't in use. At this rate the move would take over two months. htop shows barely any CPU usage. A normal file transfer to my array runs at about 150 MB/s, but less than a MB every few seconds with mover is unusable. I'm at a loss.

Also, I've been waiting 20 minutes for the diagnostics to generate; I'll attach them once they're done.
x3rt Posted July 21, 2023 (edited)

Also, an update on the diagnostics: it has been generating for 30 minutes and is starting to slow down my entire browser. I'm not sure what to do about that, or whether I can safely close that browser window until it is done. The diagnostics browser window is currently using 5 GB of RAM and 50% of my CPU.

Update: the browser ran out of memory after generating diagnostics for 50 minutes.
x3rt Posted July 21, 2023 (edited)

I legitimately cannot generate diagnostic logs, as it is trying to `sed` every single file in the backup. Using the CLI command does nothing; /boot/logs remains empty:

root@Mercury:~# diagnostics
Starting diagnostics collection...

I can try to manually zip up what it generated previously via the WebGUI, up until my browser window crashed.
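For anyone stuck in the same spot, a minimal sketch of packaging a partial diagnostics folder by hand so it can be attached to a post. The example paths in the comment are placeholders, not the real Unraid locations; point it at wherever the interrupted run left its output:

```shell
#!/bin/sh
# pack_dir DIR OUT -- tar+gzip the contents of DIR into OUT, so a
# partially generated diagnostics folder can be attached to a forum post.
pack_dir() {
  dir="$1"
  out="$2"
  tar -czf "$out" -C "$dir" .
}

# Example invocation (paths are assumptions for illustration):
#   pack_dir /var/local/partial-diag /boot/partial-diag.tar.gz
```

Note that a hand-rolled archive like this skips the anonymization step the diagnostics tool normally performs, so it is worth checking the contents before uploading.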
x3rt Posted July 21, 2023 (edited)

I managed to upload the in-progress diagnostics that failed. I'm not sure how complete or incomplete they are, or whether they're even anonymized properly.

mercury-diagnostics-20230721-1039-self (1).zip
x3rt Posted July 22, 2023

Not sure what else I can provide, really.
itimpi Posted July 22, 2023 (Solution)

Mover is very slow when moving small files, as there is significant overhead from the checks done before moving each file. I'm not sure how big the files are that mover is operating on, although the syslog seems to suggest it IS still doing something.
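A rough way to check whether file count (rather than raw size) is the bottleneck is to tally what is sitting on the pool. A minimal sketch, assuming the pool is mounted at the usual /mnt/cache path:

```shell
#!/bin/sh
# summarize DIR -- print the file count and total size under a directory,
# to estimate how much per-file overhead mover will incur.
summarize() {
  dir="$1"
  printf 'Files: %s\n' "$(find "$dir" -type f | wc -l | tr -d ' ')"
  printf 'Size:  %s\n' "$(du -sh "$dir" | cut -f1)"
}

# On Unraid the cache pool is normally mounted at /mnt/cache:
#   summarize /mnt/cache
```

Hundreds of thousands of small files with a modest total size would point to per-file overhead rather than raw throughput as the limiting factor.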
x3rt Posted July 22, 2023

Alright, guess I won't cache backups in the future. It's 65% done after 26 hours.

I'm still not clear on the 6.12 caching options. For example, array->cache seems scary and unsafe and should never be done? Would that data be stuck on the cache? Is it always unprotected? Why would you do such a thing? Sorry if I'm being dumb.
itimpi Posted July 22, 2023

31 minutes ago, x3rt said:
For example array->cache seems scary and unsafe and should never be done? Would that data be stuck on the cache? Is it always unprotected? Why would you do such a thing?

The 6.12 options are no different from earlier releases; different terminology is now being used, which is intended to be clearer for new users and to position things for new features in future releases. The data IS stuck on the pool with array->cache. That is normally done for performance reasons with Docker containers and/or VMs, as pools have much higher write speeds than disks in the main array. The data is also stuck on the pool if no secondary storage option is set, but then you can activate "Exclusive Mode", which gives even better performance when writing to User Shares as it bypasses the FUSE layer normally involved in handling them. You can then get redundancy/protection by making the pool multi-drive. You can also use plugins such as Appdata Backup to take periodic backups of the shares concerned to a nominated location (typically somewhere on the main array). Many people use both approaches.